Sunday, December 23, 2012

The prospects for economics? Don't bank on them

Perhaps I am simply trying to discharge all my rants before Christmas. But I did get a bit peeved at the rather mindless way in which the Queen’s ‘briefing’ on the financial crisis during her visit to the Bank of England was reported by all and sundry – which triggered this blog piece for Prospect. The comment from Peter Whipp is priceless. I’m not sure even Milton Friedman would have gone quite that far.

Thursday, December 20, 2012

Maths of the pop-up tent

Here’s my latest news story for Nature. This image shows a wood sculpture made by the paper’s authors to illustrate their principles. It’s worth seeing the video of the folding process on the Nature site too – this is one of those problems that is much more easily expressed in images than in words.

____________________________________________________

Ever wrestled with a pop-up tent, trying to fold it back up to fit in the bag? Help is at hand, in the form of a mathematical theory to describe the shapes adopted by the kinds of stressed flexible rings from which these tents are made [1]. As a result, says team leader Alain Jonas of the Catholic University of Louvain in Belgium, “we have found the best way to fold rings”.

Rings buckled into three-dimensional shapes crop up in many everyday contexts. They are used not just for pop-up tents but to make laundry baskets, small soccer goals, and some wood and creased-origami sculptures, as well as appearing inadvertently in bicycle wheels with too-tight spokes.

Jonas and his collaborators also report microscopic versions made from rings less than a millimetre across carved by electron beams out of thin double-layer films of aluminium and silicon nitride. Because the packing of atoms in the two materials doesn’t match, the films become strained when stuck back to back, inducing buckling.

In all these cases, the shapes adopted by the rings look rather similar. Typically, two opposite edges might buckle upwards, producing a kind of saddle shape. In pop-up tents, these buckles can be bent further by hand to fold the single large ring into a coil of smaller rings.

The researchers show that all these shapes can be predicted accurately with a theory that invokes a single key mathematical concept: what they call ‘overcurvature’, which is the amount by which a circular ring is made more curved than a perfect circle. For a folded, coiled pop-up tent, for example, the final coils have more total curvature than the unfolded single ring would have.

Equivalently, one can introduce overcurvature by adding segments of arc to a circle. The researchers do this experimentally by cutting out coils of a Slinky spring and joining them together in a single ring. This allows them to study the shapes that overcurvature produces, and to compare them with their mathematical theory describing the stresses that appear in such a ring and make it buckle. They can figure out, say, how many arcs need to be joined together to guarantee buckling into a specific shape – just the thing that a bent-wood sculptor might like to know.
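Loosely speaking – and this is a back-of-envelope sketch of my own, not the authors’ formalism – the overcurvature of such a ring is just its total turning angle divided by the 2π of a plain circle. A joined-up Slinky ring made of n arcs, each subtending an angle θ, has overcurvature nθ/2π:

```python
import math

def overcurvature(n_arcs, arc_angle_rad):
    """Overcurvature of a closed ring built from n_arcs circular arcs,
    each subtending arc_angle_rad radians: total turning angle / 2*pi.
    A value of 1 is an ordinary circle; values above 1 mean excess
    curvature that the ring can only accommodate by buckling in 3D."""
    return n_arcs * arc_angle_rad / (2 * math.pi)

# A plain circle (one full turn) has overcurvature 1:
print(overcurvature(1, 2 * math.pi))   # 1.0
# A pop-up tent folded into three coils packs three full turns
# of curvature into one loop, i.e. overcurvature 3:
print(overcurvature(3, 2 * math.pi))   # 3.0
```

So a sculptor joining, say, twelve quarter-circle arcs into one ring would get overcurvature 12 × (π/2)/2π = 3 – on this rough picture, enough excess curvature to coil into three loops.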

“They find a universal family of shapes that can be produced in frustrated rings”, explains Basile Audoly of the Institut de Mécanique d’Alembert in Paris. “This is why the folded tent looks like the Slinky and the creased origami.”

The results can be used to work out the easiest folding pathways to collapse a single overcurved ring into a small coil – the problem of folding a pop-up tent. “It’s not trivial to find this pathway empirically”, says Jonas. “You naturally start by deforming the ring in two lobes, since this is easiest. But then you have to deform the ring further into shapes that require a lot of energy.”

In contrast, he says, “if you take the pathway we propose, you have to use more energy at the start, but then have to cross lower energy barriers to reach the energy valley of the ring coiled in three” – you don’t get trapped by following the path of initial least resistance. The researchers provide a detailed route for how best to reach the three-ring compact form.

They also show that such a ring can be made even more compact, folded into five rings instead of three. “This is more difficult, because the energy barriers are higher”, Jonas admits, saying that for a tent it would be best to have three people on the job. He sees no reason why this shouldn’t work for real tents, provided that the pole material is flexible and strong enough.

Jonas thinks that the results might also apply on the molecular scale to the shapes of some relatively stiff molecular rings, such as small, circular bacterial chromosomes called plasmids. Their shapes look similar to those predicted for some ring-shaped polymers squashed into spherical shells [2].

“There is a lot of interest currently in this kind of fundamental mechanical problem”, says Audoly, who points out that rather similar and related findings have been reported by several others [3-7]. For example, he says, the same question has been related to the buckled fringes at the edges of plant leaves, where tissue growth can outstrip the overall growth rate of the leaf to create excess ‘edge’ that must become folded and rippled as a result [3,4]. However, Jonas says that, compared to earlier work on such problems, finding that just the single parameter of overcurvature will describe the mechanical problem “has the virtue of allowing us to find general laws and provide easy-to-use designing tools.”

References
1. Mouthuy, P.-O., Coulombier, M., Pardoen, T., Raskin, J.-P. & Jonas, A. M. Nat. Commun. 3, 1290 (2012).
2. Ostermeir, K., Alim, K. & Frey, E. Phys. Rev. E 81, 061802 (2010).
3. Sharon, E., Roman, B., Marder, M., Shin, G.-S. & Swinney, H. L. Nature 419, 579 (2002).
4. Marder, M., Sharon, E., Smith, S. & Roman, B. Europhys. Lett. 62, 498-504 (2003).
5. Moulton, D. E., Lessinnes, T. & Goriely, A. J. Mech. Phys. Solids doi:10.1016/j.jmps.2012.09.017 (2012).
6. Audoly, B. & Boudaoud, A. Comptes Rendus Mecanique 330, 831-836 (2002).
7. Dias, M. A., Dudte, L. H., Mahadevan, L. & Santangelo, C. D. Phys. Rev. Lett. 109, 114301 (2012).

Tuesday, December 18, 2012

The problem with opera

I have been enjoying David Moser’s classic rant about the difficulties of learning Chinese, to which Alan Mackay directed me, presumably after my recent piece for Nature on how reading ideograms and alphabetical words use essentially the same parts of the brain. A lot of what Moser says rings bells – my Chinese teachers too occasionally forget how to write (rare) characters, and more often make little slips with the pinyin system (so is that tan or tang?). And I too had tended to dismiss these as just universal lapses of memory, rather overlooking the fact that this was their language, not mine. I’m glad Moser agrees that abolishing characters isn’t a viable solution, not least because the Chinese orthographic system is so beautiful. But what chimes most is that this is a problem that simply doesn’t register with Chinese people.

And that, it strikes me – and to change the subject rather lurchingly – is just how it is too with fans of opera. Reading a nice review by Philip Hensher of a new history of opera by Carolyn Abbate and Roger Parker, the penny dropped that this is how I struggle with opera. It has its moments, but in musical, theatrical and literary terms opera as we have received it has some very deep-seated problems that seem to remain utterly invisible to aficionados. That is why it was a huge relief to see Hensher, who is evidently an avid opera buff, bring these out into the open. Ask many fans what they love in opera, and they are likely to start talking about how it brings together the highest art forms – music, literature and theatre – in one glorious package. It astounds me that they remain oblivious to the profound difficulties that union presents – if not inevitably, then certainly in practice.

For a start: opera traditionally has crap plots and terrible acting. It’s not, I think, ignorant philistinism that prompts me to say this, since the view is shared by Jonathan Miller, who says that 90 percent of operas aren’t worth bothering with. Miller makes no bones about the efforts he has had to make, in directing operas, to suppress the ridiculous gestures that his performers would insist on making. His comments remind me of once watching a trained dancer in an acting class. The chap was asked to walk across the stage in a neutral way. He couldn’t do it. His body insisted on breaking into the most contrived and stylized preening, even though he’d walk down the corridor after the class just like anyone else. His training, like that of opera singers, was doubtless exquisite. It was, however, a training evidently bent on obliterating his ability to move like a normal human being. Now, opera lovers will insist that things have got better over the past several decades – opera singers actually learn something about acting now, not simply a catalogue of absurd symbolic gestures – and this is true. But it’s a slow process, and in some ways you could regard it as a ‘de-operafication’ of opera.

The same with voice. Even Hensher seems to regard opera singing as the highest pinnacle of refinement in the use of the human voice. It’s very, very hard, to be sure, but it is also utterly stylized. This is not how people sing; it is how opera is sung. That seems abundantly obvious, but opera buffs seem to have no notion of it, even though no one sings that way until they have been trained to. There are reasons for it (which I’m coming to) – but operatic singing is a highly contrived, artificial form of human vocal performance, and as such it is as emotionally constrained as it is expressive – the emotions, like the physical gestures, are stylized conventions. That’s not necessarily a bad thing, but it is bizarre that this evident fact is not even noticed by most opera lovers. Hensher puts it, gnomically, like this: “Opera survives in a safe, hermetic, sealed condition of historic detachment, where emotion can be expressed directly because it is incomprehensible, remote and stylised.” I’m still working on that sentence – how can emotion be especially ‘direct’ precisely because it is ‘remote’ and ‘stylised’?

Plot – oh, don’t get me started. It’s too easy a target. Even opera lovers admit that most of the plots suck. Now, it’s often said that this is one of the necessary sacrifices of the art: if all the lines are sung, the action must be pretty simple. If that is so, then we must already concede that there’s some erosion of the theatrical substance. However, it doesn’t have to be that way. People can sing complex dialogue quite audibly in principle. They do it in musicals all the time. If you want to hear complex, incredibly moving emotion sung, you need only listen to Billie Holiday (or Nina Simone) singing "Strange Fruit". The fact is, however, that these things can’t be done if sung operatically, in particular because the operatic voice reduces the audibility of the words. As Hensher asks, “why harness a drama to music of such elaboration that it will always get in the way of understanding?” (though actually it’s not the music but the vocalization). He doesn’t answer that question, but I’m mighty glad he raises it. Composers themselves have acknowledged this problem, even if indirectly: it has been shown, for example, that Wagner seems to have intentionally matched the pitch of vowels in his (self-penned) libretti to the frequencies at which the vocal tract resonates when they are spoken in normal speech, making them somewhat more intelligible. (At the very highest frequencies of a female soprano, all vowels tend to sound like ‘ah’.) In other words, Wagner was wrestling with the limitations on communication that opera had chosen to impose on itself.

Why did it do that? Because of a misconceived idea, in the early development of opera in the sixteenth century, that the cadences of speech can be rendered in music. They can’t: their irregular rhythms, indefinite contours and lack of melody make speech quite unlike musical melody, even if there are other intriguing parallels. Opera has kind of accepted this, which is why, as Hensher points out, the form became one “in which lyric utterances of a single significance alternate with brisk, less melodic passages”. Or to put it another way, we get some fabulous arias in which nothing much is said beyond “I’m heartbroken” or “I love you”, interrupted by unmusical recitative, which audiences have so much learned to put up with that they barely seem to register that at such times they are not having a ‘musical’ experience at all but, rather, an operatic one. The nineteenth-century music critic Eduard Hanslick put it delicately: “in the recitative music degenerates into a mere shadow and relinquishes its individual sphere of action altogether.” To put it another way, as “music” those parts are gibberish.

Again, this is a choice. It is one that has historical reasons, of course – but for many opera fans it seems again simply to have become invisible, which strikes me as at least a little odd. Hensher says it nicely: “If you were going to design an art form from scratch, you'd be able to improve in a good few ways on opera as we have inherited it.” And I accept that we don’t have the option of starting again from scratch – but we do have the luxury of acknowledging the room for improvement.

In her review of the same book in Prospect, Wendy Lesser seems initially to be demonstrating the same refreshing sensibility: “Opera must be one of the weirdest forms of entertainment on the planet. Its exaggerated characters bear little relation to living people, and its plots are often ludicrous.” But it soon becomes clear that Lesser doesn’t really get the point at all. She quotes Abbate and Parker as saying “the whole business is in so many ways fundamentally unrealistic, and can’t be presented as a sensible model for leading one’s life or understanding human behaviour.” Hang on – you are going to the opera for that? It’s not why we go to the theatre either, for goodness’ sake. Lesser soon shows that her talent for ducking the issue is remarkable; she becomes just like the Chinese people who frustrate Moser with their pride at the sheer difficulty of the language – “Yes, it’s the hardest in the world!” But, he longs to say, doesn’t that strike you as a problem? Ludicrous plots, exaggerated characters – hey, aren’t we strange to like this stuff? Well no, it’s just that there seems no particular reason to celebrate those flaws, even if you like opera in spite of them. Lesser presents the “huge acoustic force” of the opera voice as another lovable oddity, but doesn’t seem to recognize that the historical necessity of attaining such volume creates a distance from regular human experience and compromises intelligibility.

I know people who have large collections of opera recordings. Perhaps they use them to compile collections of their favourite arias – why not? But my impression is that they hardly ever put on an opera and listen to it all the way through, as we might a symphony. Now, call me old-fashioned, but I still have this notion that music is something you want to listen to because it works as a coherent whole. Opera is something else: a thing to be experienced in the flesh, as stylized and refined as Noh or Peking opera (but a fair bit more expensive). Opera is indeed an experience, and Hensher encapsulates the intensity and romance of that experience brilliantly. I only ask that, especially since opera dominates the classical music reviews to such a degree, we remember to ask: what sort of experience is it, exactly? Neither primarily musical, nor lyrical, nor theatrical – but operatic.

Maybe I’m just frustrated that in the end I know it is my loss that I sit entranced through the Prelude of Tristan and Isolde and then roll my eyes when the singing starts. I know from enough people whose judgement I trust what delights await in the operatic repertoire (you’re pushing at an open door as far as Peter Grimes is concerned). It’s just the failure of opera lovers to notice the high cost of entry (in all respects) that confounds me.

Friday, December 14, 2012

New articles

I have an article in Nature on thermal management of computers, which is also available online via Scientific American. Before I spoke to folks at IBM about this, I’d have imagined it to be deadly dull. I hope you’ll agree that it isn’t at all – in fact, it strikes me as perhaps the big potential roadblock for computing, though talked about far less than the question of how to keep on miniaturizing.

I also have an article on supercapacitors in MRS Bulletin, which can be seen here. But I have just put a longer version on my website (under Materials) which contains the references chopped out of the published version. This follows on from an article in the MRS Bulletin September Energy Quarterly on the use of supercapacitors in transport in Germany, which can be downloaded here.

In fact I have just put a few new articles up on my web site, hopefully with more to follow. Oh, and as well as writing for the ultra-nerdy MRS Bulletin, I have done a piece on emotion in music for the ‘supermarket magazine’ The Simple Things, for which you can see a sampler here. Nothing like variety. I’ll stick the pdf up on my website soon.

Tuesday, December 11, 2012

Crystallography's fourth woman?

Here is a book review just published in Nature, with some bits reinserted that got lost in the edit.

___________________________________________

I Died For Beauty: Dorothy Wrinch and the Cultures of Science by Marjorie Senechal
Oxford University Press
3 Dec 2012 (UK Jan 2013)
304 pages
$34.95

X-ray crystallography and the study of biomolecular structure were among the first fields of modern science in which women scientists came to the fore. Dorothy Hodgkin, Rosalind Franklin and Kathleen Lonsdale are the best known of the women who made major contributions in the face of casual discrimination and condescension. In I Died for Beauty Marjorie Senechal suggests that there was nearly a fourth: Dorothy Wrinch, a name that few now recognize and that is often derided by those who do.

The late protein chemist Charles Tanford, for instance, poured scorn on Wrinch’s best-known work, the ‘cyclol theory’ of protein structure, proposed in the 1930s. It was, he said, “not really worth more than a footnote, a theory built on nothing, no training, no relevant skills”, which gained visibility only thanks to the “sheer bravura (chutzpah) of the author”. Of Wrinch herself, he proclaimed “she was arrogant and felt persecuted when criticized, but in retrospect her miseries seem self-inflicted.”

In an attempt to rebalance such attacks, Senechal, a former assistant of Wrinch’s at Smith College in Massachusetts and now coeditor of The Mathematical Intelligencer, has written no hagiography, but rather, a sympathetic apologia. Whatever one feels about Wrinch and her research, she is a fascinating subject. Her circle of friends, colleagues and correspondents reads like a who’s who of early twentieth-century science and philosophy. Wrinch, a Cambridge-trained mathematician, was a student of Bertrand Russell, was championed by D’Arcy Thompson and Irving Langmuir, worked alongside Robert Robinson and knew Niels Bohr, G. H. Hardy, Kurt Gödel and John von Neumann. Several of them considered her brilliant, although one wonders how much this reflected her ambition and force of personality rather than her actual achievements. Nonetheless, calling for mathematicians to interest themselves in biology, Thompson wrote in 1931 that “I do not know of anyone so well qualified as Dr Wrinch.” The polymathic mathematician and geophysicist Harold Jeffreys developed some of his ideas on statistical reasoning in collaboration with Wrinch at Cambridge, and wrote in Nature in 1976 of “the substantial contribution she made to this [early] work, which is the basis of all my later work on scientific inference.”

Senechal’s central question is: what went wrong? Why did so apparently promising a figure, a member of the pioneering Theoretical Biology club that included Joseph Needham, J. D. Bernal and Conrad Waddington, end up relegated to obscurity?

The too-easy answer is: Linus Pauling. When Pauling, in a 1939 paper, comprehensively destroyed Wrinch’s cyclol theory – which argued that globular proteins are polyhedral shells, in which amino acids link into a lattice of hexagonal rings – he finished her career too. Senechal clearly feels Pauling was bullying and vindictive, although her attempt at revenge via Pauling’s cavalier dismissal of Dan Shechtman’s quasicrystals doesn’t make him any less right about proteins.

But a more complex reason for Wrinch’s downfall emerges as the story unfolds. Part of her undoing was her magpie mind. Seemingly unable to decide how to use her substantial abilities, Wrinch never really made important contributions to one area before flitting to another — from Bayesian statistics to seismology, topology to mitosis. Warren Weaver, the astute director for natural sciences at the Rockefeller Foundation that funded Wrinch for some years, offered an apt portrait: “W. is a queer fish, with a kaleidoscopic pattern of ideas, ever shifting and somewhat dizzying. She works, to a considerable extent, in the older English way, with heavy dependence on ‘models’ and intuitive ideas.”

Senechal presents a selection of opinions the Foundation collected on her while assessing her funding application, many deeply unflattering: she is a fool, she is mad or ‘preachy’, she dismisses facts that don’t fit and poaches others’ ideas. Frustratingly, we’re left to decide for ourselves how much of this is justified, but even Senechal admits that a little of Wrinch went a long way. Her wearisome habits were noted by science historian George Sarton’s daughter in an account of a London tea in 1937: “Dorothy Wrinch was there in one of her strange, simpering showing off moods, talking about herself constantly.” The evidence for a problematic personality gradually piles up.

She certainly had a talent for making enemies. “Everyone in England in or near the protein field is more than antagonistic to her,” said one of the Rockefeller interviewees. Bernal was incensed when Wrinch tried to argue that the diffraction data obtained by his student Hodgkin supported her cyclol theory – an assertion that was sloppy at best, and perhaps dishonest. In retaliation Wrinch called Bernal “jealous, brutal and treacherous”. (Hodgkin, true to form, was charitably forgiving.)

Underlying all of this is the position of Wrinch as a female scientist. Like many educated women of the 1930s, she felt motherhood as a burden and barrier that only extreme measures could relieve. Her eugenic inclinations and her call, in the pseudonymous The Retreat from Parenthood (1930), for state-run Child Rearing Services that farmed out children to professional carers reinforce the impression that Aldous Huxley was only writing what he heard. Alarming though her behaviourist approach to parenting might now sound (Senechal rather sidesteps Wrinch’s relationship with her daughter Pamela, who died tragically in a house fire aged 48), it is shameful that the professional structures of science have hardly made it any easier for mothers some 80 years on.

Her central problem, it seems, was that, working at a time when most male scientists assumed that women thought differently from them, Wrinch seemed to conform to their stereotype: headstrong, stubborn, strident, reliant on intuition rather than facts. It is clear in retrospect that those complaints could also be made of Wrinch’s arch-enemy Pauling: Senechal rightly observes that “Dorothy and Linus were more alike than either of them ever admitted.” She sees injustice in the way Pauling’s blunders, such as denying quasicrystals, were forgiven while Wrinch’s were not.

Was there a hint of sexism here? In this case I doubt it – Pauling of course, unlike Wrinch, hit more than enough bullseyes to compensate. But Senechal’s imagined scene of braying men and their snickering wives poring over Pauling’s devastating paper has a depressing ring of truth.

Primarily a mathematician herself, Senechal doesn’t always help the reader understand what Wrinch was trying to do. Her interest in “the arrangement of genes on the chromosome” sounds tantalizingly modern, but it’s impossible to figure out what Wrinch understood by that. Neither could one easily infer, from Senechal’s criticisms of Pauling’s attack, that the cyclol theory was way off beam even then. Tanford has pointed out that it predicted protein structures that were “sterically impossible” – the atoms just wouldn’t fit (although cyclol rings have now been found in some natural products). Fundamentally, Wrinch was in love with symmetry – to which the title, from an Emily Dickinson poem, alludes. It was this that drew her to crystallography, and her 1946 book Fourier Transforms and Structure Factors is still esteemed by some crystallographers today. But such Platonic devotion to symmetrical order can become a false refuge from the messiness of life, both in the biochemical and the personal sense.

Senechal’s prose is mannered, but pleasantly so — a welcome alternative to chronological plod. Only occasionally does this grate. Presenting the battle with Pauling in the form of an operatic synopsis is fun, but muddies truth and invention. The account of Wrinch’s first husband John Nicholson’s breakdown in 1930 is coy to the point of opacity.

It’s tremendous that Senechal has excavated this story. She offers a gripping portrait of an era and of a scientist whose flaws and complications acquire a tragic glamour. It’s a cautionary tale for which we must supply the moral ourselves.

Friday, November 30, 2012

Massive organ

I know, this is what Facebook was invented for. But I haven't got my head round that yet, so here it is anyway. It will be a Big Noise. Café Oto is apparently the place to go in London for experimental music, and there's none more experimental than this. Andy Saunders of Towering Inferno has put it together. Who? Look up their stunning album Kaddish: as Wiki has it, "It reflects on The Holocaust and includes East European folk singing [the peerless Márta Sebestyén], Rabbinical chants, klezmer fiddling, sampled voices (including Hitler's), heavy metal guitar and industrial synthesizer. Brian Eno described it as 'the most frightening record I have ever heard'." Come on!

The American invasion

I have a little muse on the Royal Society Winton Science Book Prize on the Prospect blog. Here it is. It was a fun event, and great to see that all the US big shots came over for it. My review of Gleick’s book is here.

_____________________________________________________________

Having reviewed a book favourably tends to leave one with proprietary feelings towards it, which is why I was delighted to see James Gleick’s elegant The Information (Fourth Estate) win the Royal Society Winton Science Book Prize last night. Admittedly, Gleick is not an author who particularly needs this sort of accolade to guarantee good sales, but neither did most of the other contenders, who included Steven Pinker, Brian Greene and Joshua Foer. Pinker’s entry, The Better Angels of Our Nature (Penguin) was widely expected to win, and indeed it is the sort of book that should: bold, provocative and original. But Gleick probably stole the lead for his glorious prose, scarcely done justice by the judging panel’s description as having “verve and fizz”. For that, go to Foer.

Gleick has enjoyed international acclaim ever since his first book in 1987, Chaos, which introduced the world to the ‘butterfly effect’ – now as much of a catchphrase for our unpredictable future as Malcolm Gladwell’s ‘tipping point’. But in between then and now, Gleick’s style has moved away from the genre-fiction potted portraits of scientists (“a tall, angular, and sandy-haired Texas native”, “a dapper, black-haired Californian transplanted from Argentina”), which soon became a cliché in the hands of lesser writers, and has matured into something approaching the magisterial.

And might that, perhaps, explain why five of the six finalists for this year’s prize were American? (The sixth, Lone Frank, is a Danish science writer, but sounds as though she learnt her flawless English on the other side of the pond.) There have been American winners before, Greene among them, but most (including the past four) have been British. Maybe one should not read too much into this American conquest – it just so happened that three of the biggest US hitters, as well as one new Wunderkind, had books out last year. But might the American style be better geared to the literary prize?

There surely is an American style: US non-fiction (not just in science writing) differs from British, just as British does from continental European. (Non-British Europeans have been rare indeed in the science book shortlists.) They do grandeur well, in comparison to which even our popular-science grandees, such as Richard Dawkins, Steve Jones and Lewis Wolpert, seem like quiet, diligent academics. The grand style can easily tip into bombast, but when it works it is hard to resist. Just reading the list of winners of the Pulitzer Prize for Non-Fiction makes one feel exhausted – no room here for the occasional quirkiness of the Samuel Johnson.

This year’s science book prize shortlist was irreproachable – indeed, one of the strongest for years. But it will be interesting to see whether, in this straitened time for writers, only the big and bold will survive.

Tuesday, November 27, 2012

The universal reader



This is the pre-edited version of my latest, necessarily much-curtailed news story for Nature.

_____________________________________________________________

New study suggests the brain circuits involved in reading are the same the world over

For Westerners used to an alphabetic writing system, learning to read Chinese characters can feel as though it is calling on wholly new mental resources. But it isn’t, according to a new study that uses functional magnetic-resonance imaging (fMRI) to examine people’s brain activity while they read. The results suggest that the neural apparatus involved in reading might be common to all cultures, despite their very different writing systems, and that culture simply fine-tunes this.

Stanislas Dehaene of the National Institute of Health and Medical Research in Gif-sur-Yvette, France, and his coworkers say that reading involves two neural subsystems: one that recognizes the shape of the words on the page, and the other that decodes the physical motor gestures used to make the marks.

In their tests of French and Chinese subjects, they found that both groups use both systems while reading their native language, but with different emphases that reflect the different systems of writing. They describe their findings today in the Proceedings of the National Academy of Sciences USA [1].

“Rather than focusing on ear and eye in reading, the authors rightly point out that hand and eye are critical players”, says Uta Frith, a cognitive neuroscientist at University College London. “This could lead into novel directions – for instance, it might provide answers why many dyslexics also have very poor handwriting and not just poor spelling.”

Understanding how the brain decodes symbols during reading might not only offer clues into the origin of learning impairments such as dyslexia, but also inform learning strategies for general literacy and how these might be attuned to children or adults.

It has been unclear whether the brain networks responsible for reading are universal or culturally distinct. Some previous studies have suggested that alphabetic (such as French) and logographic (such as Chinese, where single characters represent entire words) writing systems might engage different networks.

There is evidence that all cultures use a shape-recognition region in the brain’s posterior left hemisphere, including in particular a so-called visual word form area (VWFA). But some research has implied that Chinese readers also use other brain networks that are unimportant for Western readers – perhaps because the Chinese logographic system places great emphasis on the order and direction of the strokes that make up a character, thereby engaging a ‘motor memory’ for writing gestures.

Dehaene and colleagues suspected that such motor aspects of reading are universal. Some educators have long advocated this: the Montessori method, for example, uses sandpaper letters that children can trace with their fingers to reinforce the gestural aspects of letter recognition. Motor processing is evidently universal for writing, involving a brain region known as Exner’s area, and the researchers postulated that this is activated in reading too, to interpret the gestures assumed to have gone into making the marks.

To examine what the brain is up to during reading, Dehaene and colleagues used fMRI to monitor brain activity in French and Chinese subjects reading words and characters in their own language in cursive script. They asked the subjects to recognize the words and recorded their response times.

However, unbeknown to the subjects, their responses were being manipulated in subtle ways by a process called ‘priming’. Before the word itself was presented on a screen, the subjects saw other words or symbols flashed up for just 50 milliseconds – too short a time, in general, for them to be registered consciously.

These subliminal images prepared the brain for the target word. If one of them was identical to the target word itself, subjects recognized the true target more quickly. The ‘masked’ images could also show ‘nonsense’ words written with the strokes progressing in the usual (forward) direction, or as the reverse (backward) of the usual gestural direction. Moreover, the targets could be shown either as static images or dynamically unfolding as though being written – both forwards and backwards. Finally, the target could also be distorted, for example with the letters unnaturally bunched up or the strokes slightly displaced.

The researchers used these manipulations both to match the amount of stimulus given to the subjects for the very different scripts of French and Chinese, and to try to isolate the different brain functions involved in reading. For example, spatial distortion of characters disrupts the VWFA involved in shape recognition, while dynamically unfolding words stimulate Exner’s area (the motor network) – though this network gets thrown if the words appear to be written with backwards gestures. In each case, such disruptions slow the response time.

Dehaene and colleagues found that the same neural networks – the VWFA and Exner’s area – were indeed activated in both French and Chinese subjects, and could be isolated using the different priming schemes. But there were cultural differences too: for example, static distortion of the target slowed down recognition for the French subjects more than the Chinese, while the effects of gestural direction were stronger for the Chinese.

The researchers suspect that the gestural system probably plays a stronger role while the VWFA has not fully matured – that is, in young children, supporting the idea that reinforcement via the motor system can assist reading. “So far the motor decoding side has been rather neglected in reading education,” says Frith.

“It is conceivable that you find individuals where one system is functioning much better than the other”, she adds. “This may be a source of reading problems not yet explored. In the past I have studied people who can read very well but who can't spell. Perhaps the spelling aspect is more dependent on kinetic memories?”

However, psycholinguist Li-Hai Tan at the University of Hong Kong questions how far these results can be generalized to non-cursive printed text. “Previous studies using printed non-cursive alphabetic words in general have not reported activity in the gesture recognition system of the brain”, he says. “However, this gesture system has been found in fMRI studies with non-cursive Chinese characters. The motor system plays an important role in Chinese children's memory of characters, whether cursive or not.”

The universality of the ‘reading network’, say Dehaene and colleagues, also supports suggestions that culturally specific activities do not engage new parts of the brain but merely fine-tune pre-existing circuits. “Reading thus gets a free ride on ancient brain systems, and some reading systems are more user-friendly for the brain”, says Frith.

Reference

1. Nakamura, K. et al., Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1217749109 (2012).

Monday, November 26, 2012

Faking Moby's fragrance

Here’s my latest piece for the BBC’s Future site. God, it is nice to have the luxury of indulging in some nice context without having to get to the news in the first breath. Indeed, it’s part of the thesis of this column that context can be key to the interest of a piece of work.

___________________________________________________________________

Smelling, as the New York Times put it in 1895, “like the blending of new-mown hay, the damp woodsy fragrance of a fern-copse, and the faintest possible perfume of the violet”, the aromatic allure of ambergris is not hard to understand. In the Middle East it is an aphrodisiac, in China a culinary delicacy. King Charles II is said to have delighted in dining on it mixed with eggs. Around the world it has been a rare and precious substance, a medicine and, most of all, a component of musky perfumes.

You’d never think it started as whale faeces, and smelling like it too. As Herman Melville said in that compendium of all things cetacean, Moby Dick, it is ironic that “fine ladies and gentlemen should regale themselves with an essence found in the inglorious bowels of a sick whale”.

But vats of genetically modified bacteria could one day be producing the expensive chemical craved by the perfume industry for woody, ambergris-like scents, if research reported by biochemists at the Swiss fragrance and flavourings company Firmenich in Geneva comes to fruition. Their results are another demonstration that rare and valuable complex chemicals, including drugs and fuels, can be produced by sophisticated genetic engineering methods that convert bacteria into microscopic manufacturing plants.

Made from the indigestible parts of squid eaten by sperm whales, and usually released only when the poor whale dies from a blocked and ruptured intestine and has been picked apart by the sea’s scavengers, ambergris matures as it floats in the brine from a tarry black dung to a dense, pungent grey substance with the texture of soft, waxy stone.

Because ambergris needs this period of maturation in the open air, it couldn’t be harvested from live sperm whales even in the days when hunting was sanctioned. It could be found occasionally in whale carcasses – in Moby Dick the Pequod’s crew trick a French whaler into abandoning a whale corpse so that they can capture its ambergris. But most finds are fortuitous, and large pieces of ambergris washed ashore can be worth many thousands of dollars.

The perfume industry has long accepted that it can’t rely on such a scarce, sporadic resource, and so it has found alternatives to ambergris that smell similar. One of the most successful is a chemical compound called Ambrox, devised by Firmenich’s fragrance chemists in the 1950s and featured, I am told, in Dolce & Gabbana’s perfume Light Blue. One perfume website describes it, with characteristically baffling hyperbole, as follows: “You're hit with something that smells warm, oddly mineral and sweetly inviting, yet it doesn't exactly smell like a perfumery or even culinary material. It's perfectly abstract, approximating a person's aura rather than a specific component”.

To make Ambrox, chemists start with a compound called sclareol, named after the southern European herb Salvia sclarea (Clary sage) from which it is extracted. In other words, to mimic a sperm whale’s musky ambergris, you start with an extract of sage. This is par for the course in the baffling world of human olfaction. Although in this case Ambrox has a very similar structure to the main smelly molecules in ambergris, that doesn’t always have to be so: two odorant molecules can smell almost identical while having very different molecular structures (they are all generally based on frameworks of carbon atoms linked into rings and chains). That’s true, for example, of two other ambergris-like odorants called timberol and cedramber. Equally, two molecules that are almost identical, even mirror images of one another, can have very different odours. Quite how such molecules elicit a smell when they bind to the proteins in the olfactory membrane of the nasal cavity is still not understood.

Clary sage is easier to get hold of than ambergris, but even so the herb contains only tiny amounts of sclareol, and it is laborious to extract and purify. That’s why Firmenich’s Michel Schalk and his colleagues wanted to see if they could take the sclareol-producing genes from the herb and put them in the gut bacterium Escherichia coli, the ubiquitous single-celled workhorse of the biotechnology industry whose fermentation for industrial purposes is a well-developed art.

Sclareol belongs to a class of organic compounds called terpenes, many of which are strong-smelling and are key components of the essential-oil extracts of plants. Sclareol contains two rings of six carbon atoms each, formed when enzymes called diterpene synthases stitch together parts of a long chain of carbon atoms. The Firmenich researchers show that the formation of sclareol is catalysed in two successive steps by two different enzymes.

Schalk and colleagues extracted and identified the genes that encode these enzymes, and transplanted them into E. coli. That alone, however, doesn’t necessarily make the bacteria capable of producing lots of sclareol. For one thing, the bacteria must also be able to make the long-chain starting compound, which can be arranged by adding yet another gene, this time from a different species of bacteria that happens to produce the stuff naturally.

More challengingly, all of the enzymes have to work in sync, which means giving them genetic switches to regulate their activity. This approach – making sure that the components of a genetic circuit work together like the parts of a machine to produce the desired chemical product – is known as metabolic engineering. This is one level up from genetic engineering, tailoring microorganisms to carry out much more demanding tasks than those possible by simply adding a single gene. It has already been used for bacterial production of other important natural compounds, such as the anti-malarial drug artemisinin.

With this approach, the Firmenich team was able to create an E. coli strain that could turn cheap, abundant glycerol into significant quantities (about 1.5 grams per litre) of sclareol. So far this has just been done at a small scale in the lab. If it can be scaled up, you might get to smell expensively musky without the expense. Or at least, you would if price did not, in the perfume business, stand for an awful lot more than mere production costs.

Reference: M. Schalk et al., Journal of the American Chemical Society doi:10.1021/ja307404u (2012).

Saturday, November 17, 2012

Pseudohistory of science

I have just seen that my article for Aeon, the new online “magazine of ideas and culture”, has been live for some time. This magazine seems a very interesting venture; I hope it thrives. My article changed rather little in editing and is freely available, so I’ll just give the link. All that was lost was some examples at the beginning of scientists being rude about other disciplines: Richard Dawkins suggesting that theology is not an academic discipline at all, and Stephen Hawking saying that philosophy is dead (never have I seen such profundity being attributed to a boy poking his tongue out).

Tuesday, November 13, 2012

Why dissonance strikes a wrong chord in the brain

Here’s the pre-edited version of my latest news story for Nature. There is a lot more one might say about this, in terms of what it does or doesn’t say about our preferences for consonance/dissonance. At face value, the work could be interpreted as implying that there is something ‘natural’ about a preference for consonance. But the authors say that the issue of innateness simply isn’t addressed here, and they suspect learning plays a big role. After all, it seems that children don’t express preferences for consonant chords until the ages of 8-9 (earlier if they have musical training). The experiments which report such preferences in babies remain controversial.

Besides, one would need to test such things in non-Western contexts. McDermott agrees with Trehub’s comments below, saying “It is true that intervals that are consonant to Western ears are prevalent in some other cultures, but there are also instances where conventionally dissonant intervals are common (e.g. in some Eastern European folk music; moreover, major seconds are fairly common in harmony all over the world). So I think the jury is out as of now. There really is a need for more cross-cultural work.”

And the other big question is how much these preferences are modified when the intervals are encountered in a real musical context. McDermott says this: “We measured responses to chords in isolation, but that is obviously not the whole story. Context can clearly shape the way a chord is evaluated, and whether that can be linked to acoustic phenomena remains to be seen. That is a really interesting issue to look at in the future.” Trehub says that context “makes a HUGE difference. The so-called dissonant intervals don't sound dissonant in musical contexts. They generate a sense of motion or tension, creating expectations that something else will follow, and it invariably does. Musical pieces that are considered consonant have their share of dissonant intervals, which create interest, excitement, expectations, and more.”

_____________________________________________________________________

A common aversion to clashing harmonies may not stem from their grating ‘roughness’

Many people dislike the clashing dissonances of modernist composers such as Arnold Schoenberg. But what’s our problem with dissonance? It’s long been thought that dissonant musical chords contain acoustic frequencies that interfere with one another to set our nerves on edge. A new study proposes that in fact we prefer consonant chords for a different reason, connected to the mathematical relationship between the many different frequencies that make up the sound.

Cognitive neuroscientists Josh McDermott of New York University and Marion Cousineau and Isabelle Peretz of the University of Montreal have evaluated explanations of our preferences for consonance over dissonance by comparing the responses of a normal-hearing control group to those of people who suffer from amusia, an inability to distinguish between different musical tones.

In a paper in the Proceedings of the National Academy of Sciences USA [1] they report that, while both groups had an aversion to the ‘roughness’ – a kind of grating sound – that is created by interference of two acoustic tones differing only slightly in frequency, the amusic subjects had no consistent preferences for any interval (two notes played together a certain distance apart on the keyboard) over any other.

Consonant chords are, roughly speaking, made up of notes that ‘sound good’ together, for example like middle C and the G above it (an interval called a fifth). Dissonant chords are combinations that sound jarring, like middle C and the C sharp above (a minor second). The reason why we should like one but not the other has long vexed both musicians and cognitive scientists.

Consonance and dissonance in music have always excited passions, in more ways than one. For one thing, composers use dissonant chords to introduce tension, which may then be relieved by consonant chords – a device that gives music much of its emotional charge.

It has often been suggested that humans (and perhaps some other animals) have innate preferences for consonance over dissonance, so that music in which dissonance features prominently is violating a natural law and bound to sound bad. Others, including Schoenberg himself, have argued that dissonance is merely a matter of convention, and that we can learn to love it.

The question of whether an aversion to dissonance is innate or learnt has been extensively studied, but remains unanswered. Some have claimed that very young infants prefer consonance, but even then learning can’t be ruled out given that babies can hear in the womb.

However, there has long been thought to be a physiological reason why at least some kinds of dissonance sound jarring. Two tones close in frequency interfere to produce a phenomenon called beating: what we hear is just a single tone rising and falling in loudness. The greater the frequency difference, the faster the beating, and within a certain difference range it becomes a kind of rattle, called acoustic roughness, which sounds unpleasant.
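The arithmetic behind beating can be checked directly. As a sketch (the 440 Hz and 444 Hz tones are arbitrary illustrative values, not taken from the study), the sum of two equal-amplitude sines is a single tone at the mean frequency whose loudness envelope rises and falls at the difference frequency:

```python
import math

f1, f2 = 440.0, 444.0   # two tones 4 Hz apart (illustrative values)
beat = abs(f1 - f2)     # perceived beat rate: 4 per second

def two_tone(t):
    return math.sin(2 * math.pi * f1 * t) + math.sin(2 * math.pi * f2 * t)

def envelope_carrier(t):
    # sin(a) + sin(b) = 2 cos((a-b)/2) sin((a+b)/2):
    # a 442 Hz carrier modulated by a slow cosine envelope
    return 2 * math.cos(math.pi * (f1 - f2) * t) * \
           math.sin(math.pi * (f1 + f2) * t)

# The identity holds at every instant
for t in (0.013, 0.11, 0.47):
    assert abs(two_tone(t) - envelope_carrier(t)) < 1e-9

# The envelope |2 cos(pi*(f1-f2)*t)| fades to silence once per beat:
# count its zero crossings over one second
rate = 8000
zero_crossings = 0
prev = math.cos(0.0)
for i in range(1, rate):
    cur = math.cos(math.pi * (f1 - f2) * i / rate)
    if prev * cur < 0:
        zero_crossings += 1
    prev = cur
# zero_crossings == 4, matching the 4 Hz beat
```

The faster the beating, the closer those fades come together, until within a certain range they fuse into the rattle of acoustic roughness.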

Evaluating the role of roughness in musical dissonance is complicated by the fact that real tones made by instruments or voices contain many overtones – frequencies that are whole-number multiples of the basic frequency – so that there are many frequency relationships to take into account. All the same, an aversion to beating has seemed consistent with the common dislike of intervals such as minor seconds.

Yet when McDermott and colleagues asked amusic subjects to rate the pleasantness of a whole series of intervals, their responses varied enormously both from person to person and from test to test, such that on average they showed no distinctions between any of the intervals. In contrast, normal-hearing control subjects rated small intervals (minor seconds and major seconds, such as C-D) and large but sub-octave intervals (minor sevenths C-B flat and major sevenths C-B) much lower than the others.

That wasn’t so unexpected – although the near-equal preferences of the control group for mid-span intervals seems odd to Sandra Trehub, an auditory psychologist at the University of Toronto at Mississauga. “The findings from controls don't replicate the usual pattern of preferences”, she says – where, for example, there tends to be a strong preference for octaves and fifths, and an aversion to the tritone (6 semitones, such as C-F sharp). “Hearing impairment, resulting from the need to have age-matched controls, could have influenced the control ratings somewhat”, McDermott admits.

Then the researchers tested how both groups felt about roughness. They found that the amusics could hear this and disliked it about as much as the control groups. So apparently something else was causing the latter to dislike the dissonant intervals.

These preferences seem instead to stem from the so-called harmonicity of consonant intervals. The relationship between overtone frequencies in these intervals is similar to that between the overtones in a single note: they are whole-number multiples. In contrast, the overtones for dissonant intervals don’t have that relationship, but look more like the overtones for sounds that are ‘inharmonic’, such as the notes made by striking metal.
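A toy calculation makes the contrast concrete. Here I use just-intonation ratios – 3:2 for the fifth and 16:15 for the minor second – as illustrative stand-ins, not the study’s actual stimuli, and exact fractions to track where the partials of the two notes land relative to a common fundamental:

```python
from fractions import Fraction
from math import gcd

def partials(ratio, n=8):
    """First n overtone frequencies of two simultaneous notes, expressed
    as exact fractions of the lower note's fundamental frequency."""
    low = [Fraction(k, 1) for k in range(1, n + 1)]
    high = [ratio * k for k in range(1, n + 1)]
    return low + high

def implied_fundamental_divisor(ratio, n=8):
    """Smallest d such that every partial of the pair is a whole-number
    multiple of 1/d of the lower note's fundamental. A small d means the
    combined spectrum resembles a single harmonic tone; a large d means
    the partials look inharmonic."""
    d = 1
    for p in partials(ratio, n):
        d = d * p.denominator // gcd(d, p.denominator)
    return d

fifth = implied_fundamental_divisor(Fraction(3, 2))           # -> 2
minor_second = implied_fundamental_divisor(Fraction(16, 15))  # -> 15
```

For the fifth, every partial of both notes sits on the harmonic series of a tone just one octave below the lower note, so the pair sounds like a single harmonic sound; for the minor second the implied common fundamental lies fifteen times lower, and the partials fall between its harmonics – closer to the ‘inharmonic’ spectrum of struck metal.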

The control group preferred consonant intervals with these harmonic relationships over artificial ones in which the overtones were subtly shifted to be inharmonic even while the basic tones remained the same. The amusics, meanwhile, registered no difference between the two cases: they seem insensitive to harmonicity.

McDermott and his coworkers have reported previously that harmonicity seems more important than roughness for dissonance aversion in normal hearers [2]. They argue that the lack of sensitivity to both harmonicity and dissonance in amusics now adds to that case.

But Trehub is not so sure. “Most amusics don't like, or are indifferent to, music”, she says, “so it strikes me as odd to examine this population as a way of understanding the basis of consonance and dissonance.”

Peretz, however, points out that amusia doesn’t necessarily rule out musical appreciation. “A few amusics listen a lot to music”, she says.

Diana Deutsch, a music psychologist at the University of California at San Diego, says that the work is “of potential interest for the study of amusia”, but questions whether it adds much to our understanding of normal hearing. In particular she wonders if many of the findings will survive in the context of everyday music listening, where people seem to display contrary preferences. “Rock bands often deliberately introduce roughness and dissonance into their sounds, much to the delight of their audiences”, she says. “And many composers of contemporary Western art music would disagree strongly with the statement that consonant intervals and harmonic tone complexes are more pleasing in general than are dissonant intervals and inharmonic tones.”

Trehub agrees, saying that there are plenty of musical traditions in which both roughness and dissonance are appreciated. “Indonesian gamelan instruments are designed to generate roughness when played together, and that quality is considered appealing. Some folk-singing in rural Croatia and Bosnia-Herzegovina involves two people singing the same melodic line one semitone apart. Then there's jazz, with lots of dissonance. It's hard to imagine a folk tradition based on something that’s inherently negative,” she says.

But McDermott says the results do not necessarily imply that there is anything innate about a preference for harmonicity, and indeed he suspects that learning plays a role. “The amusic subjects likely had less exposure to music than did the control subjects, and this could in principle contribute to some of their deficits”, he says. “So other approaches will be needed to address the innateness issue.”

References
1. Cousineau, M., McDermott, J. H. & Peretz, I. Proc. Natl Acad. Sci. USA doi:10.1073/pnas.1207989109 (2012).
2. McDermott, J. H., Lehr, A. J. & Oxenham, A. J. Curr. Biol. 20, 1035-1041 (2010).

Wednesday, November 07, 2012

Hunting number 113

Here’s the pre-edited form of an article on element 113 that appeared in the October issue of Prospect.

_________________________________________________________________

The periodic table of the elements just got a new member. At least, maybe it did – it’s hard to tell. Having run out of new elements to discover, scientists have over the past several decades been making ‘synthetic’ atoms too bloated to exist in nature. But this is increasingly difficult as the atoms get bigger, and the new element recently claimed by a Japanese group – currently known simply as element 113, its serial order in the periodic table – is frustratingly elusive. These artificial elements are made and detected literally an atom at a time, and the researchers claim only to have made three atoms in total of element 113, all of which undergo radioactive decay almost instantly.

That, and competition from teams in the United States and Russia, makes the claim controversial. The first group to sight a new element enjoys the privilege of naming it, an added spur to the desire to be first. Just as in the golden years of natural-element discovery in the nineteenth century, element-naming tends to be nationalistic and chauvinistic. No one could begrudge Marie and Pierre Curie their polonium, the element they discovered in 1898 after painstakingly sifting tonnes of uranium ore, which they named after Marie’s homeland. But the recent naming of element 114 ‘flerovium’ – after the founder of the Russian institute where it was made – and element 116 ‘livermorium’, after the Lawrence Livermore National Laboratory where it originated, displays rather more concern for bragging than for euphony.

Perhaps this is inevitable, given that making new elements began in an atmosphere of torrid, even lethal, international confrontation. The first element heavier than uranium (element number 92, which is where the ‘natural’ periodic table stops) was identified in 1940 at the University of California at Berkeley. This was element 93, christened neptunium by analogy with uranium’s naming after the planet Uranus in 1789. It quickly decays into the next post-uranium element, number 94, the discovery of which was kept secret during wartime. By the time it was announced in 1946, enough had been made to obliterate a city: this was plutonium, the explosive of the Nagasaki atom bomb. The ensuing Cold War race to make new elements was thus much more than a matter of scientific priority.

To make new elements, extra nuclear particles – protons and neutrons – have to be crammed into an already replete nucleus. The sequential numbering of the elements, starting from hydrogen (element 1), is more than just a ranking: this so-called atomic number indicates how many protons there are in the nuclei of the element’s atoms. Differences in proton count are precisely what distinguish one element from another. All elements bar hydrogen also contain neutrons in their nuclei, which bind the protons together. There’s no unique number of neutrons for a given element: different neutron totals correspond to different isotopes of the element, which are all but chemically indistinguishable. If a nucleus has too few or too many neutrons, it is prone to radioactive decay, as is the case for example for carbon-14 (six protons, eight neutrons), which provides the basis for radiocarbon dating.

By element 92 (uranium), the nuclei are so swollen with particles that no isotopes can forestall decay. All the same, that process can be very slow: the most common isotope of uranium, uranium-238, has a half-life of about 4.5 billion years, so there’s plenty of it still lying around as uranium ore. Making nuclei more massive than uranium’s involves firing elementary particles at heavy atoms in the hope that some will temporarily stick. That was how Emilio Segrè and Edwin McMillan first made neptunium at Berkeley in 1939, by firing neutrons into uranium. (In the nucleus a neutron can split into a proton, raising the atomic number by 1, and an electron, which is spat out.) McMillan didn’t realise what he’d done until the following year, when chemist Philip Abelson helped him to separate the new element from the debris.

During the Second World War, both the Allies and the German physicists realised that an atomic bomb could be made from artificial elements 93 or 94, created by neutron bombardment of uranium inside a nuclear reactor. Only the Americans managed it, of course. The Soviet efforts in this direction began at the same time, thanks largely to the work of Georgy Flerov. In 1957 he was appointed head of the Laboratory of Nuclear Reactions, a part of the Joint Institute for Nuclear Research in Dubna, north of Moscow. Dubna has been at the forefront of element-making ever since; in 1967 the lab claimed to have made element 105, now called dubnium.

That claim exemplifies the ill-tempered history of artificial elements. It was disputed by the rival team at Berkeley, who made 105 in the same year and argued furiously over naming rights. The Soviets wanted, magnanimously but awkwardly, to call it nielsbohrium, after Danish physicist Niels Bohr. The Americans preferred hahnium, after the German nuclear chemist Otto Hahn. Both dug in their heels until the International Union of Pure and Applied Chemistry (IUPAC), the authority on chemical nomenclature, stepped in to resolve the mess in the 1990s. Finally the Russian priority was acknowledged in the name, which after all was a small riposte to the earlier American triumphalism of americium (element 95), berkelium (element 97) and californium (98).

These ‘superheavy’ elements, with atomic numbers reaching into triple figures, are generally made now not by piecemeal addition to uranium but by trying to merge together two smaller but substantial nuclei. One – typically zinc or nickel – is converted into electrically charged ions by having electrons removed, and then accelerated in an electric field to immense energy before crashing into a target made of an element like lead. This is the method used by the laboratory in Darmstadt that, since the 1980s, has outpaced both the Americans and the Russians in synthesizing new elements. Called the Institute for Heavy Ion Research (GSI), it has claimed priority for all the elements from 107 to 112, and their names reflect this: element 108 is hassium, after the state of Hesse, and element 110 is darmstadtium. But this crowing is a little less strident now: many of the recent elements have instead been named after scientists who pioneered elemental and nuclear studies: bohrium, mendelevium (after the periodic table’s discoverer Dmitri Mendeleyev), meitnerium (after Lise Meitner), rutherfordium (Ernest Rutherford). In 2010 IUPAC approved the GSI team’s proposal for element 112, copernicium, even though Copernicus is not known ever to have set foot in an (al)chemical lab.

If, then, we already have elements 114 and 116, why the fuss over 113? Although the elements get harder to make as they get bigger, the progression isn’t necessarily smooth: some combinations of protons and neutrons are (a little) easier to assemble than others. Efforts to make 113 have been underway at least since 2003, when a group at the Nishina Center for Accelerator-based Science in Saitama, near Tokyo, began firing zinc ions at bismuth. The Japanese centre, run by the governmental research organization RIKEN, was a relative newcomer to element-making, but it claimed success just a year later. It’s precisely because they are unstable that these new elements can be detected with such sensitivity: the radioactive decay of a single atom sends out particles – generally an alpha particle – that can be spotted by detectors. Each atom initiates a whole chain of decays into successive elements, and the energies and the release times of the radioactive particles are characteristic ‘fingerprints’ that allow the decay chain – and the elements within it – to be identified.

At least, that’s the theory. In practice the decay events must be spotted amidst a welter of nuclear break-ups from other radioactive elements made by the ion collisions. And with so many possible isotopes of these superheavy elements, the decay properties of which are often poorly known, there’s lots of scope for phantoms and false trails – not to mention outright fraud (Bulgarian nuclear scientist Victor Ninov, who worked at Berkeley and GSI, was found guilty of fabricating evidence for the claimed discovery of element 118 at Berkeley in 2001). When you consider the figures, some scepticism is understandable: the Japanese team estimated that only 3-6 out of every 100 quintillion (10^20) zinc ions would produce an atom of 113.

Last year, IUPAC representatives decided the Japanese results weren’t conclusive. But neither were they persuaded by subsequent claims of scientists at Dubna and Berkeley, who have begun collaborating after decades of bitter rivalry. However, on 26 September the RIKEN team released new data that make a stronger case. The team leader Kosuke Morita attests that he is “really confident” they have element 113 pinned. Again they’ve only a single decay chain to adduce – starting from a single atom of 113 – but some experts now find the case convincing. If so, it looks like the name game will get solipsistic again: rikenium and japonium are in the offing.

Given how hard it is to make this stuff, why bother? Plutonium isn’t the only artificial element to find a use: for example, minute amounts of americium are used in some smoke detectors. Yet as the superheavies get ever heavier and less stable, typically decaying in a fraction of a second, it’s harder to imagine how they could be of technological value. But according to calculations, some isotopes of element 114 and nearby elements should be especially stable, with half-lives of perhaps several days, years, even millennia. If that’s true, these superheavies could be gradually accumulated atom by atom. But other estimates say this ‘island of stability’ won’t appear until element 126, and some suspect it may not really exist at all.

There is another, more fundamental motivation for making new elements. They test to destruction the current theories of nuclear physics: it’s still not fully understood what the properties of these massive nuclei are, although they are expected to do weird things, such as take on very deformed, non-spherical shapes.

Artificial elements also pose a challenge to the periodic table itself, chemistry’s organizing scheme. It’s periodic because, as Mendeleyev and others realised, similar chemical properties keep reappearing as the elements’ atomic numbers increase: the halogens chlorine (element 17), bromine (35) and iodine (53) all form the same kinds of chemical compounds, for example. That’s because atoms’ electrons – negatively charged particles that govern chemical behaviour – are arranged in successive shells, and the arrangements for elements in the same column of the periodic table are analogous: all the halogens are one electron short of a filled outermost shell.

But a very massive nucleus starts to undermine this tidy progression of electron shells. The electrons closest to the nucleus feel the very strong electric field of that mass of protons, which makes them very energetic – they circulate around the nucleus at speeds approaching the speed of light. Then they feel the effects of special relativity: as Einstein predicted, particles moving that fast gain mass. This alters the electrons’ energies, with knock-on effects in the outer shells, so that the outermost electrons that determine the atom’s chemical behaviour no longer follow the periodic sequence. The periodic table then loses its rhythm: such an element deviates from the properties of those with which it shares a column – it might form a different number of chemical bonds, say. Some anomalous properties of natural heavy elements are caused by these “relativistic” effects. They alter the electron energies in gold so that it absorbs blue light, accounting for the yellow tint of the light it reflects. And they weaken the chemical bonds between mercury atoms, giving the metal its low melting point.
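The size of these effects can be estimated on the back of an envelope. In the simple Bohr model, the innermost (1s) electron of an atom with atomic number Z moves at a fraction v/c of roughly Z/137 of the speed of light. The sketch below (an illustration of that textbook estimate, not a calculation from the research described here) computes the resulting relativistic mass increase for a few heavy elements:

```python
# Back-of-the-envelope Bohr-model estimate (an illustration, not a
# calculation from the research described here): the innermost (1s)
# electron of an atom with atomic number Z moves at roughly v/c = Z * alpha,
# where alpha ~ 1/137 is the fine-structure constant; special relativity
# then boosts its mass by the Lorentz factor gamma.
import math

ALPHA = 1 / 137.036  # fine-structure constant

def electron_speed_fraction(z):
    """Approximate 1s-electron speed as a fraction of the speed of light."""
    return z * ALPHA

def lorentz_factor(z):
    """Relativistic mass-increase factor for the 1s electron."""
    beta = electron_speed_fraction(z)
    return 1 / math.sqrt(1 - beta ** 2)

for name, z in [("gold", 79), ("mercury", 80), ("element 118", 118)]:
    print(f"{name} (Z={z}): v/c ~ {electron_speed_fraction(z):.2f}, "
          f"mass increase ~ {lorentz_factor(z):.2f}x")
```

For gold the 1s electron is moving at nearly 60 percent of light speed, and the mass increase is over 20 percent – more than enough to shuffle the electron energies.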

Relativistic deviancy is expected for at least some superheavies. To look for it, researchers have to accomplish extraordinarily adroit chemistry: to figure out from just a handful of atoms, each surviving for perhaps seconds to minutes, how the element reacts with others. This could, for example, mean examining whether a particular chemical compound is unusually volatile or insoluble. The teams at GSI, Dubna and Berkeley have perfected methods of highly sensitive, quick-fire chemical analysis to separate, purify and detect their precious few exotic atoms. That’s enabled them to establish that rutherfordium (element 104) and dubnium (105) buck the trends of the periodic table, whereas seaborgium (106) does not.

As they enter the artificial depths of the periodic table, none of these researchers knows what they will find. The Dubna group claims to have been making element 115 since 2003, but IUPAC has not yet validated the discovery. They are on firmer ground with 117 and 118, which are yet to be named, and both GSI and the RIKEN team are now hunting 119 and 120.

Is there any limit to it? Richard Feynman once made a back-of-the-envelope calculation showing that nuclei can no longer hold onto electrons beyond an atomic number of 137. More detailed studies, however, show that to be untrue, and some nuclear scientists are confident there is no theoretical limit on nuclear size. Perhaps the question is whether we can think up enough names for them all.

Tuesday, November 06, 2012

Who's bored af?

Here’s my latest piece for BBC Future. This version contains rather more rude words than the one eventually published – the BBC is perhaps surprisingly decorous in this respect (or maybe they figure that their science readers aren’t used to seeing the sort of language that arty types throw around all the time).

My editor Simon Frantz pointed out this other example of how Twitter is being used for linguistic/demographic analysis, in this case to map the distribution of languages in London. I love the bit about the unexpected prevalence of the Tagalog language of the Philippines – because it turns out to contain constructions such as “lolololol” and “hahahahaha”. I hope that in Tagalog these convey thoughts profounder than those of teenage tweeters.

_____________________________________________________________________

This piece contains strong language from the beginning, as they say on the BBC. But only in the name of science – for a new study of how slang expressions spread on Twitter professes to offer insights into a more general question in linguistics: how innovation in language use occurs.

You might, like me, have been entirely innocent of what ‘af’ denotes in the Twittersphere, in which case the phrase “I’m bored af” would simply baffle you. It doesn’t, of course, take much thought to realise that it’s simply an abbreviation for “as fuck”. What’s less obvious is why this pithy abbreviation should have jumped – as computer scientist Jacob Eisenstein of the Georgia Institute of Technology in Atlanta and his coworkers Brendan O’Connor, Noah Smith and Eric Xing of Carnegie Mellon University in Pittsburgh report in an as-yet-unpublished preprint – from its origin in southern California to a cluster of cities around Atlanta, before spreading more widely across the east and west US coasts.

Other neologisms have different life stories. Spelling bro – slang for brother (male friend or peer) – as bruh began in cities of the southeastern US (where it reflects the local pronunciation) before finally jumping to southern California. The emoticon “-__-” (denoting mild discontent) began in New York and Florida before colonizing both coasts and gradually reaching Arizona and Texas.

Who cares? Well, the question of how language changes and evolves has occupied linguistic anthropologists for several decades. What determines whether an innovation will propagate throughout a culture, remain just a local variant, or be stillborn? Such questions decide the grain and texture of all our languages – why we might tweet “I’m bored af” rather than “I’m bored, forsooth”.

There are plenty of ideas about how this happens. One suggestion is that innovations spread by simple diffusion from person to person, like a spreading ink blot. Another is that bigger population centres exert a stronger attraction on neologisms, so that they go first to large cities by a kind of gravitational pull. Or maybe culture and demography matter more than geographical proximity: words might spread initially within some minority groups while remaining invisible to the majority.
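To see how the ‘gravitational pull’ idea might work in practice, here’s a toy simulation – entirely hypothetical, and not the model used in the study. The chance that a city picks up a word from a city that already uses it grows with both populations and falls with distance, so big, nearby centres dominate; all the city names and figures are invented:

```python
# A toy 'gravity' model of neologism spread (purely illustrative, not the
# model used in the study). The chance that a city picks up a word from a
# city that already uses it grows with both populations and falls with
# distance. All city names and figures here are invented.
import math
import random

random.seed(1)  # make the simulation repeatable

cities = {
    "Metropolis": {"pop": 8_000_000, "xy": (0, 0),  "adopted": True},
    "Midville":   {"pop": 500_000,   "xy": (10, 5), "adopted": False},
    "Smalltown":  {"pop": 50_000,    "xy": (12, 6), "adopted": False},
}

def adoption_pressure(src, dst):
    """Gravity-style influence: product of populations over squared distance."""
    dx = src["xy"][0] - dst["xy"][0]
    dy = src["xy"][1] - dst["xy"][1]
    return src["pop"] * dst["pop"] / (dx * dx + dy * dy)

def step():
    """One round: each non-adopter may adopt, with odds set by total pressure."""
    for c in cities.values():
        if c["adopted"]:
            continue
        pressure = sum(adoption_pressure(s, c)
                       for s in cities.values() if s is not c and s["adopted"])
        prob = 1 - math.exp(-pressure / 1e10)  # squash pressure to a probability
        if random.random() < prob:
            c["adopted"] = True

for _ in range(20):
    step()
print({name: c["adopted"] for name, c in cities.items()})
```

Even this crude set-up reproduces the qualitative pattern: the big nearby city adopts quickly, the small distant one lags. Distinguishing such a mechanism from diffusion or demographic affinity is exactly what needs real data.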

It’s now possible to devise rather sophisticated computer models of interacting ‘agents’ to examine these processes. They tell us little, however, unless there are real data to compare them against. Whereas once such data were extremely difficult to obtain, now social media provide an embarrassment of riches. Eisenstein and colleagues gathered around 40 million messages from around 400,000 individuals between June 2009 and May 2011, drawn from Twitter’s public feed (which carries about 10 percent of all public posts), that could be tied to a particular geographical location in the USA thanks to the smartphone metadata optionally included with each message.

The researchers then assigned these to the respective “Metropolitan Statistical Areas” (MSAs): urban centres that typically represent a single city. For each MSA, demographic data on ethnicity are available which, with some effort to correct for the fact that Twitter users are not necessarily representative of the area’s overall population, allow a rough estimate of the messagers’ ethnic makeup.

Eisenstein and colleagues want to work out how these urban centres influence each other – to tease out the network across which linguistic innovation spreads. This is a challenging statistical problem, since they must distinguish genuine influence from coincidences in word use in different locations that could arise just by chance. There is, it must be said, a slightly surreal aspect to the application of complex statistical methods to the use of the shorthand ctfu (“cracking the fuck up”) – but after all, expletive and profanity have always offered one of the richest and most inventive sources of language evolution.

The result is a map of the USA showing the influence networks of many of the major urban centres: not just how they are linked, but what the direction of that influence is. What, then, are the characteristics that make an MSA likely to spawn successful neologisms? Eisenstein and colleagues have previously found that Twitter has a higher rate of adoption among African Americans than among other ethnic groups, so it perhaps isn’t surprising that innovation centres, as well as being highly populated, have a higher proportion of African Americans, and that demographic similarity makes two urban centres more likely to be linked in the influence network. There is a long history of adoption of African American slang (cool, dig, rip off) in mainstream US culture, so these findings too accord with what we’d expect.

These are still early days, and the researchers – who hope to present their preliminary findings at a workshop on Social Network and Social Media Analysis in December organized by the Neural Information Processing Systems Foundation – anticipate that they will eventually be able to identify more nuances of influence in the data. The real point at this stage is the method. Twitter and other social media offer records of language mutating in real time and space: an immense and novel resource that, while no doubt subject to its own unique quirks, can offer linguists the opportunity to explore how our words and phrases arise from acts of tacit cultural negotiation.

Paper: J. Eisenstein et al., preprint at http://www.arxiv.org/abs/1210.5268.

Tuesday, October 30, 2012

Atheists and monsters

I have a letter in New Humanist responding to Francis Spufford’s recent defence of his Christian belief, a brief resume of the case he lays out in his new book. The letter was truncated to the second paragraph, my first and main point having been made in the preceding letter from Leo Pilkington. Here it is anyway.

And while I’m here: I have some small contributions in a nice documentary on Channel 4 tomorrow about Mary Shelley’s Frankenstein. I strolled past Boris Karloff’s blue plaque today, as I often do, erected on the wall above my local chippy. He was a Peckham Rye boy named William Henry Pratt. Happy Halloween.

______________________________________________________________

Since I’m the sort of atheist who believes that we can and should get on with religious folk, and because I have such high regard for Francis Spufford, I am in what I suspect is the minority of your readers in agreeing with the essence and much of the substance of what he says. It’s a shame, though, that he slightly spoils his case by repeating the spurious suggestion that theists and atheists are mirror images because of their yes/no belief in God. The null position for a proposition that an arbitrary entity exists for which there is no objective evidence or requirement and no obvious way of testing is not to shrug and say “well, I guess we just don’t know either way.” We are back to Russell’s teapot orbiting the Sun. The reason why the teapot argument won’t wash for religious belief is, as Spufford rightly says, because a belief in God is about so many other feelings, values and notions (including doubt and uncertainty), not ‘merely’ the issue of whether one can make the case objectively. While this subjectivity throws the likes of Sam Harris into paroxysms, it’s a part of human experience that we have to deal with.

Spufford is also a little too glib in dismissing the anger that religion arouses. The Guardian’s Comment is Free is a bad example, being a pathological little ecosystem to itself. Some of that anger stems from religious abuses to human rights and welfare, interference in public life, denial of scientific evidence, and oppression, conformity and censure. All of these malaises will, I am sure, be as deplored by Spufford as they are by non-believers. When religions show themselves capable of putting their own houses in order, it becomes so much easier for atheists to acknowledge (as we should) the good that they can also offer to believers and non-believers alike.

Thursday, October 25, 2012

Balazs Gyorffy (1938-2012)


I just heard that the solid-state physicist Balazs Gyorffy, an emeritus professor at Bristol, has died from cancer after a short illness. Balazs was a pioneer of first-principles calculations of electronic structure in alloys, and contributed to the theory of superconductivity in metals. But beyond his considerable scientific achievements, Balazs was an inspirational person, whose energy and passion made you imagine he would be immortal. He was a former Olympic swimmer, and was apparently swimming right up until his illness made it impossible. He was interested in everything, and was a wonderfully generous and supportive man. His attempts to teach me about Green’s functions while I was at Bristol never really succeeded, but he was extremely kind with his time and advice on Hungary when I was writing my novel The Sun and Moon Corrupted. Balazs was a refugee from the 1956 Hungarian uprising, and was an external member of the Hungarian Academy of Sciences. He was truly a unique man, and I shall be among the many others who will miss him greatly.

An old look at Milan


I have no reason for posting this old photo of Milan Cathedral except that I found it among a batch of old postcards (though the photo's an original) and think it is fabulous. I tend to like my cathedrals more minimalist, but this one is gloriously over the top.

Why cancer is smart

This is my most recent piece on BBC Future, though another goes up tomorrow.

_______________________________________________________________

Cancer is usually presented as a problem of cells becoming mindless replicators, proliferating without purpose or restraint. But that underestimates the foe, according to a new paper, whose authors argue that we’ll stand a better chance of combating it if we recognize that cancer cells are a lot smarter and operate as a cooperating community.

One of the authors, physicist Eshel Ben-Jacob of Tel Aviv University in Israel, has argued for some time that many single-celled organisms, whether they are tumour cells or gut bacteria, show a rudimentary form of social intelligence – an ability to act collectively in ways that adapt to the prevailing conditions, learn from experience and solve problems, all with the ‘aim’ of improving their chances of survival. He even believes there is evidence that they can modify their own genomes in beneficial ways.

Some of these ideas are controversial, but others are undeniable. One of the classic examples of a single-celled cooperator, the soil-dwelling slime mould Dictyostelium discoideum, survives a lack of warmth or moisture through cell-to-cell communication and coordinated behaviour. Some cells send out pulses of a chemical attractant that diffuses into the environment and triggers other cells to move towards them. The community of cells then forms into complex patterns, eventually clumping together into multicelled bodies that look like weird mushrooms. Some of these cells become spores, entering into a kind of suspended animation until conditions improve.

Many bacteria can engage in similar feats of communication and coordination, which can produce complex colony shapes such as vortex-like circulating blobs or exotic branching patterns. These displays of ‘social intelligence’ help the colonies survive adversity, sometimes to our cost. Biofilms, for example – robust, slimy surface coatings that harbour bacteria and can spread infection in hospitals – are manufactured through the cooperation of several different species.

But the same social intelligence that helps bacteria thrive can be manipulated to attack pathogenic varieties. As cyberwarfare experts know, disrupting communications can be deadly. Some strategies for protecting against dangerous bacteria now target their cell-to-cell communications, for example by introducing false signals that might induce cells to eat one another or to dissolve biofilms. So it pays to know what they’re saying to one another.

Ben-Jacob, Donald Coffey of the Johns Hopkins University School of Medicine in Baltimore and ‘biological physicist’ Herbert Levine of Rice University in Houston, Texas, think that we should be approaching cancer therapy this way too: not by aiming to kill off tumour cells with lethal doses of poisons or radiation, but by interrupting their conversations.

There are several indications that cancer cells thrive by cooperating. One trick that bacteria use for invading new territory, including other organisms, is to use a mode of cell-to-cell communication called quorum sensing to determine how densely populated their colony is: above a certain threshold, they might have sufficient strength in numbers to form biofilms or infect a host. Researchers have suggested that this process is similar to the way cancer cells spread during metastasis. Others think that group behaviour of cancer cells might explain why they can become so quickly resistant to drugs.

Cancer cells are very different from bacteria: they are rogue human cells – so-called eukaryotic cells, which have a separate compartment for the genetic material and are generally deemed a more advanced type of cell than ‘primitive’ bacteria, in which the chromosomes are just mixed up with everything else. Yet it’s been suggested that, when our cells turn cancerous and the normal processes regulating their growth break down, more primitive ‘single-celled’ styles of behaviour are unleashed.

Primitive perhaps – but still terrifyingly smart. Tumours can trick the body into making new blood vessels to nourish them. They can enslave healthy cells and turn them into decoys to evade the immune system. They even seem able to fool the immune system into helping the cancer to develop. It’s still not clear exactly how they do some of these things. The anthropomorphism that makes cancer cells evil enemies to be ‘fought’ risks distorting the challenge, but it’s not hard to see why researchers succumb to it.

Cancer cells resistant to drugs can and do emerge at random by natural selection in a population. But they may also have tricks that speed up mutation and boost the chances of resistant strains appearing. And they seem able to generate dormant, spore-like forms, as Dictyostelium discoideum and some bacteria do, that produce ‘time-bomb’ relapses even after cancer traces have disappeared in scans and blood tests.

So what’s to be done? Ben-Jacob and colleagues say that if we can crack the code of how cancer cells communicate, we might be able to subvert it. These cells seem to exchange chemical signals, including short strands of RNA, a nucleic acid known to control genes. They can even genetically modify and reprogramme healthy cells by dispatching segments of DNA. The researchers think that it might be possible to turn this crosstalk of tumour cells against them, inducing the cells to die or split apart spontaneously.

Meanwhile, if we can figure out what triggers the ‘awakening’ of dormant cancer cells, they might be tricked into revealing themselves at the wrong time, after the immune system has been boosted to destroy them in their vulnerable, newly aroused state. Ben-Jacob and colleagues suggest experiments that could probe how this switch from dormant to active cells comes about. Beyond this, perhaps we might commandeer harmless or even indigenous bacteria to act as spies and agent provocateurs, using their proven smartness to outwit and undermine that of cancer cells.

The ‘warfare’ analogy in cancer treatment is widely overplayed and potentially misleading, but in this case it has some value. It is often said that the nature of war has changed over the past several decades: it’s no longer about armies, superior firepower, and battlefield strategy, but about grappling with a more diffuse foe – indeed one loosely organized into ‘cells’ – by identifying and undermining channels of recruitment, communication and interaction. If it means anything to talk of a ‘war on cancer’, then perhaps here too we need to think about warfare in this new way.

Reference: E. Ben-Jacob, D. S. Coffey & H. Levine, Trends in Microbiology 20, 403-410 (2012).

Tuesday, October 16, 2012

Sweets in Boots

Here’s a piece I just wrote for the Guardian’s Comment is Free. Except in this case it isn’t, because comments have been prematurely terminated. That may be rectified soon, if you want to join the rush.

________________________________________

In the 13th century, £164 was an awful lot of money. But that’s how much the ailing Edward I spent on making over two thousand pounds in weight of medicinal syrups. Sugar was rare, and its very sweetness was taken as evidence of its medicinal value. Our word ‘treacle’ comes from theriac, a medieval cure-all made from roasted vipers, which could prevent swellings, clear intestinal blockages, remove skin blemishes and sores, cure fevers, heart trouble, dropsy, epilepsy and palsy, induce sleep, improve digestion, restore lost speech, convey strength and heal wounds. No wonder town authorities monitored the apothecaries who made it, to make sure they didn’t palm people off with substandard stuff.

We like a good laugh at medieval medicine, don’t we? Then we walk into the sweetie shops for grown-ups known as Boots to buy lozenges, pastilles and syrups (hmm, suspiciously olde words, now that I think about it) for our aches, coughs and sneezes. Of course, some of us consider this sugaring of the pill to be prima facie evidence of duping by the drug companies, and we go instead for the bitter natural cures, the Bach remedies and alcoholic tinctures which, like the medieval syphilis cure called guaiac, are made from twigs and wood, cost the earth, and taste vile.

Each to his own. I quite like the sugar rush. And I’m not surprised that Edward I did – on a medieval diet, a spoonful of sugar would probably work wonders for your metabolism, you’d feel like a new person for a few hours until your dropsy kicked in again. This, I surmise, must be why there is Benylin in my medicine cabinet. Because surely I didn’t – did I? – buy it because I thought it would make my cough any better?

An ‘expert panel’ convened by Which? magazine has just announced that “We spend billions on over-the-counter pharmacy products each year but we’ve found evidence of popular products making claims that our experts judged just aren’t backed by sufficient evidence.” Cough syrups are among the worst offenders. They sell like crazy in winter, are mostly sugar (including treacle), and probably do sod all, despite the surreally euphemistic claims of brands such as Benylin that they will make your cough “more productive”.

Let’s be fair – Boots, at least, never claimed otherwise. Its WebMD site admits that “The NHS says there’s not much scientific evidence that cough medicines work… The NHS says there are no shortcuts with coughs caused by viral infections. It just takes time for your body to fight off the infection.” Sure, if the syrup contains paracetamol, it might ease your aching head; if there’s any antihistamine in there, your streaming nose and eyes might dry up a bit. But if you want to soothe your throat, honey and lemon is at least as good – the Guardian’s told you that already.

The Which? report also questioned evidence that Seven Seas Jointcare tablets, Adios Herbal Slimming Tablets and Bach Rescue Remedy spray (to “restore inner calm”) do any good. Are you shocked yet?

Consumers deserve protection against charlatans, for sure. But as far as the over-the-counter pharmacy shelves are concerned, you might as well be expecting scientific evidence for palm reading. Can we, in this post-Ben Goldacre age, now ditch the simplistic view that medicine is about the evidence-based products of the pharmaceutical industry versus the crystal healers? That modern conceit ignores the entire history of medicine, in which folk belief, our wish for magical remedies, placebos, diet, fraud, abuse of authority, and the pressures of commerce have always played at least as big a role as anything resembling science. Modern drugs have made life longer and more bearable, but drug companies are no more above fixing the ‘evidence’ than some alternative cures are above ignoring it.

We’re right to be outraged at Big Pharma misbehaving, especially when their evasions and elisions concern drugs with potentially serious side-effects. But the sniffles and coughs that send us grazing in Boots are the little slings and arrows of life, and all we’re doing there is indulging in some pharmacological comfort eating. I’m a fan of analgesics, and my summers are made bearable by antihistamines, but a lot of the rest is merely lifestyle-targeted placebo. There’s no harm in that, but if we are going to be affronted when we find that those saccharine pills and potions won’t cure us, we’ve misunderstood the nature of the transaction.

The nobel art of matchmaking

I have a Nature news story on the economics Nobel prize. Here’s the pre-edited version.

________________________________________________

Two economists are rewarded for the theory and application of how to design markets for money-free transactions

The theory and practice of matching resources to those who need them, in cases where conventional market forces cannot determine the outcome, has won the Nobel prize in economics for Lloyd Shapley of the University of California at Los Angeles and Alvin Roth of Harvard University.

Their work on matching “has applications everywhere”, says economist Atila Abdulkadiroglu of Duke University in Durham, North Carolina. “Shapley's work laid the groundwork, and Roth's work brought the theory to life.”

“This is a terrific prize to a pair of very deserving scholars”, says economist Paul Milgrom of Stanford University in California.

The work of Shapley and Roth shows how to find optimal matches between people or institutions ‘trading’ in commodities that money can’t buy: how to allocate students to schools or universities, say, or to match organ donors to patients.

Universities can’t determine which students enrol simply by setting their fees arbitrarily high, since these are capped. And payments for organ donation are generally prohibited on ethical grounds. In such situations, how can one find matches that are stable, in the sense that no one considers they can do better by seeking a different match?

In the 1960s Shapley and his coworker David Gale analysed the most familiar match-making problem: marriage. They asked how ten men and ten women could be matched such that none would see any benefit in breaking the partnership to make a better match.

The answer was to let one group (say, the men) propose to their preferred partners. Each recipient provisionally holds on to the best proposal she has received so far and rejects the rest; rejected proposers then move on to their next choice. The process continues until no one wishes to make another proposal, whereupon the provisionally held proposals are finally accepted.

Shapley and Gale (who died in 2008) proved that this process will always lead to stable matching [1]. They also found, however, that it works to the advantage of the choosers – that is, those who make the proposals do better than those receiving them.
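In code, this ‘deferred acceptance’ procedure is remarkably compact. Here’s a minimal sketch (the names and preference lists are invented for illustration), with the men proposing:

```python
# A minimal sketch of Gale and Shapley's deferred-acceptance algorithm.
# The names and preference lists below are invented for illustration.
def stable_match(proposer_prefs, receiver_prefs):
    """Return a stable matching as {receiver: proposer}, with proposers proposing."""
    # rank[r][p] records how receiver r ranks proposer p (lower = preferred)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index to propose to
    held = {}   # receiver -> proposer whose offer she provisionally holds
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in held:
            held[r] = p                 # first offer: hold it provisionally
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])        # better offer: reject current holder
            held[r] = p
        else:
            free.append(p)              # offer rejected: p tries next choice
    return held

men = {"A": ["x", "y"], "B": ["y", "x"]}
women = {"x": ["B", "A"], "y": ["A", "B"]}
print(stable_match(men, women))  # here each man ends up with his first choice
```

The proposers’ advantage that Shapley and Gale identified is visible even in this tiny example: both men get their first choice, while neither woman does.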

“Without the framework Shapley and Gale introduced, we would not be able to think about these problems in sound theoretical terms”, says Abdulkadiroglu.

However, their work was considered little more than a neat academic result until, about 20 years later, Roth saw that it could be applied to situations in the real world. He found that the US National Resident Matching Program, a clearing house for allocating medical graduates to hospitals, used an algorithm similar to Shapley and Gale’s, which averted the problems that had arisen when hospitals offered students internships before the students even knew which area they would specialize in [2].

But he discovered that the same problem in the UK was addressed with quite different matching algorithms in different regions, some of which were stable and some not [3]. His work persuaded local health authorities to abandon inefficient, unstable practices.

Roth also helped to tailor such matching strategies to specific market conditions – for example, to adapt the allocation of students to hospitals to the constraint that, as more women graduated, students might often be looking for places as a couple. And he showed how to make these matching schemes immune to manipulation by either party in the transaction.

Roth and his coworkers also applied the Gale-Shapley algorithm to the allocation of pupils among schools. “He directly employs the theory in real-life problems”, says Abdulkadiroglu. “This is not a trivial task. Life brings in complications and institutional constraints that are difficult to imagine or study within the abstract world of theory.”

Shapley extended his analysis to cases where one of the parties in the transaction is passive, expressing no preferences – for example, in the allocation of student rooms. David Gale devised a scheme for finding a stable allocation called ‘top trading’, in which agents are given one object each but can swap them for their preferred choice. Satisfied swappers leave the market, and the others continue the swapping until everything has been allocated. In 1974 Shapley and Herbert Scarf showed that this process always led to stable solutions [4]. Roth has subsequently used this approach to match patients with organ donors.
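The top-trading procedure is also easy to sketch. In the toy example below (agents, rooms and preferences all invented), each agent points at the owner of its favourite still-available object; every cycle of pointers trades and leaves the market, and the process repeats:

```python
# A minimal sketch of the 'top trading' (top-trading-cycle) procedure that
# Shapley and Scarf analysed. Agents, rooms and preferences are invented.
def top_trading_cycles(prefs, owns):
    """prefs[agent]: objects ranked best-first; owns[agent]: its endowment."""
    owner = {obj: a for a, obj in owns.items()}
    allocation = {}
    remaining = set(prefs)
    while remaining:
        # each agent points at the owner of its favourite still-available object
        points_to = {a: owner[next(o for o in prefs[a] if owner[o] in remaining)]
                     for a in remaining}
        # follow the pointers from any agent until some agent repeats: a cycle
        a = next(iter(remaining))
        seen = []
        while a not in seen:
            seen.append(a)
            a = points_to[a]
        cycle = seen[seen.index(a):]
        # everyone in the cycle takes the endowment of the agent they point at
        for b in cycle:
            allocation[b] = owns[points_to[b]]
        remaining -= set(cycle)   # satisfied swappers leave the market
    return allocation

prefs = {"Ann": ["r2", "r1", "r3"], "Bob": ["r1", "r3", "r2"],
         "Cal": ["r3", "r1", "r2"]}
owns = {"Ann": "r1", "Bob": "r2", "Cal": "r3"}
print(top_trading_cycles(prefs, owns))  # Ann and Bob swap; Cal keeps his room
```

Here Ann and Bob each covet the other’s room and form a two-agent cycle, while Cal already holds his favourite and trades with himself – the kind of stable outcome Shapley and Scarf proved always exists.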

All of these situations are examples of so-called cooperative game theory, in which agents seek to align their choices, forming matches and coalitions – as opposed to the more familiar non-cooperative game theory that won Nobels for John Nash (1994), Thomas Schelling (2005) and others, in which agents act independently. “In my view, Shapley has made more than one prize-worthy contribution to game theory”, says Milgrom, “but his work on matching has the greatest economic significance.”

With economic theory signally failing to bring order and stability to the world’s financial markets, it’s notable that the Nobel committee has chosen to reward work that offers practical solutions in ‘markets’ in which money is of little consequence. The work of Shapley and Roth shows that there is room for economic theory outside the ruthless cut-and-thrust of money markets – and perhaps, indeed, that in a more cooperative world it can be more effective.

References
1. Gale, D. & Shapley, L. S. American Mathematical Monthly 69, 9-15 (1962).
2. Roth, A. E. Journal of Political Economy 92, 991-1016 (1984).
3. Roth, A. E. American Economic Review 81, 415-440 (1991).
4. Shapley, L.S. & Scarf, H. Journal of Mathematical Economics 1, 23-37 (1974).