Colour MRI, Agent Prion, Testing Testosterone

I’ve got some wide-ranging research to report in this week’s SpectroscopyNOW, including mineral tests, colour MRI, the Agent Smith of prions, and a new approach to spotting doped athletes.

New insights offered by near-infrared spectroscopy into the mineralogy of carbonate rocks could help improve the outlook for carbon capture and storage in efforts to reduce the effect of carbon dioxide emissions on the global climate. Personally, though, I think the real relevance of this work will be in understanding the minerals found on Mars and other planets rather than in some spurious and potentially misguided efforts to control the atmosphere.

Not everything is black and white, perhaps with the exception of MRI. Aside from the artificial colours that can be added by computer, MRI is a technique of contrasts and greyscales. However, that could all change soon thanks to the ongoing development of microscopic magnetic particles by researchers in the US who hope to bring a little colour to MRI.

Meanwhile, NMR spectroscopy (the original molecular MRI) has revealed significant differences between the infectious and non-infectious forms of prions, errant proteins that replicate by converting other proteins into copies of themselves. The finding could lead to new insights into how prions cause brain diseases, such as CJD, and may one day lead to a way to stop their spread.

Faster and more accurate testing of complex systems such as skin and other turbid media could soon be possible thanks to a laser boost for Raman spectroscopy. The technique has potential applications in pharmaceutical research, forensic science and security screening.

Another analytical boost comes with work being done at Argonne National Laboratory to develop a new super-bright source of X-rays, one hundred million times brighter than any currently operating laboratory source. The source will open up new avenues in materials science, such as the faster and more detailed analysis of high-temperature superconductors.

Finally, in the current specNOW issue, a new analytical approach to testing for testosterone and related steroids in body fluids could spot illicit doping of athletes at upcoming sports events.

Identifying Digital Gems

Sciencebase readers will likely be aware that when I cite a research paper, I usually use the DOI system, the Digital Object Identifier. This acts like a redirect service: it takes the unique identifier assigned to each research paper by its publisher (like the one in the example below) and passes it to a server that works out where the actual paper is on the web.

The DOI system has several handlers, and indeed, that’s one of its strengths: it is distributed. So, as long as you have the DOI, you can use any of the handlers (dx.doi.org, http://hdl.handle.net, http://hdl.nature.com/, etc.) to look up a paper of interest, e.g. http://dx.doi.org/10.1504/IJGENVI.2008.018637 will take you to a paper on water supplies on which I reported recently.

The DOI is a kind of hard-wired redirect for the actual URL of the object itself, which at the moment will usually be a research paper. It could, however, be any other digital object: an astronomical photograph, a chemical structure, or a genome sequence, for instance. In fact, thinking about it, a DOI could be used as a shorthand, a barcode if you like, for whole genomes, protein libraries, databases, molecular depositions.
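
To see that redirection in action, here’s a minimal sketch in Python, assuming only the standard library and that the dx.doi.org handler is reachable; the handler simply answers with an HTTP redirect to wherever the publisher currently hosts the object:

```python
import urllib.request

def resolve_doi(doi: str) -> str:
    """Follow the DOI handler's HTTP redirects and return the object's current URL."""
    with urllib.request.urlopen("https://dx.doi.org/" + doi) as response:
        return response.geturl()  # urlopen follows redirects automatically

# The water-supplies paper cited above:
print(resolve_doi("10.1504/IJGENVI.2008.018637"))
```

The point is that the printed URL can change whenever the publisher reorganises its site, while the DOI you cite stays the same.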

I’m not entirely sure why we also need the Library of Congress permalinks, the National Institutes of Health simplified web links, the likes of PURL, and all those URL-shortening systems like tinyURL and snipurl. A unified approach that works at the point of origin, with the creator of the digital object, something I’ve suggested previously under the coined name PaperID, would seem so much more straightforward.

One critical aspect of the DOI is that it ties to hard, unchanging, non-dynamic links (URLs) for any given paper, or other object. Over on the CrossTech blog, Tony Hammond raises an interesting point regarding one important difference between hard and soft links and the rank that material at the end of such a link will receive in the search engines. His post discusses DOI and related systems, such as PURL (the Persistent URL system), which also uses an intermediate resolution system to find a specific object at the end of a URL. There are other systems emerging such as OpenURL and LCCN permalinks, which seek to do something similar.

However, while Google still dominates online search, hard links will be the only way for a specific digital object to be given any weight in its results pages. Dynamic or soft links are discounted, or not counted at all, and so never rank in the way that material at the end of a hard link will.

Perhaps this doesn’t matter, as those scouring the literature will have their own databases to trawl that require their own ranking algorithms based on keywords chosen. But, I worry about serendipity. What of the student taking a random walk on the web for recreation or perhaps in the hope of finding an inspirational gem? If that gem is, to mix a metaphor, a moving target behind a soft link, then it is unlikely to rank in the SERPs and may never be seen.

Perhaps I’m being naive, and maybe students never surf the web in this way, looking for research papers of interest. However, with multidisciplinarity increasingly necessary in so many fields, it seems unlikely that gems are going to be unearthed through conventional literature searching of a parochial database that covers a limited range of journals and other resources.

Taste Sensation

I wrote about the effect of salt on the boiling point of water recently, and Sciencebase reader Derek Burney asked why cooks use salt when boiling vegetables, for instance, if the effect on boiling point, and so on cooking times, is so minimal, as I explained.

Well, the small amount of salt (sodium chloride) added to food has very, very little effect on the boiling point and so really does not affect how quickly the food cooks. The fundamental reason we like to cook with salt is that salt not only has its own taste, but also interferes with the bitter-taste receptors on the tongue, essentially blocking them temporarily, masking the taste of any bitter compounds in the food you eat and therefore emphasising any sweet tastes. It really is purely there as a flavour enhancer. Try it with some raw lettuce: eat one leaf plain and concentrate on the bitterness, then sprinkle some salt on a second leaf and eat that. Beyond the taste of the salt itself, you will notice the leaf actually tastes sweeter.
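
To put a number on just how small the boiling-point effect is, here’s a back-of-the-envelope calculation in Python using the standard boiling-point-elevation formula, delta-T = i × Kb × m; the teaspoon-in-a-pan quantities are my own illustrative assumptions:

```python
# Boiling-point elevation: delta_T = i * Kb * m
KB_WATER = 0.512         # ebullioscopic constant of water, K·kg/mol
VANT_HOFF_NACL = 2       # NaCl dissociates into two ions in solution
MOLAR_MASS_NACL = 58.44  # g/mol

def boiling_point_rise(grams_salt: float, kg_water: float) -> float:
    molality = (grams_salt / MOLAR_MASS_NACL) / kg_water  # mol salt per kg water
    return VANT_HOFF_NACL * KB_WATER * molality

# A generous teaspoon (about 6 g) of salt in a 2-litre pan of water:
print(f"{boiling_point_rise(6, 2):.3f} K")  # roughly 0.05 K -- negligible
```

You would need something like ten times that much salt to raise the boiling point by even half a degree, far more than any cook would use.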

Given how bitter and downright nasty some vegetables can taste raw – think Brussels sprouts, spring cabbage, turnip – and perhaps more so in days gone by when quality may have been even lower, it is easy to see why adding salt to the cooking pot would have become standard practice.

This was covered in New Scientist a while back. The sodium salt of glutamic acid, commonly known as MSG (monosodium glutamate), does even more: it has the bitter-blocking sodium ions, and it adds a frisson through the stimulating “deliciousness” (umami, from the Japanese word for savoury) of the glutamate. Some research indicates that there are umami receptors on the tongue, representing a fifth taste sensation alongside bitter, sweet, salt, and sour. Other research adds a sixth taste sensation to our tongues, claiming a receptor for fatty acids.

Given the highly stimulating effects of salt (as taste and bitter blocker), MSG, and fat (in the form of fatty acids) on our tongues it is perhaps no surprise then that salt-laden fatty foods taste so delicious.

Leveraged Knowledge Management

Several years ago, I was called on by a multinational producer of hygiene, food, and cleaning products to pay a visit to their research and information centre. My role was to play editorial consultant for content for their new Intranet.

You see, the company had lots of researchers in one building who were working hard on non-stick ice cream and insect-deterring shaving gel, while the information team were in a separate building trawling patents and fishing for pertinent technology news. Unfortunately, the two teams worked different shift systems and rarely met, and even when they did meet, they didn’t have much to say to each other. One group was more concerned with the latest formulation change that might result in a shift in the rheological properties of a new cheese-and-chocolate pie filling or a non-grip floor cleaner, while the other group was interested only in assessing prior art or working out Boolean strings for searching databases.

The new Intranet would, however, bring the two groups together. The system would provide regularly updated news and information about what each team was doing, forums for discussing ways to improve efficiency and, most importantly, offer them an area where ideas might be mashed up to help each see the possibilities of the other’s efforts. It would be pretty much an internal version of what we now know as online social networking. And, as with many of the mashups we see these days, it would, the reasoning went, allow the company to come up with novel products and marketing strategies based on the two sides of its knowledge.

At the heart of the solution was the concept of knowledge management. Indeed, the person in charge had KM somewhere in his job title and, unfortunately, the word leverage was also in his job description. [This post’s title is meant to be ironic, in case you didn’t spot it, db]

Now, before this consultancy work, I had not come across the abbreviation KM, nor the phrase knowledge management and had certainly never wittingly used the word leverage. To my mind it sounded like nothing more than market-speak. Nevertheless, I quickly worked out what it was meant to mean and undertook the task with my usual level of workaholic diligence.

Apparently, I’m not alone in my perspective on KM. But, more to the point, many people are unaware that the stuff with which they work is labelled knowledge, yet they can manage it nevertheless. In fact, that’s the main conclusion of a paper entitled “Dimensions of KM: they know not it’s called knowledge…but they can manage it!”, published recently in the International Journal of Teaching and Case Studies (2008, 1, 253).

In that paper, Sonal Minocha, a senior manager at Raffles University, Singapore, and George Stonehouse, of the Napier University Business School, Craiglockhart Campus, in Edinburgh, UK, explore how, for business students, the concept of knowledge is a fascinating one, as most of them wonder what is encompassed within “knowledge management” for it to be a subject. Yet these same students can manage knowledge almost without needing to know precisely what KM is. Perhaps that finding extends into the workplace too. Indeed, following a case study involving the fall and rise of a major vehicle manufacturer, the researchers come to the following conclusion:

Competitive advantage, however, depends upon the creation of new knowledge, based upon this learning, centred upon the development of new business, new ways of doing business, improved customer and supplier relationships, and the development and leverage of new knowledge-based core competencies.

Aside from the use of the word leverage and phrases like core competencies, that seems to be a fair conclusion.

Well, by now you may be wondering what happened to that proto-social network pioneered by the manufacturer of anti-shark surfboard wax and child-repellent screen cleaners. My consultancy work lasted a week during which time, I’m not proud to say, I taught the management team a thing or two about how not to flip a flip-chart and almost blinded the team leader with a laser pointer.

At the end of the week, we had a fairly solid outline for how to proceed with the new networking system. Unfortunately, the staff member in charge of the KM project to bring the research and information people together was leveraged out of the company soon after; for reasons unrelated to the incident with the flipchart and laser pointer, I should add.

Carbon Tet and Paradigm Shifts

Since tetrachloromethane is banned as an industrial solvent, avoiding its formation as a byproduct in the manufacture of other chlorocarbons is important; this week, The Alchemist learns that a lanthanum chloride catalyst could help with the cleanup. A paradigm shift in drug discovery could be approaching, as researchers working with proteins involved in Alzheimer’s disease have discovered an apparently novel approach to inhibiting the disease. In organic chemistry, the Alchemist hears that molecules are not quite as diverse as we first thought, while an Olympic analysis could help sports officials spot dopey athletes. Princeton scientists are focusing on a new approach to making microchips and, finally, an astronomer with a chemical bent has had a cometary mineral named for him.

More on this and all the links in my Alchemist column on ChemWeb

Lighting Up Genetic Disease

Genetic disease is a complicated affair. Scientists have spent years trying to find genetic markers for diseases as diverse as asthma, arthritis and cardiovascular disease. The trouble with such complex diseases is that none of them is simply the manifestation of a single genetic fault. They involve multiple genes, various other factors within the body and, of course, environmental factors outside the body.

There are some genetic diseases, however, that are apparently caused by nothing more than a single mutation in the human genome. If just one DNA base among the many thousands that make up a gene is swapped for the wrong one, production of the protein associated with that gene goes awry.

For example, sickle cell disease is caused by a point mutation in the gene for beta-haemoglobin. The mutation causes the amino acid valine to be used in place of glutamic acid at one position in the haemoglobin. This faulty protein cannot fold into its perfect active form, which in turn leads to a cascade of effects that ultimately results in faulty red blood cells and their associated health problems for sufferers. There are many other diseases associated with single point mutations, including achondroplasia, which is characterised by dwarfism; cystic fibrosis, in one sense, although many different single mutations may be involved; and hereditary haemochromatosis, an iron-overload disease.
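
As a toy illustration of how a single swapped base propagates to the protein, here is a short Python sketch; the two codons are the real beta-globin ones (codon 6, GAG to GTG), but the one-line codon table is pared down purely for this example:

```python
# Codon 6 of the beta-globin gene: GAG (glutamic acid) in the normal gene,
# GTG (valine) after the sickle-cell point mutation.
CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}  # just the two codons we need

normal = "GAG"
mutant = normal[0] + "T" + normal[2]  # swap the middle base: A -> T

print(CODON_TABLE[normal], "->", CODON_TABLE[mutant])  # Glu -> Val
```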

“To be able to study and diagnose such diseases with limited material from patients, there is a need for methods to detect point mutations in situ,” explains Carolina Wählby of the Department of Genetics and Pathology and the Centre for Image Analysis at Uppsala University, Sweden, and her colleagues Patrick Karlsson, Sara Henriksson, Chatarina Larsson, Mats Nilsson, and Ewert Bengtsson. Writing in the inaugural issue of the International Journal of Signal and Imaging Systems Engineering (2008, 1, 11-17), they point out that this is a problem that can be couched in terms of “finding cells, finding molecules, and finding patterns”.

The researchers explain that the molecular labelling techniques used by biologists in research into genetic diseases often just produce bright spots of light on an image of the sample. Usually, these bright spots, or signals, are formed by selective reactions that tag specific molecules of interest with fluorescent markers. Fluorescence microscopy is then used to take a closer look.

However, signals representing different types of molecules may be randomly distributed in the cells of a sample or may show systematic patterns. Such patterns may hint at specific, non-random localisations and functions of those molecules within the cell. The team suggests that the key to interpreting any patterns of bright spots lies in segmenting the image, applying signal detection, and, finally, analysing for patterns. This is not a trivial matter.

One solution to this non-trivial problem could lie in employing data-mining tools, but rather than extracting useful information from large databases, those tools would be used to extract information from digital images of cells captured using fluorescence microscopy. The spatial distribution patterns so retrieved would allow labelled molecular targets to be analysed in a way that builds on the latest probing and staining techniques.
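
To make the signal-detection step concrete, here is a minimal sketch, emphatically not the team’s own algorithm, of how bright spots might be picked out of a fluorescence image, assuming numpy and scipy are available; the toy image and threshold are invented:

```python
import numpy as np
from scipy import ndimage

def detect_spots(image, threshold):
    """Return (row, col) coordinates of bright, spot-like signals."""
    # Smooth away single-pixel noise, then keep pixels that are both
    # above the intensity threshold and the maximum of their neighbourhood.
    smoothed = ndimage.gaussian_filter(image, sigma=1.0)
    local_max = smoothed == ndimage.maximum_filter(smoothed, size=5)
    return np.argwhere(local_max & (smoothed > threshold))

# Toy image: two bright fluorescent "signals" on a dark background
img = np.zeros((64, 64))
img[10, 12] = img[40, 50] = 1.0
print(detect_spots(img, threshold=0.1))  # [[10 12], [40 50]]
```

The pattern-analysis stage would then operate on those coordinate lists, asking whether the detected signals cluster, co-localise, or spread randomly through the cell.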

Biological processes could thus be studied at the level of single molecules, and with sufficient precision to distinguish even closely similar variants of molecules, the researchers say, revealing the intercellular and sub-cellular context of molecules that would otherwise go undetected among myriad other chemicals.

The team has demonstrated proof of principle by developing an image-based data-mining system that can look for variants in the genetic information found in mitochondria (i.e. mitochondrial DNA, mtDNA). They point out that mtDNA is present in multiple copies in the mitochondria of the cell, is inherited together with cytoplasm during cell replication, and provides an excellent system for testing the detection of point mutations. They add that the same approach might also be used to detect infectious organisms or to study tumours.

Wählby, C., Karlsson, P., Henriksson, S., Larsson, C., Nilsson, M., Bengtsson, E. (2008). Finding cells, finding molecules, finding patterns. International Journal of Signal and Imaging Systems Engineering, 1(1), 11-17. DOI: 10.1504/IJSISE.2008.017768

Bird Flu Flap

I’m not entirely convinced that bird flu (avian influenza) is going to be the next big emergent disease that will wipe out thousands, if not millions, of people across the globe. SARS, after all, had nothing to do with avians, nor does HIV, and certainly not malaria, tuberculosis, MRSA, Escherichia coli O157, or any of dozens of virulent strains of disease that have killed and are killing millions of people.

There are just so many different types of host within which novel microbial organisms and parasites might be lurking, just waiting for humans to impinge on their marginal domains, to chop down that last tree, to hunt their predators to extinction, and to wreak all-round environmental havoc on their ecosystems, that it is actually only a matter of time before something far worse than avian influenza crawls out from under the metaphorical rock.

In the meantime, there is plenty to worry about on the bird flu front, but perhaps nothing for us to get into too much of a flap over, just yet.

According to a report on Australia’s ABC news, researchers have found that the infamous H5N1 strain of bird flu (which is deadly to birds) can mix with the common-or-garden human influenza virus. The news report tells us worryingly that, “A mutated virus combining human flu and bird flu is the nightmare strain which scientists fear could create a worldwide pandemic.”

Of course, the scientists have not discovered this mutant strain in the wild, they have simply demonstrated that it can happen in the proverbial Petri dish.

Meanwhile, bootiful UK turkey company – Bernard Matthews Foods – has called for an early warning system for impending invasions of avian influenza. A feature in Farmers Weekly Interactive says the company is urging the government and poultry industry to work together to establish an early warning system for migratory birds that may carry H5N1 avian flu. “Armed with this knowledge, free range turkey producers would be able to take measures to avoid contact between wild birds and poultry.” That’s all well and good, but what if a mutant strain really does emerge that also happens to be carried by wild (and domesticated) birds or, more scarily, by another species altogether? Then no amount of H5N1 monitoring is going to protect those roaming turkeys.

While all this is going on, the Washington Post reports that the Hong Kong authorities announced Wednesday (June 10) that they are going to cull poultry in the territory’s retail markets because of fears of a dangerous bird flu outbreak. H5N1 virus was detected in chickens being sold from a stall in the Kowloon area and 2700 birds were slaughtered there to prevent its spread. In closely related news, the International Herald Tribune has reported that there has been an outbreak of bird flu in North Korea. “Bird flu has broken out near a North Korean military base in the first reported case of the disease in the country since 2005, a South Korean aid group said Wednesday.” But, note, “since 2005”, which means it happened before, and we didn’t then see the rapid emergence of the killer strain the media scaremongers are almost choking to see.

Finally, the ever-intriguing Arkansas Democrat Gazette reported, with the rather uninspiring headline: Test shows bird flu in hens. Apparently, a sample from a hen flock destroyed near West Fork, Arkansas, tested positive for avian influenza. A little lower down the page we learn that the strain involved is the far less worrisome H7N3. So, avian influenza is yet to crack the US big time. Thankfully.

Giving Obesity the CHOP

I am once again drawn to research from a team at the University of Westminster, a renowned institution that doles out so-called science degrees in homeopathy. This time the paper in question, published in the inaugural issue of the International Journal of Food Safety, Nutrition and Public Health (2008, vol 1, issue 1, pp 16-32), is on that perennial favourite: what to do about the obesity epidemic.

Ihab Tewfik, a senior lecturer in the School of Biosciences at Westminster, reports that “the prevalence and severity of people suffering from obesity has increased markedly worldwide,” and adds that “The WHO declared obesity a ‘crisis of epidemic proportion’.” There is nothing in those statements of which I can be too critical, except for one small point.

While obesity and the diseases and disorders for which it is purportedly a risk factor – type II diabetes, hypertension, hyperlipidemia, atherosclerosis, cardiovascular disease, stroke, and heart attack – are almost certainly on the increase in North America, Western Europe, and pockets of the Pacific Rim, the use of the term “worldwide” is rather ironic. This is especially true given that the WHO and other international organizations consistently report huge numbers of cases of disease, malnourishment, and poor water supply across great tracts of the earth’s surface, from Africa and South America to Asia and the former Soviet Union.

Anyway, Tewfik and colleagues have proposed a conceptual framework for a three-year intervention programme that could be adapted to the prevention of childhood obesity, which is a growing problem in many parts of the world, if not quite worldwide.

Ironically, they have named the framework, with one of those shoehorned acronyms, CHOP, for Childhood Obesity Prevention, and explain their approach as follows:

The approach is based on a behaviour modification model without giving foods. Family, school and children are essential counterparts to achieve meaningful improvement. Advocated by policies makers and embraced with favourite environmental factors, CHOP programme could be the conceptual framework for nutrition intervention that can be effectively integrated within the national health framework to attain public health goals.

Apparently, what this boils down to is giving children healthy foods, increasing physical activity, limiting TV and other screen time, implementing a non-food reward system, and allowing self-monitoring. As part of this approach, schools will teach children that they should eat five portions of fruit and vegetables each day, cut the amount of fat they eat, limit their screen time, and be active every day.

It all sounds like good, solid advice. Indeed, it’s the kind of advice the medical profession, nannyish governments, and even grandparents, have been offering for decades. Unfortunately, growing children are notoriously reluctant to take advice, especially when it comes to avoiding sweets and crisps, eating their greens, and switching off the Playstation (other gaming consoles are available).

The Westminster researchers, however, suggest their CHOP system would be workable once the appropriate team, policies and resources have been successfully assembled. One has to wonder at whose cost these resources might be assembled. They do concede that, “In some circumstances this conceptual framework may be regarded to be too ambitious to attempt de novo within three years especially in some developing countries, where lack of access to health care, to drinkable water, to food, to education and housing is prevalent.”

It’s probably not necessary to implement it in places where food is in limited supply, surely. But, even in apparently developed nations, I’d suggest that costs will be severely prohibitive and children will be reluctant to partake (what positive rewards will replace treats and screen time?). Moreover, by their own admission, obesity is climbing rapidly among adults too, and one has to wonder how parents and carers of increasingly obese children will themselves be persuaded to take part if they do not appreciate the potential benefits.

Tewfik, I. (2008). Childhood Obesity Prevention (CHOP) programme: a conceptual framework for nutrition intervention. International Journal of Food Safety, Nutrition and Public Health, 1(1), 16-32. DOI: 10.1504/IJFSNPH.2008.018853

Evolution 2.0

Evolutionary science needs debugging. Apparently, there are a few issues that cannot be resolved with any precision when one asks questions like: what makes a human different from a chimp? At the level of genetic sequences, systematic errors creep into any analysis, distorting our picture of ancestry.

Now, researchers from the European Molecular Biology Laboratory’s European Bioinformatics Institute have revealed the source of these systematic errors in comparative genetic sequencing and have devised a new computational tool that avoids these errors and provides accurate insights into the evolution of DNA and protein sequences. Their work suggests that sequence turnover is much more common than assumed.

“Evolution is happening so slowly that we cannot study it by simply watching it,” explains Nick Goldman, group leader at EMBL-EBI, “we learn about the relationships between species and the course and mechanism of evolution by comparing genetic sequences.”

At the core of the evolutionary process are random changes in the DNA of all living things: the incorrect copying of a single DNA base (substitution), the loss of a base (deletion), and the inadvertent insertion of a new base. Such changes can lead to functional and structural changes in genes and proteins. If a mutation confers a reproductive advantage on the individual concerned, then it will be carried on to the next generation. The accumulation of enough mutations over the course of many generations leads to the formation of new species. Reconstructing the history of these mutation events reveals the course of evolution.

To compare two DNA sequences, researchers first align them, matching the stretches that share common ancestry; insertions and deletions are then marked as gaps. This computationally demanding process, originally designed for analysing proteins (hence its limitations), is usually carried out progressively from several pairwise alignments. However, scientists cannot judge from a pairwise alignment alone whether a particular length difference between two sequences is a deletion in one or an insertion in the other. For correct alignment of multiple sequences, distinguishing between these two events is crucial, which is where those systematic errors creep in.

“Our new method gets around these errors by taking into account what we already know about evolutionary relationships,” explains Ari Löytynoja, who developed the new computational tool in Goldman’s lab. “Say we are comparing the DNA of human and chimp and cannot tell if a deletion or an insertion happened. To solve this our tool automatically invokes information about the corresponding sequences in closely related species, such as gorilla or macaque. If they show the same gap as the chimp, this suggests an insertion in humans.”
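
Here’s a toy sketch of that outgroup logic in Python, not the actual EMBL-EBI tool, assuming the sequences are already aligned with ‘-’ marking gaps; the snippets themselves are invented:

```python
def classify_gap(human, chimp, outgroup, pos):
    """Use an outgroup sequence to decide whether an aligned gap column
    reflects an insertion or a deletion on the human lineage."""
    h, c, o = human[pos], chimp[pos], outgroup[pos]
    if h != "-" and c == "-" and o == "-":
        return "insertion in human"  # outgroup sides with chimp: base was gained
    if h == "-" and c != "-" and o != "-":
        return "deletion in human"   # outgroup sides with chimp: base was lost
    return "ambiguous without more species"

# Invented, pre-aligned snippets; '-' marks a gap:
human   = "ACGTTA"
chimp   = "ACG-TA"
gorilla = "ACG-TA"
print(classify_gap(human, chimp, gorilla, pos=3))  # insertion in human
```

With only the human-chimp pair, that column would be unresolvable; adding the gorilla sequence is what lets the event be assigned to one lineage.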

The team’s work (published in the June 20 issue of Science) suggests that insertions are much more common than previously assumed, while deletion numbers have been overestimated. Now that tools are being developed to reveal such issues, our understanding of evolution can only become clearer.

Alcohol Causes Cancer

It’s quite illuminating that the following study has not yet reached the wider media. Without wishing to be too cynical, I do wonder whether that’s because the journal in which the work is published does not use the highly aggressive press office and marketing machine of so many other medical journals, which never seem to be out of the news. The results in this paper are just as important as, and their implications perhaps even more far-reaching than, many other results that attract instantaneous (under embargo) media attention. Anyway, take a look and judge for yourself, and let me know afterwards if you think the headline for this post is way off the mark.

Alcohol blamed for oral cancer risk – A large-scale statistical analysis of mouth and throat cancer incidence over a long period of time has looked at possible correlations between exposure to industrial chemicals, dust and alcoholic beverages in a wide variety of individuals in different occupations across Finland. The perhaps surprising conclusion drawn is that alcohol consumption rather than industrial chemicals or dusts is the critical factor associated with this form of cancer. Get the full story in this week’s edition of my SpectroscopyNOW column here.

I suppose it’s a little ironic that in the same edition of Spec Now, I’m also writing about how to make beer taste fresher and last longer on the shelf. NMR spectroscopy and a chromatography sniff test have yielded results that could help brewers improve the flavour and shelf-life of beer, thanks to work by scientists in Venezuela. The team has identified alpha-dicarbonyls as important compounds that reduce beer’s flavour, pointing to a new approach to brewing beer that stays fresher for longer. Take a sip here…

Meanwhile, another subject of mixed messages regarding health benefits is that perennial favourite, chocolate. To maintain the seductive and lustrous brown gloss of chocolate, so enticing to chocoholics the world over, food technologists must find a way to prevent fat bloom from forming and turning the surface an unappealing grey. Now, scientists from Canada and Sweden have found new clues to understanding the microstructure of chocolate and what happens when it turns grey with age. More…

Finally, some straight chemistry with absolutely no hint of biomedicine, health, or pharmaceutical implications (yet). A novel structure studied using X-ray crystallography hints at the possibility of a carbon atom that, at first sight, seems to be a little different from the conventional textbook view. Could the oldest rule of organic chemistry have been broken at last, or is a short atomic separation being equated too keenly with the presence of a bond, or could there be something else afoot, as Steve Bachrach suggests? Read on…