Papes for the peeps – Aircraft Oil Contamination

This is the first in what I hope to be an ongoing series of posts explaining in simple terms what my scientific publications are about.

The first paper, "A Method for the Identification and Quantitation of Hydraulic Fluid Contamination of Turbine Engine Oils by Gas Chromatography – Chemical Ionisation Mass Spectrometry", published in the journal Lubrication Science, has just appeared in the print version, having been available online (paywalled) since June.

This paper is my first publication as first author. First-author publications are important for early career scientists, as they form part of demonstrating your capability as an independent researcher. First authorship also usually implies that you did the majority of the experimental work included in the publication, and wrote the bulk of the manuscript itself.

In a way, this publication is unusual in that it is not in any way related to my main research project, to which I devote more than 90% of my time. However, my lab has been searching for a solution to this problem for many years now, so it was an important piece of work in its own right.

What’s it all about?

It is relatively common for the lubricant systems of aircraft to become contaminated, through leaks or human error. In many aircraft, the turbine engine lubrication system is separate from the hydraulic system, and a different oil is used for each purpose. If the contamination occurs at a bulk level, it is easy to detect – the oils are different colours, and have different flash points and viscosities. But when the contamination is at a low level, these differences become diluted to the point that they are no longer useful indicators. Another difference between the two oils is the distinctive set of additives used in each. But using these to detect contamination is also unsatisfactory: because the additives are not present in consistent amounts in the oil, the contamination may be detected, but the amount can't be calculated.

In order to detect low levels of contamination, we exploited the chemical differences between these two lubricants. Often when we are analysing our samples, we use an instrument called a 'mass spectrometer' (MS). The MS breaks molecules apart into pieces, which, oddly enough, is a good way of figuring out what they looked like before they broke apart! But when the different oils are put into the MS, they break apart into exactly the same fragments, so we can't tell them apart. Annoying. To overcome this, we introduced methane gas into the MS, which attaches to the oil molecules before they are broken apart in the instrument. With the methane attached, the different oils now break apart differently, and we can tell them apart. Using this method we can detect contamination down to 0.5%, which is equivalent to about a teaspoon's worth of contaminant in a bucket.

A lot of this paper is about the chemical analysis technique that was used, and about proving that the method actually works, which is really important when you are developing a completely new way of measuring something. But it's not that interesting to explain: it basically involves running a lot of samples under specific conditions, repeating the same thing loads of times, and completing some statistical analysis on the data to make sure that the method is robust. We also made sure that the method worked on oils which were fresh, and on those which might have been used, heated or contaminated with water.

I’ve already seen the use of this new method have a real impact on the day-to-day operation of aircraft. It enables flight crews and aircraft maintainers to make more informed decisions about whether to fly or ground an aircraft with suspected contamination of the turbine oil. This is quite satisfying and rewarding for me, given that my main research project is slightly more esoteric, with applications likely to be many years in the future.

***Addendum 16/12/2012  The e-zine/newsletter Separations Now did a story on this paper, which can be found here.


The intersection of food and fuel chemistry – #foodchem carnival

CENtral Science is hosting a blog carnival from November 11–18 about food chemistry, #foodchem. As a non-US chemist, the Thanksgiving theme of the #foodchem carnival is not all that relevant to me, so I haven’t followed the questions and am instead writing about something I find interesting – an intersecting area of food and fuel chemistry.

At first glance, it might appear that foods and fuels are almost as far apart as you can get in the world of applied chemistry, but as this Harris cartoon (third one down, right-hand column) suggests, there are actually some interesting parallels.

Several years ago, I worked for a government organisation where I completed a significant project assessing the levels of trans fats in a range of supermarket foods. The fat content of the foods was profiled, down to the amounts and types of fats present (saturated, mono- and poly-unsaturated, and, of the unsaturates, cis and trans isomers). This also allowed us to check whether the labelling was correct, in cases where fat levels were reported by the manufacturer either voluntarily or as required.

In order to determine the amounts and types of fats present, the fat must first be extracted, or separated from the rest of the ingredients in the food. Often, especially with porous foodstuffs like cakes, breads and biscuits, the fat can be easily recovered by simply mixing the food with hexane or heptane, which readily dissolve fats. More difficult matrices such as chocolate, emulsified sauces and tinned meats require more exhaustive and technical extraction procedures.

The fat is isolated from the food still chemically intact, in the form of a triglyceride (see below). As the name suggests, it consists of a trio of fatty acids (in blue), linked together by glycerol (in pink). In this example, all of the fatty acids in the triglyceride are the same, but this doesn’t have to be the case: there can be a mixture of chain lengths and varying degrees of saturation. For this reason, it is necessary to break the triglyceride apart into its separate fatty acids for analysis. This is done by performing a very simple chemical reaction called transesterification, and here is where the link with fuels is revealed. Transesterification of oils and fats is the exact same reaction that is used to produce ‘biodiesel’, which is also known as FAME (fatty acid methyl esters) or ‘first generation alternate fuel’ within the fuel industry.

[Image: structure of a triglyceride – three fatty acids (blue) linked together by glycerol (pink)]
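For the chemically inclined, the overall reaction can be sketched like this. It's a simplified general scheme, not the specific conditions from any one procedure: methanol with a base catalyst such as NaOH is the typical combination, and R here stands in for the fatty acid chains (shown identical, as in the picture above):

```latex
\underbrace{\mathrm{C_3H_5(OOCR)_3}}_{\text{triglyceride}}
\;+\; 3\,\mathrm{CH_3OH}
\;\xrightarrow{\ \mathrm{NaOH\ (cat.)}\ }\;
\underbrace{3\,\mathrm{RCOOCH_3}}_{\text{FAME (biodiesel)}}
\;+\; \underbrace{\mathrm{C_3H_5(OH)_3}}_{\text{glycerol}}
```

Each of the three fatty acids swaps its glycerol linkage for a methyl group, giving three small methyl ester molecules (the FAME) plus free glycerol as a by-product.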

It is clear that biodiesel can easily be made from fats and oils sourced from virtually anywhere, although for commercial production these should ideally be non-food sources. Backyard biodiesel production is also not uncommon, and for many people with a connection to a restaurant or café producing waste oil, it can be a cheap and sustainable way to run a forgiving, diesel-fuelled vehicle. This, of course, should only be done with the correct equipment, PPE and sufficient knowledge to carry out the procedure safely.

In my lab, we have produced small quantities of biodiesel from high-fat foods, as part of an undergraduate student project and also as a science outreach opportunity – we use the samples in our lab tours.

Food source                                % total fat by weight   Biodiesel produced
1 × McDonalds double quarter pounder burger        17                   ~50 mL
1 × McDonalds large fries                          19                   ~25 mL
12 × Krispy Kreme doughnuts                        25                  ~100 mL
1 pack (13) Scotch finger biscuits                 21                   ~20 mL

As you can see, this is far from an economical or efficient way to produce fuel (a dozen doughnuts wouldn’t even get you a kilometre down the road), but it is a great experiment for students to develop their wet chemistry techniques and to think about the structures of common molecules. So next time you indulge in some fatty food, think about how, with a quick chemical reaction, you could convert your human fuel into fuel for your car or truck. Nifty!
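The doughnut claim is easy to sanity-check with some back-of-the-envelope Python. The fuel consumption figure here is an illustrative assumption for a fairly thirsty diesel vehicle, not a measured value:

```python
# How far would the biodiesel from a dozen doughnuts take you?
biodiesel_ml = 100              # yield from 12 doughnuts (from the table above)
consumption_l_per_100km = 10.0  # assumed diesel consumption of the vehicle

# Convert mL to L, then divide by consumption to get distance.
range_km = (biodiesel_ml / 1000) / consumption_l_per_100km * 100
print(f"Range on doughnut power: {range_km:.1f} km")
```

At that consumption you get roughly one kilometre – and a more frugal diesel would stretch it a little further, but either way you are better off eating the doughnuts.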


An Evening with Brian Cox

A few years ago, when I first discovered Professor Brian Cox, I wrote a gushy, crushy post about him on this very blog. Last night I had the good fortune of seeing him speak in person at the Melbourne Convention Centre, hosted by Melbourne Uni. And I’ll tell you what, all that gushy crushiness came rushing back!

The packed theatre was led on a journey through cosmology and the vastness of the universe, to particle physics and the incomprehensible weirdness of quantum field theory. We experienced the engineering masterpiece which is the Large Hadron Collider, and contemplated the origin of life and the diversity it’s spawned.

As almost an intermission to the talk, we were treated to a reworking of Monty Python’s The Galaxy Song, which Brian Cox and Eric Idle had rewritten for Brian’s new series Wonders of Life.

This morning, thinking back on all of the fantastic scientists, science communicators and Nobel Prize winners I have seen speak over the years, I think Brian’s talk last night could be the best I’ve ever seen. He speaks with passion and eloquence, uses PowerPoint in the way God intended, and uses fresh analogies rather than the same ones we see trotted out over and over. He managed to weave a series of disparate topics into a coherent story, making complex science approachable and fascinating.

Did I mention that we had awesome seats too? About 10 rows from the front.

[Image: our view of the stage, about 10 rows from the front]

Check the hashtag #BCoxMelb on Twitter for some more photos and comments from the night. The memories from last night will definitely remain in my mind for a long time; he sets such a high standard for all future science communicators to aspire to.


Video Project for Science Communication Course

Here lies the post in which I show you the video project I made for the science communication course I mentioned in my previous post. It’s supposed to be a Catalyst-style video about my research, pitched at a general audience. I tried to incorporate a lot of the techniques that Graham talked about in the course: narration, to-camera pieces, zooming, cutting, and ‘wallpaper’ (generic sciencey-looking lab footage). I’ve also tried to avoid the use of any jargon, which consequently rules out talking about any of the actual details of my research, but thems the breaks, eh?

I was really happy with the feedback I got from Graham, which was that it was a good video, especially considering it is the first one I’ve made. He said I used images that people could relate to, and explained the science in a manner that people were able to understand. Before he called to tell me this, I’d sent him an email joking that I really needed his opinions because, depending on his feedback, I was going to drop out of my PhD and enrol in film school. He did check that I was just kidding about this, which made sure I wasn’t going to get too big a head about it! The area he said I could improve on was making my to-camera and voiceover pieces more conversational and less scripted, which is valid and fair criticism. I certainly wasn’t prepared for how unnatural it would feel to do those things. I’ve even used the best takes from probably 5 or 6 repeats of each section, so clearly there is a lot of room for improvement.

So here it is, this is the first video I’ve EVER made. Seeing as this is the interwebz, I know you’re all going to be really nice and supportive about it and not say horrible things or troll me or make me feel embarrassed or point out mistakes that I’ve made. OK? THANKS.