Astronomy Without A Telescope – Stellar Quakes and Glitches

The upper crust of a neutron star is thought to be composed of crystallized iron. It may have centimeter-high mountains and experiences occasional ‘star quakes’, which may precede what is technically known as a glitch. These glitches and the subsequent post-glitch recovery period may offer some insight into the nature and behavior of the superfluid core of neutron stars.

The events leading up to a neutron star quake go something like this. All neutron stars tend to ‘spin down’ during their life cycle, as their magnetic field applies the brakes to the star’s spin. Magnetars, having particularly powerful magnetic fields, experience more powerful braking.

During this dynamic process, two conflicting forces operate on the geometry of the star. The very rapid spin tends to push out the star’s equator, making it an oblate spheroid. However, the star’s powerful gravity is also working to make the star conform to hydrostatic equilibrium (i.e. a sphere).

Thus, as the star spins down, its crust – which is reportedly 10 billion times the strength of steel – tends to buckle but not break. There may be a process like a tectonic shifting of crustal plates, which creates ‘mountains’ only centimeters high, although from a base extending for several kilometres over the star’s surface. This buckling may relieve some of the stresses the crust is experiencing – but, as the process continues, the tension builds up and up until the crust ‘gives’ suddenly.

The sudden collapse of a 10 centimeter high mountain on the surface of a neutron star is considered a possible candidate event for the generation of detectable gravitational waves – although none has yet been detected. But, even more dramatically, the quake event may be either coupled with – or perhaps even triggered by – a readjustment in the neutron star’s magnetic field.

It may be that the tectonic shifting of crustal segments works to ‘wind up’ the magnetic lines of force sticking out past the neutron star’s surface. Then, in a star quake event, there is a sudden and powerful energy release – which may be a result of the star’s magnetic field dropping to a lower energy level as the star’s geometry readjusts itself. This energy release involves a huge flash of x-rays and gamma rays.

In the case of a magnetar-type neutron star, this flash can outshine most other x-ray sources in the universe. Magnetar flashes also pump out substantial gamma rays – although these are referred to as soft gamma repeater (SGR) emissions to distinguish them from the more energetic gamma ray bursts (GRBs) resulting from a range of other phenomena in the universe.

However, ‘soft’ is a bit of a misnomer, as either burst type will kill you just as effectively if you are close enough. The magnetar SGR 1806-20 produced one of the largest SGR events on record in December 2004.

Along with the quake and the radiation burst, neutron stars may also experience a glitch – which is a sudden and temporary increase in the neutron star’s spin. This is partly a result of conservation of angular momentum as the star’s equator sucks itself in a bit (the old ‘skater pulls arms in’ analogy), but mathematical modeling suggests that this may not be sufficient to fully account for the temporary ‘spin up’ associated with a neutron star glitch.
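As a rough sketch of that skater effect: conservation of angular momentum links the spin-up directly to the contraction. The numbers below are my own illustrative assumptions (a uniform sphere and a typical glitch size of one part in a million), not figures from the modeling mentioned above – but they show how tiny the required radius change is.

```python
# Sketch: how much must a neutron star's radius shrink to explain a
# typical glitch by angular momentum conservation alone?
# Assumes a uniform sphere (I ∝ R²) and illustrative numbers.
R = 10e3                    # radius in metres (10 km, typical)
dOmega_over_Omega = 1e-6    # fractional spin-up of a typical glitch
# I * Omega is conserved and I ∝ R², so dOmega/Omega ≈ -2 dR/R
dR = 0.5 * dOmega_over_Omega * R
print(f"Radius contraction needed: {dR*1e3:.1f} mm")
```

A contraction of a few millimetres across a 10 kilometre star – which is why a crust collapse of centimetre-scale ‘mountains’ is in the right ballpark, even if it may not be the whole story.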

Theoretical model of a neutron star's interior. An iron crystal crust overlies a region of neutron-enriched atoms, below which is the degenerate matter of the core – where sub-atomic particles are stretched and twisted by magnetic and gravitational forces. Credit: Université Libre de Bruxelles (ULB).

González-Romero and Blázquez-Salcedo have proposed that an internal readjustment in the thermodynamics of the superfluid core may also play a role here, where the initial glitch heats the core and the post-glitch period involves the core and the crust achieving a new thermal equilibrium – at least until the next glitch.

Astronomy Without A Telescope – Making Sense Of The Neutron Zoo

The spectacular gravity of neutron stars offers great opportunities for thought experiments. For example, if you dropped an object from a height of 1 meter above a neutron star’s surface, it would hit the surface within a millionth of a second, having been accelerated to over 7 million kilometers an hour.
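You can check that thought experiment on the back of an envelope. The sketch below assumes a typical 1.4-solar-mass neutron star with a 10 km radius (illustrative values, not from any particular measurement):

```python
import math

# Back-of-envelope check of the 1-metre drop onto a neutron star,
# assuming 1.4 solar masses and a 10 km radius (illustrative values).
G = 6.674e-11          # m^3 kg^-1 s^-2
M = 1.4 * 1.989e30     # kg
R = 10e3               # m
g = G * M / R**2       # surface gravity, roughly 2e12 m/s^2
t = math.sqrt(2 * 1.0 / g)    # time to fall 1 m
v = g * t                     # impact speed
print(f"fall time ≈ {t:.2e} s, impact speed ≈ {v*3.6/1e6:.1f} million km/h")
```

The fall takes about a microsecond and the impact speed works out at around 7 million km/h, matching the figures above.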

But these days you should first be clear about what kind of neutron star you are talking about. With ever more sensitive x-ray equipment scanning the skies, notably the ten-year-old Chandra space telescope, a surprising diversity of neutron star types is emerging.

The traditional radio pulsar now has a number of diverse cousins, notably magnetars, which broadcast huge outbursts of high energy gamma and x-rays. The extraordinary magnetic fields of magnetars invoke a whole new set of thought experiments. If you were within 1000 kilometres of a magnetar, its intense magnetic field would tear you to pieces just from violent perturbation of your water molecules. Even at a safe distance of 200,000 kilometres, it would still wipe all the information off your credit card – which is pretty scary too.
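The reason a magnetar is still dangerous at such distances is how slowly a dipole field falls away (as 1/r³). A sketch, assuming an illustrative surface field of 10¹¹ tesla – at the upper end of magnetar estimates – and a 10 km stellar radius:

```python
# Dipole field falloff, B ∝ 1/r³, for an assumed magnetar surface
# field of 1e11 T and a 10 km stellar radius (illustrative values).
B_surface = 1e11   # tesla
R = 10e3           # m

def B(r):
    """Field strength at distance r from the star's centre."""
    return B_surface * (R / r)**3

print(f"at 1,000 km:   {B(1e6):.1e} T")
print(f"at 200,000 km: {B(2e8):.1e} T")
```

Even at 200,000 km the field is still of order a hundredth of a tesla – roughly a hundred gauss, comparable to holding a strong fridge magnet against the card.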

A neutron star is the compressed remnant left behind after a star goes supernova. It retains much of that star’s angular momentum, but within a highly compressed object only 10 to 20 kilometers in diameter. So, like ice skaters when they pull their arms in, neutron stars spin pretty fast.

Furthermore, compressing a star’s magnetic field into the much smaller volume of the neutron star increases the strength of that magnetic field substantially. However, these strong magnetic fields create drag against the star’s own stellar wind of charged particles, meaning that all neutron stars are in the process of ‘spinning down’.

This spin down correlates with an increase in luminosity, although much of it is at x-ray wavelengths. This is presumably because a fast spin pushes the star outwards, while a slower spin lets stellar material compress inwards – so, like a bicycle pump, it heats up. Hence the name rotation powered pulsar (RPP) for your ‘standard’ neutron star, where that beam of energy flashing at you once every rotation is a result of the braking action of the magnetic field on the star’s spin.

It’s been suggested that magnetars may just be a higher order of this same RPP effect. Victoria Kaspi has suggested it may be time to consider a ‘grand unified theory’ of neutron stars where all the various species might be explained by their initial conditions, particularly their initial magnetic field strength, as well as their age.

It’s likely that the progenitor star of a magnetar was a particularly big star which left behind a particularly big stellar remnant. Thus, these rarer ‘big’ neutron stars might all begin their lives as magnetars, radiating huge energies as their powerful magnetic fields put the brakes on their spin. But this dynamic activity means these big stars lose energy quickly, perhaps taking on the appearance of a very x-ray luminous, though otherwise unremarkable, RPP later in their lives.

Other neutron stars might begin life in less dramatic fashion, as the much more common and just averagely luminous RPPs, which spin down at a more leisurely rate – never achieving the extraordinary luminosities that magnetars are capable of, but managing to remain luminous for longer time periods.

The relatively quiet Central Compact Objects, which don’t seem to even pulse in radio anymore, could represent the end stage in the neutron star life cycle, beyond which the stars hit the dead line, where a highly degraded magnetic field is no longer able to apply the brakes to the stars’ spin. This removes the main cause of their characteristic luminosity and pulsar behaviour – so they just fade quietly away.

For now, this grand unification scheme remains a compelling idea – perhaps awaiting another ten years of Chandra observations to confirm or modify it further.

Astronomy Without A Telescope – Bringing The Planetology Home

We keep finding all these exoplanets. Our detection methods still only pick out the bigger ones, but we’re getting better at this all the time. One day in the not-too-distant future it is conceivable that we will find one with a surface gravity in the 1G range – orbiting its star in what we anthropomorphically call the Goldilocks zone, where water can exist in liquid phase.

So let’s say we find such a planet and then direct all our SETI gear towards it. We start detecting faint Morse-code-like beeps – inscrutable, but clearly of artificial origin. Knowing us, we’ll send out a probe. Knowing us, there will be a letter-writing campaign demanding that we adhere to the Prime Directive, and consequently this deep space probe will include some newly developed cloaking technology, so that it will arrive at the Goldilocks planet invisible and undetectable.

The probe takes quite a while to get there and, in transit, receives indications that the alien civilization is steadily advancing its technology as black and white sitcoms start coming through – and as all that is relayed back to us we are able to begin translating their communications into a range of ‘dialects’.

By the time the probe has arrived and settles into an invisible orbit, it’s apparent a problem is emerging on the planet. Many of its inhabitants have begun expressing concern that their advancing technology is beginning to have planetary effects, with respect to land clearing and atmospheric carbon loading.

From our distant and detached viewpoint we are able to see that anyone on the planet who thinks they live in a stable and unchanging environment just isn’t paying attention. There was a volcano just the other week, and their geologists keep finding ancient impact craters that have overturned whole ecosystems in their planet’s past.

It becomes apparent that the planet’s inhabitants are too close to the issues to be able to make a dispassionate assessment about what’s happening – or what to do about it. They are right that their technological advancement has bumped up CO2 levels from 280 ppm to over 380 ppm within only 150 years – to a level much higher than anything detectable in their ice core data, which goes back half a million years. But that’s about where the definitive data ends.

Credit: Rahmstorf. NASA data is from the GISS Surface Temperature Analysis. Hadley Centre data is from the Met Office Hadley Centre, UK.

Advocates for change draw graphs showing temperatures are rising, while conservatives argue this is just cherry-picking data from narrow time periods. After all, a brief rise might be lost in the background noise of a longer monitoring period – and just how reliable is 150 year old data anyway? Other more pragmatic individuals point to the benefits gained from their advanced technology, noting that you have to break a few eggs to make an omelet (or at least the equivalent alien cuisine).

Back on Earth our future selves smile wryly, having seen it all before. As well as interstellar probes and cloaking devices, we have developed a reliable form of Asimovian psychohistory. With this, it’s easy enough to calculate that the statistical probability of a global population adopting a coordinated risk management strategy in the absence of definitive, face-slapping evidence of an approaching calamity is exactly (datum removed to prevent corrupting the timeline).

Astronomy Without A Telescope – Astronomy On Ice

Well, here’s a bit of a first for AWAT, because this is a story about a telescope. But it’s not your average telescope, being composed of a huge chunk of Antarctic ice with a very large cosmic ray muon filter attached to the back of it, which is called the Earth.

Commenced in 2005, the IceCube Neutrino Observatory is now approaching completion with the recent installation of a key component, DeepCore. With DeepCore, the Antarctic observatory is able to observe the southern sky as well as the northern sky.

Neutrinos have no charge and interact only weakly with other kinds of matter, making them difficult to detect. The method employed by IceCube, and by many other neutrino detectors, is to look for Cherenkov radiation which, in the context of IceCube, is emitted when a neutrino interacts with an atom in the ice, creating a highly energized charged particle such as an electron or a muon – which shoots off at a speed greater than the speed of light, or at least greater than the speed of light in ice.
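That threshold condition – travel faster than light does in the medium – is easy to sketch. Assuming a refractive index of about 1.32 for optical light in ice and the standard muon mass (my illustrative values, not IceCube's own calibration figures):

```python
import math

# Cherenkov threshold sketch: a charged particle radiates when v > c/n.
# Assumes n ≈ 1.32 for optical light in ice; muon mass 105.7 MeV/c².
n_ice = 1.32
m_mu = 105.7                      # MeV/c²
beta_min = 1.0 / n_ice            # threshold speed, as a fraction of c
gamma_min = 1.0 / math.sqrt(1.0 - beta_min**2)
E_min = gamma_min * m_mu          # total energy at threshold
print(f"threshold: v > {beta_min:.2f} c, E > {E_min:.0f} MeV for a muon")
```

So a muon needs to carry upwards of roughly 160 MeV before it lights up the ice at all – anything slower stays dark.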

The advantage of using Antarctic ice as a neutrino detector is that it is available in large volumes, and thousands of years of glacial compression has squeezed most impurities out of it, making it a very dense, consistent and transparent medium. So, not only can you see the little flashes of Cherenkov radiation, but you can also make reliable predictions about the trajectory and energy level of the neutrino which caused each little flash.

The structure of IceCube incorporates strings of evenly spaced basketball-sized Cherenkov detectors lowered into the ice through drill holes to depths of nearly 2.5 kilometers. The DeepCore component is a more compact array of detectors, positioned in the clearest ice deep within IceCube, designed to enhance the sensitivity of IceCube for neutrino energies less than 1 TeV.

Prior to DeepCore being finished, it was only feasible to accurately measure the effects of upwardly moving neutrinos – that is, neutrinos that had already passed through the Earth and, if of a cosmic origin, had actually come from the northern sky. Any downwardly moving neutrinos from the southern sky were lost in noise created by cosmic ray muons which are able to penetrate IceCube, creating their own Cherenkov radiation without neutrinos being involved.

However, with the greater sensitivity offered by DeepCore, coupled with IceTop, which is a set of surface level Cherenkov detectors able to differentiate external muons entering from the surface, it is now possible for IceCube to make neutrino observations of the southern sky as well.

Adapted from Halzen (2009, arXiv:0911.2676)

IceCube’s key scientific goal is to identify neutrino point sources in the sky, which may include supernovae and gamma ray bursts. Neutrinos are thought to account for 99% of the energy release of a Type II supernova – suggesting that we may be missing a lot of information when we focus only on the electromagnetic radiation that is emitted.

It is also speculated that IceCube might provide indirect evidence of dark matter. The thinking is that if dark matter particles were captured in the centre of the Sun, the extreme compression there would concentrate them enough to annihilate with each other. Such events should produce a burst of high energy neutrinos, independent of the normal neutrino output resulting from solar fusion reactions. That’s a long chain of suppositions to gain indirect evidence of something, but we’ll see.

Astronomy Without A Telescope – The Nice Way To Build A Solar System

When considering how the solar system formed, there are a number of problems with the idea of planets just blobbing together out of a rotating accretion disk. The Nice model (and OK, it’s pronounced ‘niece’ – as in the French city) offers a better solution.

In the traditional Kant/Laplace solar nebula model you have a rotating protoplanetary disk within which loosely associated objects build up into planetesimals, which then become gravitationally powerful centres of mass capable of clearing their orbits – and voila, planet!

It’s generally agreed now that this just can’t work, since a growing planetesimal, in the process of constantly interacting with protoplanetary disk material, will have its orbit progressively decay so that it spirals inwards, potentially crashing into the Sun unless it can clear an orbit before it has lost too much angular momentum.

The Nice solution is to accept that most planets probably did form in different regions to where they orbit now. It’s likely that the current rocky planets of our solar system formed somewhat further out and have moved inwards due to interactions with protoplanetary disk material in the very early stages of the solar system’s formation.

It is likely that within 100 million years of the Sun’s ignition, a large number of rocky protoplanets, in eccentric and chaotic orbits, engaged in collisions – followed by the inward migration of the last four planets left standing as they lost angular momentum to the persisting gas and dust of the inner disk. This last phase may have stabilised them into the almost circular, and only marginally eccentric, orbits we see today.

The hypothesized collision between 'Earth Mk 1' and Theia may have occurred late in rocky planet formation, creating the Earth as we know it with its huge Moon of accreted impact debris.

Meanwhile, the gas giants were forming out beyond the ‘frost line’ where it was cool enough for ices to form. Since water, methane and CO2 were a lot more abundant than iron, nickel or silicon – icy planetary cores grew fast and grew big, reaching a scale where their gravity was powerful enough to hold onto the hydrogen and helium that was also present in abundance in the protoplanetary disk. This allowed these planets to grow to an enormous size.

Jupiter probably began forming within only 3 million years of solar ignition, rapidly clearing its orbit, which stopped it from migrating further inward. Saturn’s ice core grabbed whatever gases Jupiter didn’t – and Uranus and Neptune soaked up the dregs. Uranus and Neptune are thought to have formed much closer to the Sun than they are now – and in reverse order, with Neptune closer in than Uranus.

And then, around 500 million years after solar ignition, something remarkable happened. Jupiter and Saturn settled into a 2:1 orbital resonance – meaning that Jupiter completed exactly two orbits for every one of Saturn’s, so the two planets repeatedly lined up at the same points. This created a gravitational pulse that kicked Neptune out past Uranus, so that it ploughed into what was then a closer and denser Kuiper Belt.
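Kepler's third law (P² ∝ a³) shows where that resonance sits. Using present-day semi-major axes as illustrative inputs (the planets were elsewhere during the migration, of course):

```python
# Kepler's third law sketch of the Jupiter–Saturn 2:1 resonance.
# Semi-major axes in AU; for a body orbiting the Sun, P (years) = a**1.5.
a_jupiter = 5.20
a_saturn_today = 9.58

def period_years(a_au):
    return a_au ** 1.5

ratio_today = period_years(a_saturn_today) / period_years(a_jupiter)
# Where would Saturn sit for an exact 2:1 period ratio with Jupiter?
a_resonance = a_jupiter * 2 ** (2 / 3)
print(f"period ratio today: {ratio_today:.2f}")
print(f"2:1 resonance location for Saturn: {a_resonance:.2f} AU")
```

Today the ratio is about 2.5:1, with the exact 2:1 point lying around 8.3 AU – inside Saturn's current orbit. So as the two giants migrated outwards, they had to sweep through the resonance on the way to their present positions.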

The result was a chaotic flurry of Kuiper Belt Objects, many being either flung outwards towards the Oort cloud or flung inwards towards the inner solar system. These, along with a rain of asteroids from a gravitationally disrupted asteroid belt, delivered the Late Heavy Bombardment which pummelled the inner solar system for several hundred million years – the devastation of which is still apparent on the surfaces of the Moon and Mercury today.

Then, as the dust finally settled around 3.8 billion years ago and as a new day dawned on the third rock from the Sun – voila life!

Astronomy Without A Telescope – One Potato, Two Potato

Sometimes it’s good to take a break from mind-stretching cosmology models, quantum entanglements or events at 10⁻²³ seconds after the big bang and get back to some astronomy basics. For example, the vexing issue of the potato radius.

At the recent 2010 Australian Space Science Conference, it was proposed by Lineweaver and Norman that all naturally occurring objects in the universe adopt one of five basic shapes depending on their size, mass and dynamics. Small and low mass objects can be considered Dust – being irregular shapes governed primarily by electromagnetic forces. 

Next up are Potatoes, being objects where accretion by gravity begins to have some effect, though not as much as in the more massive Spheres – which, to quote the International Astronomical Union’s second law of planets, ‘has sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape’.

Objects of the scale of molecular dust clouds will collapse down into Disks where the sheer volume of accreting material means that much of it can only rotate in a holding pattern around and towards the centre of mass. Such objects may evolve into a star with orbiting planets (or not), but the initial disk structure seems to be a mandatory step in the formation of objects at this scale. 

At the galactic scale you may still have disk structures, such as a spiral galaxy, but usually such large scale structures are too diffuse to form accretion disks and instead cluster in Halos – of which the central bulge of a spiral galaxy is one example. Other examples are globular clusters, elliptical galaxies and even galactic clusters. 

The proposed five major forms that accumulated matter adopts in our universe. Credit: Lineweaver and Norman.

The authors then investigated the potato radius, or Rpot, to identify the transition point from Potato to Sphere, which would also represent the transition point from small celestial object to dwarf planet. Two key issues emerged in their analysis. 

Firstly, it is not necessary to assume a surface gravity of a magnitude necessary to generate hydrostatic equilibrium. For example, on Earth such rock crushing forces only act at 10 kilometres or more below the surface – or to look at it another way you can have a mountain on Earth the size of Everest (9 kilometres), but anything higher will begin to collapse back towards the planet’s roughly spheroid shape. So, there is an acceptable margin where a sphere can still be considered a sphere even if it does not demonstrate complete hydrostatic equilibrium across its entire structure. 
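The Everest limit falls out of a simple balance: rock starts to fail when the pressure at a mountain's base exceeds its yield strength, giving h_max ≈ σ_y / (ρ g). The numbers below are my own illustrative values for crustal rock, not figures from the paper:

```python
# Rough maximum mountain height on Earth: rock fails when basal
# pressure exceeds its yield strength, so h_max ≈ sigma_y / (rho * g).
# Illustrative values for crustal rock.
sigma_y = 2e8      # Pa, roughly the crushing strength of granite
rho = 2700         # kg/m³, typical crustal rock density
g = 9.8            # m/s², Earth surface gravity
h_max = sigma_y / (rho * g)
print(f"h_max ≈ {h_max/1e3:.0f} km")
```

That lands in the 7–8 km range – the same ballpark as Everest, which is why anything much taller would start to slump back towards the spheroid.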

Secondly, the differential strength of molecular bonds affects the yield strength of a particular material (i.e. its resistance to gravitational collapse). 

On this basis, the authors conclude that Rpot for rocky objects is 300 kilometres. However, Rpot for icy objects is only 200 kilometres, due to their weaker yield strength, meaning they more easily conform to a spheroidal shape with less self-gravity. 

Since Ceres is the only asteroid with a radius greater than Rpot for rocky objects, we should not expect any more dwarf planets to be identified in the asteroid belt. But applying the 200 kilometre Rpot for icy bodies means there may be a whole bunch of trans-Neptunian objects out there ready to take on the title.
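The proposed rule is simple enough to apply yourself. Here is a sketch using approximate published radii (the classifications are only illustrative of the Rpot idea, not official IAU rulings):

```python
# Flag dwarf-planet candidates using the proposed potato radii:
# 300 km for rocky bodies, 200 km for icy ones. Radii approximate.
R_POT = {"rocky": 300, "icy": 200}   # km

bodies = [
    ("Ceres",  "rocky", 470),
    ("Vesta",  "rocky", 263),
    ("Pallas", "rocky", 256),
    ("Quaoar", "icy",   555),
    ("Orcus",  "icy",   460),
]
for name, kind, radius_km in bodies:
    verdict = "sphere (candidate)" if radius_km > R_POT[kind] else "potato"
    print(f"{name:7s} {radius_km:4d} km ({kind:5s}) -> {verdict}")
```

On these numbers only Ceres clears the rocky bar, while icy trans-Neptunian bodies like Quaoar and Orcus sail over the lower icy threshold.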

Astronomy Without A Telescope – Is An Anomalous Anomaly A Normality?

The lack of any flyby anomaly effect when the Rosetta spacecraft passed Earth in November 2009 is what, an anomaly? No. Anomalies arise when there is a mismatch between a predicted and an observed value. When that happens, our first thought shouldn’t be OMG, there’s something wrong with physics! We should probably start by reviewing whether we really got the math right.

The flyby anomaly story starts with the Galileo spacecraft’s flyby of Earth in December 1990 – where it was measured to have gained a speed increase (at least, an increase over the predicted value) of 2.5 millimeters per second at perigee. In its second pass, in December 1992, the predicted value matched the observed value, although it has been suggested that atmospheric drag effects confound any analysis of this particular flyby.

The next, and biggest, anomaly detected so far was the NEAR spacecraft’s flyby in 1998 (a whopping 7.2 millimeters per second increase over the predicted value at perigee). After that you have Rosetta showing an anomaly on its first flyby in 2005. Then a quantitative formula, which aimed to model the various flybys to date, was developed by Anderson et al in 2007 – predicting that a small but detectable speed increase would be found in Rosetta’s second flyby of 13 November 2007. However (or should I say anomalously), no such increase was detected in this, or in Rosetta’s third (2009), pass.
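Anderson et al's empirical formula ties the anomaly to how much the flyby changes the trajectory's declination relative to Earth's spin axis. A sketch, noting that the declinations for NEAR below are approximate values from the published work, and that the formula gives the change in the asymptotic velocity – a larger figure than the at-perigee value quoted above:

```python
import math

# Anderson et al.'s empirical flyby formula:
#   dV/V = K * (cos(delta_in) - cos(delta_out)),  K = 2 * omega_E * R_E / c
# where delta_in/out are the declinations of the incoming and
# outgoing asymptotic trajectories.
omega_E = 7.292e-5   # Earth's rotation rate, rad/s
R_E = 6.371e6        # Earth radius, m
c = 2.998e8          # speed of light, m/s
K = 2 * omega_E * R_E / c

def dv(v_inf, delta_in_deg, delta_out_deg):
    """Predicted anomalous velocity change in m/s."""
    return K * (math.cos(math.radians(delta_in_deg))
                - math.cos(math.radians(delta_out_deg))) * v_inf

print(f"K ≈ {K:.3e}")
# NEAR flyby, approximate values: v_inf ≈ 6.85 km/s, declinations ≈ -21°/-72°
print(f"NEAR asymptotic anomaly ≈ {dv(6851, -20.8, -72.0)*1e3:.1f} mm/s")
```

The dimensionless constant K works out at about 3×10⁻⁶, and for NEAR's geometry the formula yields an asymptotic anomaly of roughly 13 mm/s – in line with the published asymptotic figure for that flyby.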

So, on balance, our spacecraft (and often the same spacecraft) are more likely to behave as predicted than to behave anomalously. This reduces (though does not negate) the likelihood of the anomaly being anything of substance. One might sagely state that the intermittent absence of an anomaly is not in itself anomalous.

More recently, Mbelek in 2009 has proposed that the anomalous flyby data (including Anderson et al’s formula) can be explained by a more rigorous application of special relativity principles, concluding that ‘spacecraft flybys of heavenly bodies may be viewed as a new test of SR which has proven to be successful near the Earth’. If such recalculated predicted values match observed values in future flybys, that would seem to be that.

Pioneer 10 - launched in 1972 and now making its way out towards the star Aldebaran, which it should pass in about 2 million years. Credit: NASA

Then there’s the Pioneer anomaly. This has no obvious connection with the flyby anomaly, apart from a common use of the word anomaly, which gives us another epistemological maxim – two unrelated anomalies do not one bigger anomaly make.

Between around 20 and 70 AU out from the Sun, Pioneer 10 and 11 both showed tiny but unexpected decelerations of around 0.8 nanometers per second squared – although, again, we are just talking about an observed value that differed from a predicted value.
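To get a sense of how tiny that deceleration is, it helps to accumulate it over time (a simple constant-acceleration sketch):

```python
# How tiny is 0.8 nm/s² ? Accumulated velocity change per year,
# treating the anomalous deceleration as constant.
a = 0.8e-9            # m/s², the Pioneer anomalous deceleration
year = 3.156e7        # seconds in a year
dv_per_year = a * year
print(f"≈ {dv_per_year*100:.1f} cm/s lost per year")
```

That is only a couple of centimetres per second per year – under a metre per second across three decades of flight – yet Doppler tracking is precise enough to notice it.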

Some key variables not considered in calculating the original predicted value were radiation pressure from sunlight-heated surfaces, as well as internal radiation generated by the spacecrafts’ own radioisotope thermoelectric generator (RTG) power sources. A Planetary Society update of an ongoing review of the Pioneer data indicated that revised predicted values now show less discrepancy from the observed values. Again, this doesn’t yet negate the anomaly – but given the trend for more scrutiny to mean less discrepancy, it’s fair to say that this anomaly is also becoming less substantial.

Don’t get me wrong, this is all very useful science, teaching us more about how our spacecraft operate out there in the field. I am just suggesting that when faced with a data anomaly perhaps our first reaction should be Doh! rather than OMG!

Astronomy Without A Telescope – Say No To Mass Extinction

Artist's impression of a gravity tug - a species and ecosystem saving device we haven't built yet. Credit: Durda/BBC News


You may have heard that there is an 86 per cent chance that in a mere million years or so Gliese 710 will drift close enough to the solar system to perturb the Oort cloud and perhaps send a rain of comets down into the inner solar system. 

Also, you have probably heard that there are hints of a certain periodicity in mass extinction events, perhaps linked to the solar system moving through the denser parts of the galactic disk, increasing the probability of similar close encounters. 

So, the big bad is coming… sometime. It might just be a stray asteroid that’s in the wrong place at the wrong time and have little to do with what’s happening outside the solar system. In any case, we need to stay calm and carry on – and maybe print the following handy survival tips on a fridge magnet.  

Idealised fridge magnet - for us or whoever comes next.

Immediate action: Fund sky surveys.

The Spaceguard Survey is underway, aiming to identify near Earth objects down to 140 meters in size. At its present rate the survey might be finished in ten or fifteen years – and it completely missed two small objects thought to have hit Earth in 2002 with impact energies approaching half a kiloton.
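To see why those 2002 objects slipped through, it's worth estimating how small a rock carries half a kiloton of impact energy. The density and speed below are my own illustrative assumptions (a stony asteroid at a typical Earth-impact velocity):

```python
import math

# What size of rock carries half a kiloton of impact energy?
# Assumes a stony asteroid (3000 kg/m³) arriving at 17 km/s.
E = 0.5 * 4.184e12     # J  (1 kt TNT = 4.184e12 J)
v = 17e3               # m/s, typical Earth-impact speed
rho = 3000             # kg/m³
m = 2 * E / v**2                                # from E = m v² / 2
d = 2 * (3 * m / (4 * math.pi * rho)) ** (1/3)  # diameter of a sphere
print(f"mass ≈ {m/1e3:.0f} tonnes, diameter ≈ {d:.1f} m")
```

A boulder only a couple of metres across – nearly two orders of magnitude below Spaceguard's 140 metre detection goal, so it's no surprise nobody saw them coming.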

Uh, anyone think we could be doing more in this space? 

Medium term action (0 – 10 years): Evacuate the area 

The 2010 National Academy of Sciences (NAS) report uses the strange term ‘civil defence’, but really it just means run for your life (i.e. evacuate the anticipated impact site). City destroyers in the 140 meter plus range may only hit Earth every 30,000 years or so, but it doesn’t hurt to be ready.

Mass extinction objects in the ten kilometer range may only come every 65 million years or so. If it’s one of these… bummer. 

Long-term action (10 years plus): Call Roger Ramjet   

If we do have around 10 years notice, there’s maybe enough time to launch some of the nifty technology solutions we have at least developed on paper. Gravity tugs and mirror bees and various other deflection devices are recommended to deflect objects threatening to pass through a gravitational keyhole and shift onto a collision course next time around. 

If the object is already on collision course, no-one’s ruling out ‘instantaneous force’ (IF) options, which are either crashing something into it (‘kinetic impact’) or just nuking it – although the NAS report notes a 500% uncertainty about the possible trajectory change resulting from an IF. Ideally, a ‘full deflection campaign’ involves an IF primary deflection followed by subsequent shepherding of one or more fragments onto a safer trajectory via your preferred deflection device.

And look, if it does all go bad, at least the next order of intelligent Earthlings might dig up all these fridge magnets with mysterious symbols printed on them and be able to figure out where we went wrong. My money is on the birds.

Recommended reading: 

The Association of Space Explorers’ International Panel (chaired by Russell ‘Rusty’ Schweickart) report: Asteroid Threats: A Call For Global Response.

National Research Council report: Defending Planet Earth: Near-Earth Object Surveys and Hazard Mitigation Strategies. Final Report.

Astronomy Without A Telescope – How To Impress An Alien (Or Not)

Aerial view of the 300 meter diameter Arecibo radio telescope dish


It’s about fifty years since Frank Drake sent out our first chat request to the wider universe. I say ‘about’ as I think the official date is 11 April 1960 – but I notice a lot of fifty year anniversary blogs and interviews are already being published, so what the heck, I’m not waiting either.

While no-one is really concerned that we haven’t had an answer back yet, it is a little dispiriting to have scanned the skies for someone else’s chat request all this time and found nothing.

In a recent New Scientist interview (actually January 2010 – they were really getting in early), Drake refers to his equation delivering an answer in the order of one in 10 million stars having an advanced civilization – and he uses that statistic to indicate it’s too early to think we have done a statistically adequate scan yet.

Nonetheless, the chances of there being advanced civilizations near enough to enable a future United Federation of Planets already looks doubtful.

Drake’s initial communication efforts in Project Ozma were small scale, but his clever and carefully constructed Arecibo message out to Messier 13 (a globular cluster of approximately 300,000 stars) in 1974 aroused some criticism that telling the aliens where we are might result in an invasion.

This is a little implausible, since Messier 13 is 25,000 light years away. By the time the invasion fleet arrives we will either be long gone or have spent the intervening period developing the technology to blast them out of the sky if they don’t turn back immediately.

Actually, that’s probably an important consideration if we ever decide to invade someone. We will need to take a couple of universities along to keep our technology advancing ahead of theirs. However, if we are travelling near the speed of light, the time differential means that they will get ahead anyway. Hmm…

The Arecibo message composed of 1679 bits, being the product of two prime numbers 73 and 23 (i.e. the number of rows and columns). Impressive, huh?
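The cleverness of 1679 is that it factors into two primes in exactly one way, so any recipient who tries to lay the bits out in a rectangle is forced to the 73 × 23 grid. A few lines of code confirm it:

```python
# The Arecibo message relies on 1679 having a unique factorization
# into two primes, forcing recipients to a 73 x 23 bit grid.
def prime_factors(n):
    """Return the prime factors of n, smallest first."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(1679))   # [23, 73]
```

Any alien mathematician running the equivalent computation arrives at the same two dimensions – which is the whole point.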

Anyway, here in the 21st century, I want to suggest that more attention should be given to us just not looking stupid. There’s already all the bad TV out there. We can fairly claim that all that was never meant for alien consumption, but recently we advanced humans have quite deliberately transmitted a Beatles song to Polaris and sent a bunch of text messages to Gliese 581. I mean, huh?

Polaris, being a Cepheid variable – and in any case a short-lived and already dying supergiant – was probably never stable enough to support planets, so we probably got away with that one. However, there’s no getting around us sending text messages to Gliese 581c in 2008 (from Ukraine) and subsequently following that up with another set blasted at 581d in 2009 (from Australia, sorry…).

This was because when we recalculated, it was apparent that the exoplanet 581d was more likely to be in the habitable zone of its star than 581c. Hopefully those 20 light year distant aliens will appreciate that the inconsequential shift in the main focus of those two transmissions is an indication of our extreme cleverness.

See, it’s a bit like reading Shakespeare to a dolphin. With no comprehension of the language, you will just look like someone who is content to sit for hours making funny noises while dangling your feet in a pool. But with a bit of comprehension, the dolphin can be reasonably expected to reply: ‘Hey Brainiac, I’m a dolphin – what’s forsooth mean?’

There are aliens among us who already think we’re a bit daft. How about we first check in with Frank Drake next time we feel like shouting out the window?

Astronomy Without A Telescope – Home Made Quark-Gluon Soup

The most powerful operational heavy-ion collider in the world, the Relativistic Heavy Ion Collider (RHIC), recently recorded the highest temperature ever created in an Earth-based laboratory: around 4 trillion Kelvin. Achieved by colliding gold ions at close to the speed of light, this resulted in the fleeting existence of quark-gluon soup – a state of matter last seen about a trillionth (10^-12) of a second after the Big Bang.
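To see why 4 trillion Kelvin melts protons, it helps to convert the temperature into particle-physics units via the Boltzmann constant. The ~160–170 MeV comparison figure below is an approximate literature value for the quark-gluon transition, not something from the RHIC announcement:

```python
# Rough conversion of RHIC's 4 trillion Kelvin into particle-physics units,
# using the Boltzmann constant k_B ~ 8.617e-5 eV/K.

K_B_EV_PER_K = 8.617e-5   # Boltzmann constant in eV/K

temp_k = 4e12             # RHIC fireball temperature, in Kelvin
energy_mev = temp_k * K_B_EV_PER_K / 1e6  # convert eV to MeV

print(round(energy_mev))  # ~345 MeV per particle, comfortably above the
                          # ~160-170 MeV scale where hadrons are thought to melt
```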

And sure, the Large Hadron Collider (LHC) may one day soon be the most powerful heavy-ion collider in the world (although it will spend most of its time investigating proton to proton collisions). And sure, maybe it’s going to generate a spectacular 574 TeV when it collides its first lead ions. But you have to win the game before you get the trophy.

To give credit where it’s due, the LHC is already the most powerful particle collider in the world – having achieved proton collision energies of 2.36 TeV in late 2009. It should eventually achieve proton collision energies of 14 TeV, but that will come well after its scheduled maintenance shutdown in 2012, ahead of reaching its full design capabilities from 2013. It has already circulated a beam of lead ions – but we are yet to see an LHC heavy-ion collision take place.
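The 574 TeV figure for lead ions sounds enormous next to 14 TeV for protons, but it’s a total for the whole nucleus. A quick sanity check, assuming 574 TeV is the full centre-of-mass energy for a lead-lead collision and that lead-208 carries 208 nucleons:

```python
# The headline 574 TeV for LHC lead-lead collisions is the total energy;
# spread across lead's nucleons it is far more modest per colliding pair.

PB_NUCLEONS = 208   # protons + neutrons in a lead-208 nucleus
total_tev = 574     # design Pb-Pb collision energy, in TeV

per_nucleon_pair = total_tev / PB_NUCLEONS
print(round(per_nucleon_pair, 2))  # ~2.76 TeV per nucleon pair
```

So per nucleon pair, the heavy-ion programme actually runs at a lower energy than the 14 TeV proton collisions – it’s the sheer number of participants that makes the fireball.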

So, for the moment it’s still RHIC putting out all the fun stuff. In early March 2010, it produced the largest ever negatively charged nucleus – which must be antimatter, since matter nuclei are built only from protons and neutrons and so can only ever carry a positive or neutral charge.

This antimatter nucleus contained an anti-strange quark – a particle crying out for a new name… mundane quark? conventional quark? And since the only matter nuclei containing strange quarks are hypernuclei, RHIC had in fact created an antihypernucleus. Wonderful.

Then there’s the whole quark-gluon soup story. Early experiments at RHIC revealed that this superhot plasma behaves like a liquid with a very low viscosity – and may be what the universe was made of in its very early moments. There was some expectation that melted protons and neutrons would be so hot that surely you would get a gas – but, like the early universe, with everything condensed into a tiny volume, you get a super-heated liquid (i.e. soup).

An aerial view of the Relativistic Heavy Ion Collider (RHIC) in Upton, NY. The Alternating Gradient Synchrotron (AGS) built in the 1960s now works as a pre-accelerating injector for the larger RHIC.

The LHC hopes to deliver the Higgs, maybe a dark matter particle, certainly anti-matter by the nano-spoonful – and perhaps even micro black holes. And after that, there’s talk of building the Very Large Hadron Collider, which promises to be bigger, more powerful and more expensive.

But if that project doesn’t fly, we can still ramp up the existing colliders. Ramping up a particle collider is largely a matter of luminosity – essentially, the collision rate the machine can deliver. The desired outcome is a more concentrated and tightly focused particle beam, with more particles crammed into the beam’s cross-section as it circulates the accelerator, so that more collisions happen every second. Both RHIC and the LHC have plans for upgrades to increase their respective luminosities by up to a factor of 10. If successful, we can look forward to RHIC II and the Super Large Hadron Collider coming online sometime after 2020. Fun.
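The luminosity scaling can be sketched with the standard textbook formula for head-on colliding Gaussian beams, L = N1·N2·f·nb / (4π·σx·σy). All the machine numbers below are illustrative, LHC-flavoured guesses rather than official specifications:

```python
# Sketch of collider luminosity for head-on Gaussian beams:
#   L = N1 * N2 * f * n_b / (4 * pi * sigma_x * sigma_y)
# and event rate = L * cross_section. Numbers are illustrative, not specs.

import math

def luminosity(n1, n2, rev_freq, n_bunches, sigma_x, sigma_y):
    """Instantaneous luminosity (cm^-2 s^-1) for round Gaussian bunches."""
    return n1 * n2 * rev_freq * n_bunches / (4 * math.pi * sigma_x * sigma_y)

base = luminosity(n1=1e11, n2=1e11,          # particles per bunch
                  rev_freq=11245,            # LHC-like revolution frequency, Hz
                  n_bunches=2808,            # bunches per beam
                  sigma_x=17e-4, sigma_y=17e-4)  # ~17 micron beam size, in cm

# Cramming 10x more particles into each bunch boosts luminosity 100x,
# because L scales with the product N1 * N2:
upgraded = luminosity(n1=1e12, n2=1e12, rev_freq=11245,
                      n_bunches=2808, sigma_x=17e-4, sigma_y=17e-4)
print(round(upgraded / base))  # 100
```

Which is why the upgrade plans talk about bunch intensity and beam focusing: both knobs feed straight into that one formula.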