Why There’s More Matter Than Antimatter in the Universe


In the first few moments of the Universe, enormous amounts of both matter and antimatter were created, and moments later they combined and annihilated, generating the energy that drove the expansion of the Universe. But for some reason, there was an infinitesimally small excess of matter over antimatter. Everything we see today is that tiny fraction of matter that remained.

But why? Why was there more matter than antimatter right after the Big Bang? Researchers from the University of Melbourne think they might have an insight.

Just to give you an idea of the scale of the mystery facing researchers, here’s Associate Professor Martin Sevior of the University of Melbourne’s School of Physics:

“Our universe is made up almost completely of matter. While we’re entirely used to this idea, this does not agree with our ideas of how mass and energy interact. According to these theories there should not be enough mass to enable the formation of stars and hence life.”

“In our standard model of particle physics, matter and antimatter are almost identical. Accordingly, as they mix in the early universe they annihilate one another, leaving very little to form stars and galaxies. The model does not come close to explaining the difference between matter and antimatter we see in nature. The imbalance is a trillion times bigger than the model predicts.”

If the model predicts that matter and antimatter should have completely annihilated one another, why is there something, and not nothing?

The researchers have been using the KEK particle accelerator in Japan to create special particles called B-mesons. And it’s these particles which might provide the answer.

Mesons are particles made up of one quark and one antiquark. They’re bound together by the strong nuclear force and orbit one another, like the Earth and the Moon. Because of quantum mechanics, the quark and antiquark can only orbit each other in very specific ways, depending on the masses of the particles.

A B-meson is a particularly heavy particle, with more than 5 times the mass of a proton, due almost entirely to the mass of the bottom (b) quark it contains. And it’s these B-mesons which require the most powerful particle accelerators to generate.
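
As a quick sanity check on that mass figure, here is a minimal sketch using standard particle-data values (B-meson mass of about 5.28 GeV/c², proton mass of about 0.938 GeV/c²); these are textbook numbers, not anything from the KEK analysis itself.

```python
# Rough mass comparison using standard particle-data values (GeV/c^2).
M_B_MESON = 5.279   # B meson mass
M_PROTON = 0.938    # proton mass

ratio = M_B_MESON / M_PROTON
print(f"A B-meson is roughly {ratio:.1f} times as massive as a proton")
# -> roughly 5.6, consistent with "more than 5 times the mass of a proton"
```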

In the KEK accelerator, the researchers were able to create both regular matter B-mesons and anti-B-mesons, and watch how they decayed.

“We looked at how the B-mesons decay as opposed to how the anti-B-mesons decay. What we find is that there are small differences in these processes. While most of our measurements confirm predictions of the Standard Model of Particle Physics, this new result appears to be in disagreement.”

In the first few moments of the Universe, the anti-B-mesons might have decayed differently than their regular matter counterparts. By the time all the annihilations were complete, there was still enough matter left over to give us all the stars, planets and galaxies we see today.

Original Source: University of Melbourne News Release

Could Cosmic Rays Influence Global Warming?


The idea goes like this: Cosmic rays, originating from outside the Solar System, hit the Earth’s atmosphere. In doing so, these highly energetic particles create microscopic aerosols. The aerosols collect in the atmosphere and act as nuclei for water droplet formation, so large-scale cloud cover can result from this microscopic interaction. Cloud cover reflects light from the Sun, cooling the Earth. This “global dimming” effect could hold some answers in the global warming debate, as it influences the amount of radiation entering the atmosphere. The flux of cosmic rays is, in turn, highly dependent on the Sun’s magnetic field, which varies over the 11-year solar cycle.

If this theory is so, some questions come to mind: Is the Sun’s changing magnetic field responsible for the amount of global cloud cover? To what degree does this influence global temperatures? Where does that leave man-made global warming? Two research groups have published their work and, perhaps unsurprisingly, have two different opinions…


I always brace myself when I mention “global warming”. I have never come across such an emotive and controversial subject. I get comments from people who support the idea that the human race and our insatiable desire for energy is the root cause of the global increase in temperature. I get anger (big, scary anger!) from people who wholeheartedly believe that global warming is a money-making “swindle” and that we are being conned. You just have to look at the discussions that ensue whenever climate-related stories are published.

But whatever our opinion, huge quantities of research spending are going into understanding all the factors involved in this worrying upward trend in average temperature.

Cue cosmic rays.

Researchers from the National Polytechnic University in Ukraine take the view that mankind has little or no effect on global warming and that it is driven almost entirely by the flux of cosmic radiation (creating clouds). Basically, Vitaliy Rusov and colleagues analysed the situation and deduced that the carbon dioxide content of the atmosphere has very little effect on global warming. Their observations suggest that global temperature increases are periodic when viewed against the history of global and solar magnetic field fluctuations, and that the main culprit could be cosmic ray interactions with the atmosphere. Looking back over 750,000 years of palaeotemperature data (historic records of climatic temperature stored in ice cores sampled from the Northern Atlantic ice sheets), Rusov’s theory and data analysis draw the same conclusion: that global warming is periodic and intrinsically linked with the solar cycle and the Earth’s magnetic field.

But how does the Sun affect the cosmic ray flux? As the Sun approaches “solar maximum”, its magnetic field is at its most stressed and active state. Flares and coronal mass ejections become commonplace, as do sunspots. Sunspots are a magnetic manifestation, showing areas on the solar surface where the powerful magnetic field is upwelling and interacting. It is during this period of the 11-year solar cycle that the reach of the solar magnetic field is at its most powerful. So powerful, in fact, that galactic cosmic rays (high-energy particles from supernovae, etc.) en route to the Earth are swept from their paths by the magnetic field lines carried in the solar wind.

It is on this premise that the Ukrainian research is based. The cosmic ray flux incident on the Earth’s atmosphere is anti-correlated with sunspot number – fewer sunspots mean an increase in cosmic ray flux. And what happens when there is an increase in cosmic ray flux? There is an increase in global cloud cover – the Earth’s natural heat shield. At solar minimum (when sunspots are rare) we can expect the albedo (reflectivity) of the Earth to increase, thus reducing the effect of global warming.
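
To make the claimed anti-correlation concrete, here is a minimal sketch of how one might test it, using placeholder arrays in place of real sunspot and neutron-monitor records (the numbers below are purely illustrative, not the Ukrainian group’s data):

```python
import numpy as np

# Placeholder yearly series standing in for real records (illustrative only):
# sunspot counts over a solar cycle and cosmic ray counts from a neutron monitor.
sunspot_number    = np.array([10, 25, 70, 120, 150, 130, 90, 50, 20, 12, 30])
cosmic_ray_counts = np.array([100, 98, 91, 84, 80, 82, 88, 94, 99, 100, 97])

# Pearson correlation coefficient: a value near -1 reflects the expected
# anti-correlation (more sunspots -> stronger solar field -> fewer cosmic rays).
r = np.corrcoef(sunspot_number, cosmic_ray_counts)[0, 1]
print(f"correlation coefficient: {r:+.2f}")
```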

This is a nice bit of research, with a very elegant mechanism that could physically control the amount of solar radiation heating the atmosphere. However, there is a lot of evidence out there that suggests carbon dioxide emissions are to blame for the current upward trend of average temperature.

Prof. Terry Sloan and Prof. Sir Arnold Wolfendale, from the University of Lancaster and University of Durham, UK, step into the debate with the publication “Testing the proposed causal link between cosmic rays and cloud cover”. Using data from the International Satellite Cloud Climatology Project (ISCCP), the UK-based researchers set out to investigate whether the solar cycle has any effect on the amount of global cloud cover. They find that cloud cover varies with latitude: in some locations cloud cover and cosmic ray flux correlate, in others they do not. The big conclusion from this comprehensive study is that if cosmic rays influence cloud cover in some way, the mechanism can account for at most 23 percent of the change in cloud cover. There is no evidence to suggest that changes in the cosmic ray flux have any effect on global temperature changes.

The cosmic-ray, cloud-forming mechanism itself is even in doubt. So far, there has been little observational evidence of this phenomenon. And even looking at historical data, there has never been a rise in global temperature as rapid as the one we are currently observing.

So could we be clutching at straws here? Are we trying to find answers to the global warming problem when the answer is already right in front of us? Even if global warming can be amplified by natural global processes, mankind sure ain’t helping. There is a known link between carbon dioxide emission and global temperature rise whether we like it or not.

Perhaps taking action on carbon emissions is a step in the right direction while further research is carried out on the natural processes that can influence climate change. For now, cosmic rays do not seem to have a significant part to play.

Original source: arXiv blog

A Step Toward Quantum Communications with Space


Sending quantum information in the form of qubits (quantum bits) has been carried out successfully for years. However, firing these indecipherable packets of quantum data (or quantum states) via photons can degrade the message as the photons travel through the dense atmosphere. The transmission distance is also severely limited by other factors, such as the curvature of the Earth. Now, for the first time, Italian scientists have carried out a successful mock single-photon exchange between Earth and a satellite orbiting at an altitude of 1485 km. Although transmission may be restricted here on Earth, the use of satellites will greatly increase the range of such a system, possibly beginning an era of long-distance quantum communication with space.

The key advantage of quantum communications is that it is perfectly secure from being hacked. In a world of security-conscious information transmission, the possibility of sending information hidden in the quantum states of photons would be highly desirable. A major drawback of sending encoded photons here on Earth is the degradation of the data as the photons are scattered by atmospheric particles. The current record stands at 144 km for an encoded photon travelling along a line of sight without losing its quantum code. That distance can be increased by firing encoded photons along optical fibres.
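
To see why distance is such a problem, here is a back-of-the-envelope sketch of photon survival under simple exponential attenuation; the attenuation figures are typical textbook values assumed for illustration, not numbers from the Padova experiment.

```python
def transmission(length_km: float, loss_db_per_km: float) -> float:
    """Fraction of photons surviving a path with the given attenuation."""
    return 10 ** (-(loss_db_per_km * length_km) / 10)

# Assumed, illustrative attenuation figures (not from the Padova experiment):
FIBRE_LOSS = 0.2   # dB/km, typical telecom fibre at 1550 nm
AIR_LOSS   = 0.5   # dB/km, rough clear-air figure near sea level

print(f"144 km of fibre:          {transmission(144, FIBRE_LOSS):.4%} of photons survive")
print(f"8 km of dense atmosphere: {transmission(8, AIR_LOSS):.1%} of photons survive")
# Going straight up through ~8 km of dense air loses far fewer photons than
# a long horizontal path, which is the appeal of relaying via satellites.
```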

But what if you used satellites as nodes to communicate the encoded photons through space? By shooting the photons straight up, they need only travel through 8 km of dense atmosphere. This is exactly what Paolo Villoresi and his team at the Department of Information Engineering, University of Padova, with collaborators at other institutes in Italy and Austria, hoped to achieve. In fact, they have already tested the “single-photon exchange” between a ground station and the Japanese Experimental Geodetic Satellite Ajisai with some good results.

“Weak laser pulses, emitted by the ground-based station, are directed towards a satellite equipped with cube-corner retroreflectors. These reflect a small portion of the pulse, with an average of less-than-one photon per pulse directed to our receiver, as required for the faint-pulse quantum communication.” – From “Experimental verification of the feasibility of a quantum channel between Space and Earth”, Villoresi et al.

The communication between satellite and observatory
They achieved this feat by using existing Earth-based laser ranging technology (at the Matera Laser Ranging Observatory, Italy) to direct a weak source of photons at Ajisai, a spherical mirrored satellite (pictured top). As the powerful laser ranging beam pinpointed the satellite, it was switched off to allow the weaker encoded laser to fire pulses of data. The two lasers could easily be switched to make sure Ajisai was receiving the photons. Only a tiny fraction of the pulses were received back at the observatory and, statistically speaking, the requirement of less than one photon return per laser pulse for quantum communications was achieved.
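
The “less than one photon per pulse” requirement is usually understood through Poisson statistics: if the mean photon number per returned pulse is small, hardly any pulses contain two or more photons (which could leak information to an eavesdropper). A minimal sketch, with an assumed mean photon number chosen purely for illustration:

```python
import math

def prob_at_least(k: int, mean: float) -> float:
    """P(N >= k) for a Poisson-distributed photon number with the given mean."""
    below = sum(math.exp(-mean) * mean**n / math.factorial(n) for n in range(k))
    return 1.0 - below

mu = 0.1  # assumed mean photon number per returned pulse, illustrative only
print(f"P(pulse carries >= 1 photon):  {prob_at_least(1, mu):.3f}")
print(f"P(pulse carries >= 2 photons): {prob_at_least(2, mu):.5f}")
# With a mean well below one photon per pulse, multi-photon pulses are rare,
# which is what makes faint-pulse quantum communication hard to eavesdrop on.
```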

This is the first step of many toward quantum communications, and it by no means demonstrates the quantum entanglement between two photons (this situation is described in great detail by one of the collaborators in a separate publication) – now that would be the ultimate form of quantum data transmission!

Source: arXiv, arXiv blog

Do Advanced Civilizations Communicate with Neutrinos?


It’s one of the biggest questions facing humanity: are we alone in the Universe? Either way, the answer is significant. And so, scientists are searching for intelligence out there. Huge arrays of radio telescopes, like the Allen Telescope Array, scan the skies for radio broadcasts. Researchers have also proposed that aliens might be using lasers to communicate with us. Now a Russian researcher is proposing another way that aliens might be communicating – with neutrinos.

To borrow a quote from the Hitchhiker’s Guide to the Galaxy, “Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is.” When you’re attempting to communicate across the vast distances of space, you need huge amounts of energy. Just look at a star: even though it’s generating an incomprehensible amount of energy every second, its brightness drops dramatically with distance.

Instead of broadcasting in all directions, the other strategy is to focus your communications towards a specific location. A targeted beam of radio waves or laser light towards another star still requires an enormous amount of energy, but it’s less.

To save energy, alien civilizations might not be using radio or optical light at all, they might be communicating in a completely different way, with neutrinos.

Researcher Z. K. Silagadze at the Budker Institute of Nuclear Physics and Novosibirsk State University recently posted this idea to the arXiv preprint server. His article is called “SETI and Muon Collider”.

It might sound like science fiction, but scientists are starting to understand how to generate beams of neutrinos – by creating beams of muons. Beams of these unstable particles can be generated in large particle accelerators. The muon beam quickly decays into a focused beam of neutrinos that can travel for light-years and still remain remarkably well collimated. A beam fired at the relatively nearby star Tau Ceti, 12 light-years away, would open up to about 600 astronomical units across – enough to bathe the whole system in neutrinos that could be tracked back to a specific source star.
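
The quoted 600 AU footprint can be understood with a simple relativistic beaming argument: neutrinos from muon decay emerge within an angle of roughly 1/gamma of the beam direction, so the spread at Tau Ceti shrinks as the muon energy grows. A back-of-the-envelope sketch follows; the 100 GeV beam energy is an assumption for illustration, not a figure from Silagadze’s paper.

```python
# Rough footprint of a muon-decay neutrino beam at Tau Ceti.
M_MU_GEV    = 0.1057            # muon rest-mass energy (GeV)
E_MU_GEV    = 100.0             # assumed muon beam energy, illustrative only
AU_PER_LY   = 63241.0           # astronomical units per light-year
DISTANCE_AU = 12 * AU_PER_LY    # Tau Ceti is ~12 light-years away

gamma = E_MU_GEV / M_MU_GEV       # Lorentz factor of the muons
spread_au = DISTANCE_AU / gamma   # decay neutrinos stay within ~1/gamma of the axis

print(f"gamma ~ {gamma:.0f}, beam spread at Tau Ceti ~ {spread_au:.0f} AU")
# For a beam energy of order 100 GeV this comes out in the hundreds of AU,
# the same ballpark as the ~600 AU figure quoted above.
```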

Finding neutrinos here on Earth is difficult. An incredible number of neutrinos stream towards us from the Sun. In fact, billions of neutrinos pass through your body every second, and you never feel them because they almost never interact. It takes a huge vat of water, protected underground from other radiation, and a suite of sensitive detectors. And even then, they only turn up a few thousand neutrinos a year.

In fact, a neutrino can pass through light-years of pure lead and not even notice.

But there are some advantages. Neutrino detectors are omnidirectional – they don’t have to be targeted in a specific direction to “tune in” a signal coming from a star. If the stream of neutrinos is passing through the Earth, we should be able to detect it, and then track back the source after the fact.

Neutrino detectors are also sensitive to many different energy levels. They don’t have to scan specific frequencies, they can detect high energy neutrinos as easily as low-energy ones.

According to Silagadze, the IceCube neutrino observatory, currently under construction in Antarctica, should have the sensitivity to spot neutrinos generated on purpose by alien civilizations – whether they’re targeting us specifically, or we’re just overhearing their conversations.

It has been suggested that advanced civilizations might deliberately choose neutrinos for communications because it shuts very young, immature civilizations out of the galactic conversation.

But give us a few years, and we’ll be listening.

Original Source: Arxiv

Final Detector in Place at the Large Hadron Collider


One of the most complicated construction projects ever attempted reached a major milestone today: the final large detector element for the ATLAS instrument was lowered into the Large Hadron Collider. And this baby’s big, weighing in at 100 tonnes. When the collider finally comes online, this instrument will measure the cascade of particles generated in proton-proton collisions.

The ATLAS detector itself is enormous, weighing 7,000 tonnes and measuring 46 metres long, 25 metres high and 25 metres wide. It has 100 million sensors that will track all the particles produced when protons are smashed together at tremendous energies.

And so today, the final element for ATLAS was plugged into its permanent home. It’s known as a “small wheel”, and there are two of them in the detector. Compared to the full ATLAS instrument, it only weighs 100 tonnes, and measures a mere 9.3 metres across.

Since the whole detector is located deep underground, engineers had to lower each piece down a 100 metre shaft. And they’ve been installing pieces this way since 2003. In the case of the small wheel, it was even harder to get it down.

“One of the major challenges is lowering the small wheel in a slow motion zigzag down the shaft,” explained Ariella Cattai, leader of the small wheel team, “and performing precision alignment of the detector within a millimetre of the other detectors already in the cavern.”

With all of ATLAS’ parts in place, it’s time to enter the commissioning phase. Researchers will test all of the parts together in preparation for the first runs this summer.

By this time next year, physicists might have many more answers about the nature of gravity, dark matter, and nature’s preference for matter over antimatter. And I’m sure they’ll have even more new questions. But that’s how science works.

Original Source: CERN News Release

Pluto’s Moons, Nix and Hydra, may have been Adopted

The discovery images of Nix (and Hydra) obtained by the Hubble Space Telescope. Credit: NASA, ESA, H. Weaver (JHU/APL), A. Stern (SwRI)

 

How many moons does Pluto have? The mini-moons of Pluto, Nix and Hydra, were discovered in 2005 (but named in 2006) during an observation campaign by the Hubble Space Telescope. The discovery of these mini-moons increased the number of known natural satellites orbiting Pluto to three (including the larger moon Charon). But where did these satellites come from? The currently accepted theory for the formation of the large moon, Charon, is much like the theory for the creation of Earth’s Moon. It is thought that a large impact between two Large Kuiper Belt Objects chipped Charon away from a proto-Pluto, putting the chunk of Pluto mass into orbit. Over the years, tidal forces slowed the pair and Charon settled into its present-day orbit. Recent theory suggests that Nix and Hydra are a by-product of this collision, merely shattered fragments from the huge impact. But there are problems with this idea. Could Nix and Hydra have come from somewhere other than the Pluto-Charon impact?

The orbits of Pluto’s moons, Charon, Nix and Hydra (credit: NASA)
The small moons that orbit the Large Kuiper Belt Object (formerly classified as a planet) can be found about 48,700 kilometers and 64,800 kilometers from Pluto. The closer moon is called Nix and the more distant one Hydra. Nix has an orbital resonance of 4:1 with Charon’s orbit and Hydra has a resonance of 6:1 (i.e. Nix orbits Pluto once for every four of Charon’s orbits; Hydra orbits Pluto once for every six of Charon’s orbits).
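
Those 4:1 and 6:1 figures follow directly from Kepler’s third law (orbital period scales as the 3/2 power of orbital distance). A minimal sketch, assuming Charon orbits about 19,600 km from Pluto (a standard figure not quoted in the text) and treating the Nix and Hydra distances above as orbital radii:

```python
# Check the near-resonances of Nix and Hydra with Charon using Kepler's third
# law: orbital period scales as orbital distance to the power 3/2.
A_CHARON_KM = 19_600   # assumed orbital radius of Charon (standard value)
A_NIX_KM    = 48_700   # quoted above
A_HYDRA_KM  = 64_800   # quoted above

def period_ratio(a_moon_km: float, a_ref_km: float) -> float:
    """Orbital period of a moon relative to a reference moon of the same primary."""
    return (a_moon_km / a_ref_km) ** 1.5

print(f"Nix : Charon period ratio   ~ {period_ratio(A_NIX_KM, A_CHARON_KM):.1f} : 1")
print(f"Hydra : Charon period ratio ~ {period_ratio(A_HYDRA_KM, A_CHARON_KM):.1f} : 1")
# -> roughly 3.9:1 and 6.0:1, close to the 4:1 and 6:1 resonances quoted above.
```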

The reasons behind these mini-moon orbits are only just beginning to be understood, but it is known that their resonances with Charon’s orbit are rooted way back in the Pluto system’s evolution. If we assume Hydra and Nix were formed from a massive Kuiper Belt Object collision, the easiest explanation is that they are whole fragments from the impact caught in the gravity of the Pluto-Charon system. However, given the highly eccentric orbits that would have resulted from such a collision, it is not possible that the two little moons could have evolved into the near-circular orbits, in near-corotational resonance with Charon, that they have today.

So, could it be possible that the moons may have formed from the dust and debris resulting from the initial collision? If there was enough material produced, and if the material collided frequently, then perhaps Nix and Hydra were born from a cold disk of debris (rather than being whole pieces of rock), eventually coalescing and forming sizeable rocky moons. As there may have been a disk of debris, collisions with the orbiting Nix and Hydra would have also reduced any eccentricity in their orbits.

But there is a big problem with this theory. From impact simulations, the post-impact disk of debris surrounding Pluto would have been very compact. The disk could not have reached as far as the present-day orbits of the moons.

One more theory suggests that perhaps the moons were created in a post-impact disk, but very close to Pluto, and then through gravitational interactions with Charon, the orbits of Nix and Hydra were pulled outward, allowing them to orbit far from the Pluto-Charon post-impact disk. According to recent computer simulations, this doesn’t seem to be possible either.

To find an answer, work by Yoram Lithwick and Yanqin Wu (University of Toronto) suggests we must look beyond the Pluto-Charon system for a source of material for Nix and Hydra. Their simulations show that the above theories, in which the small moons form from material ejected by a large collision between two Large Kuiper Belt Objects (the collision that created Pluto and Charon), are extremely problematic. They do not explain how the highly eccentric orbits Nix and Hydra would have inherited from such a collision could evolve into the near-circular ones they have today.

Lithwick and Wu go on to say that the circular, corotational resonant orbits of the two moons could be created from a Plutocentric disk of small bits of rock scooped up during Pluto’s orbit around the Sun. Therefore Nix and Hydra may have been formed from the rocky debris left over from the development of the Solar System, and not from the collision event that created Charon. This may hold true for the countless other Kuiper Belt Objects in orbit in the far reaches of the Solar System: no impact is necessary to create the tiny moons now thought to be their satellites.

It is hoped that the New Horizons mission (launched January 19th, 2006) to the far reaches of the Solar System will answer some of the questions that remain about the depths of our mysterious Kuiper Belt. Hopefully we will also find out whether Nix and Hydra are children of Pluto and Charon… or whether they were adopted.

Source: arXiv

Synthetic Black Hole Event Horizon Created in UK Laboratory

Researchers at St. Andrews University, Scotland, claim to have found a way to simulate the event horizon of a black hole – not through a new cosmic observation technique, and not with a high-powered supercomputer… but in the laboratory. Using lasers, a length of optical fiber and some bizarre quantum mechanics, a “singularity” may be created that alters a laser pulse’s wavelength, synthesizing the effects of an event horizon. If this experiment can produce an event horizon, the theoretical phenomenon of Hawking Radiation may be tested, perhaps giving Stephen Hawking the best chance yet of winning the Nobel Prize.

So how do you create a black hole? In the cosmos, black holes are created by the collapse of massive stars. The mass of the star collapses down to a single point (after running out of fuel and undergoing a supernova) due to the massive gravitational forces acting on the body. Should the star exceed a certain mass “limit” (i.e. the Chandrasekhar limit, the maximum mass above which a star can no longer support its structure against gravity), it will collapse into a discrete point (a singularity). Space-time will be so warped that all local energy (matter and radiation) will fall into the singularity. The distance from the singularity at which even light cannot escape the gravitational pull is known as the event horizon. High-energy particle collisions by cosmic rays impacting the upper atmosphere might produce micro-black holes (MBHs). The Large Hadron Collider (at CERN, near Geneva, Switzerland) may also be capable of producing collisions energetic enough to create MBHs. Interestingly, if the LHC can produce MBHs, Stephen Hawking’s theory of “Hawking Radiation” may be proven should the MBHs created evaporate almost instantly.
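
For reference, the size of that event horizon (the Schwarzschild radius) depends only on the collapsed mass; this is the standard textbook relation rather than anything specific to the St. Andrews work:

```latex
% Schwarzschild radius of a (non-rotating) black hole of mass M
r_s = \frac{2GM}{c^{2}}
\qquad\Rightarrow\qquad
r_s \approx 3\,\mathrm{km}\times\frac{M}{M_{\odot}}
```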

Hawking predicts that black holes emit radiation. This theory is paradoxical, as no radiation can escape the event horizon of a black hole. However, Hawking theorizes that due to a quirk in quantum dynamics, black holes can produce radiation.
The principle of Hawking Radiation (source: http://library.thinkquest.org)
Put very simply, the Universe allows particles to be created within a vacuum, “borrowing” energy from their surroundings. To conserve the energy balance, the particle and its anti-particle can only live for a short time, returning the borrowed energy very quickly by annihilating with each other. So long as they pop in and out of existence within a quantum time limit, they are considered to be “virtual particles”. Creation to annihilation has net zero energy.

However, the situation changes if this particle pair is generated at or near the event horizon of a black hole. If one of the virtual pair falls into the black hole and its partner is ejected away from the event horizon, they cannot annihilate. Both virtual particles become “real”, allowing the escaping particle to carry energy and mass away from the black hole (the trapped particle can be considered to have negative mass, thus reducing the mass of the black hole). This is how Hawking radiation predicts “evaporating” black holes, as mass is lost to this quantum quirk at the event horizon. Hawking predicts that black holes will gradually evaporate and disappear, and that this effect will be most prominent for small black holes and MBHs.
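
The “small black holes evaporate fastest” point follows directly from the standard formulas: the Hawking temperature is inversely proportional to the black hole’s mass, and the evaporation time grows as the cube of the mass. These are textbook expressions, quoted here only for context:

```latex
% Hawking temperature and evaporation time for a black hole of mass M
T_H = \frac{\hbar c^{3}}{8\pi G M k_B}
\qquad
t_{\mathrm{evap}} \approx \frac{5120\,\pi G^{2} M^{3}}{\hbar c^{4}}
```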

So… back to our St. Andrews laboratory…

Prof Ulf Leonhardt is hoping to create the conditions of a black hole event horizon by using laser pulses, possibly creating the first direct experiment to test Hawking radiation. Leonhardt is an expert in “quantum catastrophes”, the point at which wave physics breaks down, creating a singularity. In the recent “Cosmology Meets Condensed Matter” meeting in London, Leonhardt’s team announced their method to simulate one of the key components of the event horizon environment.

Light travels through materials at different velocities, depending on their wave properties. The St. Andrews group use two laser beams, one slow, one fast. First, a slow propagating pulse is fired down the optical fiber, followed by a faster pulse. The faster pulse should “catch up” with the slower pulse. However, as the slow pulse passes through the medium, it alters the optical properties of the fiber, causing the fast pulse to slow in its wake. This is what happens to light as it tries to escape from the event horizon – it is slowed down so much that it becomes “trapped”.
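
Here is a minimal numerical sketch of that idea, under simplifying assumptions of my own (a Gaussian pump pulse and a simple Kerr-type index change; none of the numbers come from the St. Andrews experiment): an artificial horizon forms wherever the probe light’s local speed, c/n, drops to the speed of the pump pulse, so the probe can no longer overtake it.

```python
import numpy as np

C = 3.0e8             # speed of light in vacuum (m/s)
N0 = 1.45             # background refractive index of the fibre (assumed)
DELTA_N = 1.0e-3      # peak Kerr-induced index change under the pump (assumed)
WIDTH = 1.0e-3        # pump pulse width (m, assumed)

# The pump travels a little slower than light in the unperturbed fibre.
v_pump = C / (N0 + 0.5 * DELTA_N)

# Refractive index profile in the frame co-moving with the pump (Gaussian bump).
x = np.linspace(-5 * WIDTH, 5 * WIDTH, 2001)
n_local = N0 + DELTA_N * np.exp(-(x / WIDTH) ** 2)

# Local speed of the probe; a horizon exists wherever it falls to the pump speed.
v_probe = C / n_local
trapped = v_probe <= v_pump
print("Horizon forms:", bool(trapped.any()))
if trapped.any():
    edges = x[trapped]
    print(f"Probe cannot overtake between ~{edges[0]*1e3:+.2f} mm and "
          f"~{edges[-1]*1e3:+.2f} mm of the pump centre")
```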

“We show by theoretical calculations that such a system is capable of probing the quantum effects of horizons, in particular Hawking radiation.” – From a forthcoming paper by the St. Andrews group.

The effect that two laser pulses have on each other to mimic the physics within an event horizon sounds strange, but this new study may help us understand whether MBHs are being generated in the LHC and may push Stephen Hawking a little closer toward a deserved Nobel Prize.
Source: Telegraph.co.uk

The “Astronomical Unit” May Need an Upgrade as the Sun Loses Mass


The Sun is constantly losing mass. Our closest star is shedding material through the solar wind, coronal mass ejections and by simply generating light. As the burning giant begins a new solar cycle, it continues to lose about 6 billion kilograms (that’s approximately 16 Empire State Buildings’ worth) of mass per second. This may seem like a lot, but when compared with the total mass of the Sun (nearly 2×10³⁰ kilograms), this rate of mass loss is minuscule. However small the loss, the mass of the Sun is not constant. So, when using the Astronomical Unit (AU), problems will begin to surface in astronomical calculations, as this “universal constant” is based on the mass of the Sun…
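
To put that loss rate in perspective, here is a quick back-of-the-envelope calculation using the figures quoted above:

```python
# Fractional solar mass loss per year, using the figures quoted in the article.
MASS_LOSS_KG_PER_S = 6.0e9     # kg/s shed via wind, CMEs and radiation
SOLAR_MASS_KG      = 2.0e30    # kg
SECONDS_PER_YEAR   = 3.156e7

fraction_per_year = MASS_LOSS_KG_PER_S * SECONDS_PER_YEAR / SOLAR_MASS_KG
print(f"Fraction of the Sun's mass lost per year: {fraction_per_year:.1e}")
# ~1e-13 of the Sun per year: negligible on human timescales, but it means the
# "constant" underpinning the AU is slowly drifting.
```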

The AU is commonly used to describe distances within the Solar System. For instance, one AU is approximately the mean distance from the Sun to the Earth’s orbit (defined as 149,597,870.691 kilometres). Mars has an average orbital distance of 1.5 AU, Mercury about 0.4 AU… But how is the distance of one AU defined? Most commonly thought of as the mean Sun-Earth distance, it is actually officially defined as: the radius of an unperturbed circular orbit in which a massless body would revolve about the Sun in 2π/k days (that’s one year). Therein lies the problem. The official definition is based on “k”, a constant tied to the estimated (and assumed constant) mass of the Sun. But the mass of the Sun ain’t constant.

As mass is lost via the solar wind and radiation (radiated energy carries mass away from the Sun due to the energy-mass relationship defined by Einstein’s E=mc²), the value of the Astronomical Unit will increase, and by its definition, the orbits of the planets should also increase. It has been calculated that Mercury will lag behind its current orbital position in 200 years’ time by 5.5 km if we continue to use today’s AU in future calculations. Although a tiny number – astrophysicists are unlikely to lose any sleep over the discrepancy – a universal constant should be just that, constant. There are now calls to correct for this gradual increase in the value of the AU by discarding it altogether.

“[The current definition is] fine for first-year science courses. But for scientific and engineering usage, it is essential to get it right.” – Peter Noerdlinger, astronomer at St Mary’s University, Canada.

Correcting classical “constants” in physics is essential when high accuracy is required to calculate quantities over massive distances or long periods of time; the AU (as it is currently defined) may therefore be demoted to a general description of distance rather than a standard scientific unit.

Source: New Scientist

Large Hadron Collider Could Create Wormholes: a Gateway for Time Travelers?


As we get closer to the grand opening of the Large Hadron Collider (LHC) near Geneva, Switzerland, it seems the predictions as to what we might get from the high-energy particle accelerator are becoming more complex and outlandish. Not only could the LHC generate enough energy to create particles that exist in other dimensions, it may also produce “unparticles”, a possible source for dark matter. Now, the energy may be so focused that even the fabric of space-time may be pulled apart to create a wormhole, not to a different place, but a different time. Also, if there are any time travellers out there, we are most likely to see them in a few weeks…

If you could travel back in time, where would you go? Actually, it’s a trick question: you couldn’t travel back in time unless there was a time “machine” already built in the past. The universe’s very first time traveller would therefore only be able to travel back to the moment the machine he or she was using was built. This is one restriction that puts paid to those romantic ideas of travelling back in time to see the dinosaurs; there were no time machines back then (that we know of), so there is nothing to travel back to. And until we create a time machine, we won’t be seeing any travellers any time soon.

However, Prof Irina Aref’eva and Dr Igor Volovich, mathematical physicists at the Steklov Mathematical Institute in Moscow, believe the energies generated by the subatomic collisions in the LHC may be powerful enough to rip space-time itself, spawning wormholes. A wormhole not only has the ability to take a shortcut between two positions in space, it can also take a shortcut between two positions in time. So the LHC could be the first ever “time machine”, providing future time travellers with a documented time and place where a wormhole “opened up” into our time-line. This year could therefore be “Year Zero” – the earliest point to which time travel would be limited.

Relativity doesn’t dispute this idea, but the likelihood of a person passing through time is slim-to-impossible when the dimensions of a possible wormhole will be at the sub-atomic level at best and it would only be open for a brief moment. Testing for the presence of a man-made wormhole would be difficult even if we knew what we were looking for (perhaps a small loss in energy during collision, as energy escapes through the wormhole?).

As if that didn’t discourage you from hoping to use wormholes for time travel, Dr Brian Cox of the University of Manchester says: “The energies of billions of cosmic rays that have been hitting the Earth’s atmosphere for five billion years far exceed those we will create at the LHC, so by this logic time travellers should be here already.” As far as we know, they’re not.

Source: Telegraph.co.uk

Large Hadron Collider May Help Us Glimpse Into another Dimension

High-energy collisions in the nearly-completed Large Hadron Collider (LHC) may be able to generate particles that are sensitive to dimensions beyond our four-dimensional space-time. These exotic particles, called Kaluza-Klein gravitons, would be highly sensitive to the geometry of the extra dimensions, giving scientists an idea of what lies beyond our universe. If these particles are detected, and if their characteristics can be measured, then perhaps the extra dimensions predicted by string theory may be proven to exist…

How can you measure the size of a room without actually measuring it? Forget measuring the room, you can’t even see it! The room is invisible; it is outside your observational ability. But what if you could bounce sound off the walls? Even better, what if the walls of the invisible room were made up of resonant particles, producing their own sound? If the sound from these resonant particles could then be analyzed, the shape of the invisible room would be known.

According to string theory, there are many “invisible rooms” that we, as observers, cannot experience. We are confined to our three dimensions of space and one dimension of time (although this may not always be the case), otherwise known as four-dimensional space-time. Elemental vibrating strings thread through our universe, and string theory predicts that six or seven extra dimensions may coexist alongside the familiar four. Although we cannot directly experience the dimensions beyond the normal four, can we measure the characteristics of string vibrations travelling from these extra dimensions into our observable universe?

In new research published by Gary Shiu, Bret Underwood and Kathryn Zurek at UW-Madison and Devin Walker at UC-Berkeley, quantum particles are theorized to be able to resonate with dimensions beyond our four-dimensional space-time. Through this resonance, signatures from the extra dimensions could pass into our four-dimensional space-time and be measured, and from their analysis the “shape” of the extra dimensions may then be understood. This is not purely a matter of curiosity; according to string theory, the shape of the extra dimensions influences everything in our universe:

“The shape of the dimensions is crucial because, in string theory, the way the string vibrates determines the pattern of particle masses and the forces that we feel.” – UW-Madison physics professor, Gary Shiu.

The team predict particles carrying extra-dimensional signatures could be generated by the Large Hadron Collider at CERN (near Geneva, Switzerland). At very high energies, Kaluza-Klein (KK) gravitons may be created for a brief moment, carrying these signatures with them. Unfortunately, KK gravitons decay very quickly, but the decay produces a shower of lower-energy, detectable particles. By analyzing the resulting shower, a fingerprint of the KK particle’s signature may be constructed. Slight changes in the pattern of detected particles may point to a particular extra-dimensional geometry, and many signatures may be mixed, so complex computer simulations are required to understand the results coming from the LHC.
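
As a rough illustration of why the decay products carry geometric information (this is the generic textbook picture for a single flat extra dimension curled into a circle of radius R, not the specific warped geometries studied by Shiu and collaborators), the Kaluza-Klein graviton tower has masses, in natural units,

```latex
% Kaluza-Klein graviton masses for one flat extra dimension of radius R
m_n = \frac{n}{R}, \qquad n = 1, 2, 3, \ldots
```

The spacing of the resonances therefore encodes the size of the dimension; more complicated shapes change the pattern of masses and couplings, which is what an analysis of the LHC data would try to read off.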

Source: Science Daily