Large Hadron Collider discovers new pentaquark particle

Source: http://www.bbc.co.uk/news/science-environment-33517492

An illustration of one possible layout of quarks in a pentaquark particle such as those seen at LHCb (showing five tightly-bonded quarks)

Scientists at the Large Hadron Collider have announced the discovery of a new particle called the pentaquark.

It was first predicted to exist in the 1960s but, much like the Higgs boson particle before it, the pentaquark eluded science for decades until its detection at the LHC.

The discovery, which amounts to a new form of matter, was made by the Hadron Collider’s LHCb experiment.

The findings have been submitted to the journal Physical Review Letters.

In 1964, two physicists – Murray Gell-Mann and George Zweig – independently proposed the existence of the subatomic particles known as quarks.

They theorised that key properties of the particles known as baryons and mesons were best explained if they were in turn made up of other constituent particles. Zweig coined the term “aces” for the three new hypothesised building blocks, but it was Gell-Mann’s name “quark” that stuck.

This model also allowed for other quark states, such as the pentaquark. This then purely hypothetical particle would be composed of four quarks and an antiquark (the antimatter equivalent of an ordinary quark).

New states

During the mid-2000s, several teams claimed to have detected pentaquarks, but their discoveries were subsequently undermined by other experiments.

“There is quite a history with pentaquarks, which is also why we were very careful in putting this paper forward,” Patrick Koppenburg, physics co-ordinator for LHCb at Cern, told BBC News.

“It’s just the word ‘pentaquark’ which seems to be cursed somehow because there have been many discoveries that were then superseded by new results that showed that previous ones were actually fluctuations and not real signals.”

Scientists used precision measurements at the LHCb experiment to unmask the new pentaquark particle

Physicists studied the way a sub-atomic particle called Lambda b decayed – or transformed – into three other particles inside LHCb. The analysis revealed that intermediate states were sometimes involved in the production of the three particles.

These intermediate states have been named Pc(4450)+ and Pc(4380)+.

“We have examined all possibilities for these signals, and conclude that they can only be explained by pentaquark states,” said LHCb physicist Tomasz Skwarnicki of Syracuse University, US.

Previous experiments had measured only the so-called mass distribution where a statistical peak may appear against the background “noise” – the possible signature of a novel particle.

But the collider enabled researchers to look at the data from additional perspectives, namely the four angles defined by the different directions of travel taken by particles within LHCb.

“We are transforming this problem from a one-dimensional to a five-dimensional one… we are able to describe everything that happens in the decay,” said Dr Koppenburg, who first saw a signal begin to emerge in 2012.

“There is no way that what we see could be due to something else other than the addition of a new particle that was not observed before.”

An alternative layout for the pentaquark, showing a meson particle (one quark and one antiquark) and a baryon (three quarks) weakly bonded together
The first new data in two years began flowing from the LHC last month

LHCb spokesperson Guy Wilkinson commented: “The pentaquark is not just any new particle… It represents a way to aggregate quarks, namely the fundamental constituents of ordinary protons and neutrons, in a pattern that has never been observed before in over fifty years of experimental searches.

“Studying its properties may allow us to understand better how ordinary matter, the protons and neutrons from which we’re all made, is constituted.”

The LHC powered up again in April following a two-year shutdown to complete a programme of repairs and upgrades.

Should we build a village on the moon?

Source: http://www.bbc.com/future/story/20150712-should-we-build-a-village-on-the-moon

The new head of the European Space Agency has a plan – for humanity to build a ‘village on the Moon’. Richard Hollingham asks him why.

Professor Johann-Dietrich Woerner has been in his new job as Director General of the European Space Agency (Esa) for a week. In charge of a €4.4 billion annual budget, the former Chair of the German space agency is ultimately responsible for everything at Esa. Europe’s new observation, weather, communication and navigation satellites; astronauts on the International Space Station (ISS); missions to Mars, Mercury and Jupiter; and a sleepy lander on a duck-shaped comet all come under his remit.

When I ask him about his intentions for Esa, I expect a predictable and politically nuanced answer about the economic and social benefits of space or maybe the importance for science of exploring the unknown Universe. Instead, Woerner surprises me with a vision for a future of space exploration that is both ambitious and audacious.

A base on the far side of the Moon could house a telescope that could peer further into space (Credit: Science Photo Library)

“We should look to the future beyond the International Space Station,” he tells me. “We should look for a smaller spacecraft in low-Earth orbit for microgravity research and I propose a Moon village on the far side of the Moon.”

Yes, a village on the Moon.

A Moon village shouldn’t just mean some houses, a church and a town hall – Johann-Dietrich Woerner

It is just the sort of daring vision that took Nasa from a standing start to the Moon in the 1960s. Today, however – possibly constrained by its political masters – the US space agency appears to be lacking ambition.

“A Moon village shouldn’t just mean some houses, a church and a town hall,” says Woerner. “This Moon village should mean partners from all over the world contributing to this community with robotic and astronaut missions and support communication satellites.”

3D printer

There are good reasons, he says, for going back to the Moon for science as well as using it as a stepping-stone to further human exploration of the Solar System.

“The far side of the Moon is very interesting because we could have telescopes looking deep into the Universe, we could do lunar science on the Moon and the international aspect is very special,” he explains. “The Americans are looking to go to Mars very soon – and I don’t see how we can do that – before going to Mars we should test what we could do on Mars on the Moon.”

For example, Woerner suggests, the technology being investigated by Nasa to construct a Mars base using a giant 3D printer would be better tried out on the Moon first. Learning to live on an alien world is going to be tough – but the challenge would be a lot easier, particularly in an emergency, if the extraterrestrial community is only four days away from Earth rather than six months.

If a mission to Mars is to succeed, a Moon colony could be a valuable stepping stone (Credit: Science Photo Library)

Woerner envisages his Moon village as a multinational settlement involving astronauts, Russian cosmonauts and maybe even Chinese taikonauts. This would considerably extend the relatively limited number of nations involved in the ISS.

“We should have international cooperation, without any limitations, with any countries of the world,” says Woerner. “We have enough Earthly problems between different nations – space can bridge these Earthly problems and the Moon seems to me to be a good proposal.

Experience shows that there is no wall between exploration and practical applications

“Isolating a country is not the right way, a much better solution is to find ways to cooperate in space to strengthen ties between humans on Earth,” he adds, in what could be taken as a veiled criticism of America’s refusal to engage with the Chinese space programme. “If you think about an alien visiting the Earth and seeing what we are doing here, I’m not sure whether they would land.”

Moon in vogue?

Woerner has a robust response for those who criticise money spent on space exploration and astronomical research.

“Experience shows that there is no wall between exploration and practical applications,” he says. “Look at the greenhouse effect – everyone knows what it is and we use satellites to investigate it – but this was not discovered on Earth, it was discovered by an exploration mission to Venus.”

Right now the Moon village idea is just that: an idea, a proposal. No nation or agency has committed any money or mapped out the concept in any detail.

There is, however, growing interest in returning to the Moon. When, for instance, BBC Future recently asked experts to predict the next decade of space exploration they all cited the Moon as a destination of choice.

Nasa appears to have shelved further ambitions to go to the Moon, at least for the time being (Credit: Nasa/Science Photo Library)

Woerner says he is voicing the idea of a Moon village to encourage discussion about the future of space research, exploration and the applications of space technology. “I will be very happy if someone else has a better idea,” he tells me.

Nevertheless, as one of the world’s most senior and powerful space figures, Woerner’s proposal will be taken seriously. Nasa is still vague about where it plans to fly its new Orion spacecraft – fitted incidentally with an Esa service module – and the Moon would seem to be a suitably inspirational destination.

“In our genes there is something beyond just practical applications,” Woerner says. “We like to discover, to pioneer – this is humankind and this is what brings us into the future.”

150 Years After Maxwell, Scientists Discover Fundamental Property of Light

Source: http://bigthink.com/ideafeed/150-years-after-maxwell-scientists-discover-fundamental-property-of-light


We’ve written quotes about it. The Bible makes grand claims about God creating it. We think we know everything that there is to know about it — until now. Scientists have just uncovered a new fundamental property of light that sheds light on the 150-year-old classical theory of electromagnetism. This could lead to some interesting new applications for manipulating light at the nanoscale.

As it is super unusual for a pure-theory physics paper to make it into the journal Science, it is definitely worth a second glance. In the new study, researchers explore the connections between the following: James Clerk Maxwell’s famous theory of light, the quantum spin Hall effect, and topological insulators.

Seems like a whole lot of hard stuff to swallow, without a chaser. To understand how all of this works, let’s begin by considering the behaviour of electrons in the quantum spin Hall effect. Electrons possess an intrinsic spin, as if they were constantly rotating about their axis. This spin is a quantum-mechanical property, and special rules apply: the electron has only two options open to it. It can either spin clockwise or anti-clockwise; spin-up or spin-down, respectively. The magnitude of the spin, however, is always fixed.

The spin of the electron, in certain materials, can have a big effect on the way that electrons move. This effect is called “spin-orbit coupling.” It is kind of like soccer: by hitting a free kick with spin, a player can make the ball deviate either to the left or the right as it travels through the air. The direction of the ball’s movement thereby depends on the way in which it spins.

While a normal electrical current consists of an equal mixture of moving spin-up and spin-down electrons, due to the spin-orbit effect, spin-up electrons will be deflected one way, while spin-down electrons will be deflected the other. The deflected electrons will reach the edges of the material and be able to travel no farther. The spin-orbit coupling thus leads to an accumulation of electrons with different spins on opposite sides of the sample.

This effect is known as the classical spin Hall effect, and quantum mechanics adds a dramatic twist on top. The quantum-mechanical wave nature of the travelling electrons organises them into neat channels along the edges of the sample. In the bulk of the material, there is no net spin. But at each edge, there form exactly two electron-carrying channels, one for spin-up electrons and one for spin-down. These edge channels possess a further remarkable property: The electrons that move in them are impervious to the disorder and imperfections that usually cause resistance and energy loss.

This precise ordering of the electrons into spin-separated, perfectly conducting channels is known as the quantum spin Hall effect. This is a classic example of a “topological insulator.” A topological insulator is a material that is an electrical insulator on the inside, but that can conduct electricity on its surface. Such materials represent a fundamentally distinct organisation of matter and promise much in the way of spintronic applications.

Explanation time over; time to focus on the new study. It suggests that the seeds of this spin Hall effect are actually all around us, not in electrons but in light itself.

In Maxwell’s theory, light is an electromagnetic wave. This means that light travels as a synchronised oscillation of electric and magnetic fields. The new research studies the way in which these fields rotate as the wave propagates. The researchers were able to define a property of the wave, the “transverse spin,” that plays the role of the electron spin in the quantum spin Hall effect.

This spin is exactly zero in any homogeneous medium, such as air. But at the interface between two media (air and gold, for example), the character of the waves changes dramatically and a transverse spin develops. The direction of this spin is precisely locked to the direction of travel of the light wave at the interface. Thus, when viewed in the correct way, we see that the basic topological ingredients of the quantum spin Hall effect that we know for electrons are shared by light waves.

Understanding the spin-orbit effect could create new possibilities for controlling light at the nanoscale. Optical connections, for example, are seen as a way of increasing computer performance, and in this context, the spin-orbit effect could be used to rapidly reroute optical signals based on their spin. With applications proposed in optical communications, metrology, and quantum information processing, the future of this theory could be quite fascinating to say the least.

New Horizons Color Images Reveal Two Distinct Faces of Pluto, Series of Spots that Fascinate

New color images from NASA’s New Horizons spacecraft show two very different faces of the mysterious dwarf planet, one with a series of intriguing spots along the equator that are evenly spaced. Each of the spots is about 300 miles (480 kilometers) in diameter, with a surface area that’s roughly the size of the state of Missouri.
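As a quick sanity check on that comparison (a minimal sketch; the figure for Missouri's area is assumed from general knowledge, not from the article), a circular spot 480 kilometers across covers:

```python
import math

# Illustrative arithmetic only: check the "size of Missouri" comparison.
# A spot about 300 miles (480 km) in diameter has radius 240 km.
spot_area_km2 = math.pi * 240 ** 2
print(f"Spot area: {spot_area_km2:,.0f} km^2")  # about 181,000 km^2

# Missouri covers roughly 180,500 km^2 (assumed figure), so each spot
# is indeed about the size of the state.
```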

Scientists have yet to see anything quite like the dark spots; their presence has piqued the interest of the New Horizons science team, due to the remarkable consistency in their spacing and size. While the origin of the spots is a mystery for now, the answer may be revealed as the spacecraft continues its approach to the mysterious dwarf planet. “It’s a real puzzle—we don’t know what the spots are, and we can’t wait to find out,” said New Horizons principal investigator Alan Stern of the Southwest Research Institute, Boulder. “Also puzzling is the longstanding and dramatic difference in the colors and appearance of Pluto compared to its darker and grayer moon Charon.”

New Horizons team members combined black-and-white images of Pluto and Charon from the spacecraft’s Long-Range Reconnaissance Imager (LORRI) with lower-resolution color data from the Ralph instrument to produce these views. We see the planet and its largest moon in approximately true color, that is, the way they would appear if you were riding on the New Horizons spacecraft. About half of Pluto is imaged, which means features shown near the bottom of the dwarf planet are approximately at the equatorial line.

Pluto shows two remarkably different sides in these color images of the planet and its largest moon Charon taken by New Horizons

More New Horizons News for Wednesday, July 1: Instruments Prepare to Search for Clouds in Pluto’s Atmosphere

If Pluto has clouds, New Horizons can detect them. Both the high-resolution LORRI imager and the Ralph color imager will be used to look for clouds across the face of Pluto during its approach and departure from the planet. “We’re looking for clouds in our images using a number of techniques,” said science team postdoc Kelsi Singer of the Southwest Research Institute. “If we find clouds, their presence will allow us to track the speeds and directions of Pluto’s winds.”

An artist’s conception of clouds in Pluto’s atmosphere.
Credits: JHUAPL

New Horizons Team Says “Bravo!” To Earth-Based Pluto Observers

For more than two decades, planetary scientists have raced to get a spacecraft to Pluto against predictions that its atmosphere would disappear—literally freezing onto the surface—before it could be explored. This week, planetary scientists using ground-based telescopes and NASA’s SOFIA airborne observatory confirmed that “Pluto’s atmosphere is alive and well, and has not frozen out on the surface,” according to New Horizons deputy project scientist Leslie Young, Southwest Research Institute, Boulder. Added Young, “We’re delighted!”

“The SOFIA observations will also be essential for linking ground-based studies to the results from the New Horizons Pluto encounter for decades to come,” said Cathy Olkin, Southwest Research Institute, Boulder, co-investigator on NASA’s New Horizons mission.

SOFIA Aircraft
Credits: NASA-Jim Ross

PEPSSI Instrument Tastes Pluto’s Atmosphere

The Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI) instrument aboard New Horizons is sending back data daily, sampling the space environment near Pluto. PEPSSI is designed to detect ions (atoms that have lost or gained one or more electrons) that have escaped from Pluto’s atmosphere. As they depart, these atoms become caught up in the solar wind, the stream of subatomic particles that emanates from the Sun. PEPSSI’s job is to tell scientists about the composition of Pluto’s escaping atmosphere and how quickly the atmosphere is escaping.

The location of New Horizons’ Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI) instrument is shown.

New Horizons is now less than 9.5 million miles (15 million kilometers) from the Pluto system. The spacecraft is healthy and all systems are operating normally.

The Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, designed, built, and operates the New Horizons spacecraft, and manages the mission for NASA’s Science Mission Directorate. The Southwest Research Institute, based in San Antonio, leads the science team, payload operations and encounter science planning. New Horizons is part of the New Frontiers Program managed by NASA’s Marshall Space Flight Center in Huntsville, Alabama.

To view images from New Horizons and learn more about the mission visit: http://www.nasa.gov/newhorizons and http://pluto.jhuapl.edu

Follow the New Horizons mission on social media, and use the hashtag #PlutoFlyby to join the conversation. The mission’s official NASA Twitter account is @NASANewHorizons. Live updates are available on Facebook at: https://www.facebook.com/new.horizons1

Big Bang May Have Created a Mirror Universe Where Time Runs Backwards

Source: http://www.pbs.org/wgbh/nova/next/physics/big-bang-may-created-mirror-universe-time-runs-backwards/

Why does time seem to move forward? It’s a riddle that’s puzzled physicists for well over a century, and they’ve come up with numerous theories to explain time’s arrow. The latest, though, suggests that while time moves forward in our universe, it may run backwards in another, mirror universe that was created on the “other side” of the Big Bang.

Two leading theories propose to explain the direction of time by way of the relatively uniform conditions of the Big Bang. At the very start, what is now the universe was homogeneously hot, so much so that matter didn’t really exist. It was all just a superheated soup. But as the universe expanded and cooled, stars, galaxies, planets, and other celestial bodies formed, birthing the universe’s irregular structure and raising its entropy.


In a mirror universe, from our perspective, time may run backwards from the Big Bang.

One theory, proposed in 2004 by Sean Carroll, now a professor at Caltech, and Jennifer Chen, then his graduate student, says that time moves forward because of the contrast in entropy between then and now, with an emphasis on the fact that the future universe will be so much more disordered than the past. That movement toward high entropy gives time its direction.

The new theory says a low entropy early universe is inevitable because of gravity, and ultimately that’s what gives time its arrow. To test the idea, the theory’s proponents assembled a simple model with nothing more than 1,000 particles and the physics of Newtonian gravity. Here’s Lee Billings, reporting for Scientific American:

The system’s complexity is at its lowest when all the particles come together in a densely packed cloud, a state of minimum size and maximum uniformity roughly analogous to the big bang. The team’s analysis showed that essentially every configuration of particles, regardless of their number and scale, would evolve into this low-complexity state. Thus, the sheer force of gravity sets the stage for the system’s expansion and the origin of time’s arrow, all without any delicate fine-tuning to first establish a low-entropy initial condition.
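To make the setup tangible, here is a minimal toy sketch in Python of the kind of simulation described in the quote. It is emphatically not the authors’ model: they used 1,000 particles and a specific complexity measure, while this toy uses a few dozen particles, softened Newtonian gravity and a crude root-mean-square size; all parameters are invented for illustration.

```python
import numpy as np

def accelerations(pos, masses, soft=0.05):
    # Pairwise Newtonian gravity in G = 1 units, softened to avoid
    # singularities during close encounters.
    diff = pos[None, :, :] - pos[:, None, :]            # diff[i, j] = r_j - r_i
    dist2 = (diff ** 2).sum(axis=-1) + soft ** 2
    inv3 = dist2 ** -1.5
    np.fill_diagonal(inv3, 0.0)
    return (diff * inv3[..., None] * masses[None, :, None]).sum(axis=1)

def rms_size(pos):
    # Crude "size" of the configuration: rms distance from the centre of mass.
    return np.sqrt(((pos - pos.mean(axis=0)) ** 2).sum(axis=-1).mean())

def evolve(pos, vel, masses, dt, steps):
    # Leapfrog (kick-drift-kick) integration, recording the size at each step.
    sizes = []
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, masses)
        vel += 0.5 * dt * acc
        sizes.append(rms_size(pos))
    return sizes

rng = np.random.default_rng(0)
N = 50                                   # the study used 1,000; 50 keeps this fast
masses = np.ones(N)
pos0 = rng.normal(size=(N, 3))
vel0 = rng.normal(scale=0.1, size=(N, 3))

# Reversing all velocities runs the same Newtonian dynamics "backward",
# so we can trace the configuration in both time directions.
forward = evolve(pos0.copy(), vel0.copy(), masses, 0.01, 2000)
backward = evolve(pos0.copy(), -vel0.copy(), masses, 0.01, 2000)

# Glued together, the size profile typically dips to a minimum near the
# dense central state, with "time" pointing away from it on both sides.
profile = backward[::-1] + [rms_size(pos0)] + forward
print("minimum size at step", int(np.argmin(profile)), "of", len(profile))
```

The point of the exercise is the shape of the size profile: it dips toward a minimum near the dense central state, and observers on either side of that minimum would each call “the future” the direction of increasing size.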

But here’s the twist: The expansion after the simulated Big Bang didn’t just happen in one direction, but two. The simple Big Bang they modeled produced two universes, one a mirror of the other. In one universe, time appears to run forwards. In the other, time runs backwards, at least from our perspective.

Here’s Billings again, interviewing lead author Julian Barbour from the University of Oxford:

“If they were complicated enough, both sides could sustain observers who would perceive time going in opposite directions. Any intelligent beings there would define their arrow of time as moving away from this central state. They would think we now live in their deepest past.”

From that perspective, maybe George Lucas’s Star Wars didn’t take place a long time ago in a galaxy far, far away, but in the far future—our deepest past—of our mirror universe.

Scientists discover fundamental property of light – 150 years after Maxwell

Source: http://theconversation.com/scientists-discover-fundamental-property-of-light-150-years-after-maxwell-43928

Light plays a vital role in our everyday lives and technologies based on light are all around us. So we might expect that our understanding of light is pretty settled. But scientists have just uncovered a new fundamental property of light that gives new insight into the 150-year-old classical theory of electromagnetism and which could lead to applications manipulating light at the nanoscale.

It is unusual for a pure-theory physics paper to make it into the journal Science. So when one does, it’s worth a closer look. In the new study, researchers bring together one of physics’ most venerable sets of equations – those of James Clerk Maxwell’s famous theory of light – with one of the hot topics in modern solid-state physics: the quantum spin Hall effect and topological insulators.

To understand what the fuss is about, let’s first consider the behaviour of electrons in the quantum spin Hall effect. Electrons possess an intrinsic spin as if they were tiny spinning-tops, constantly rotating about their axis. This spin is a quantum-mechanical property, however, and special rules apply – the electron has only two options open to it: it can either spin clockwise or anticlockwise (conventionally called spin-up or spin-down), but the magnitude of the spin is always fixed.
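For the record, that fixed magnitude can be made precise; this is standard quantum mechanics rather than anything specific to the new paper. For a spin-1/2 particle such as the electron:

$$|\mathbf{S}| = \hbar\sqrt{s(s+1)} = \frac{\sqrt{3}}{2}\,\hbar \quad \left(s = \tfrac{1}{2}\right), \qquad S_z = \pm\frac{\hbar}{2}$$

The two allowed values of $S_z$ are the “spin-up” and “spin-down” options, while the magnitude $|\mathbf{S}|$ never changes.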

In certain materials, the spin of the electron can have a big effect on the way electrons move. This effect is called “spin-orbit coupling” and we can get an idea of how it works with a footballing analogy. By hitting a free kick with spin, a footballer can make the ball deviate to the left or the right as it travels through the air. The direction of the movement depends on which way the ball is spinning.

Bend it like Beckham. Ronnie Macdonald/Flickr, CC BY-SA

Spin-orbit coupling causes electrons to experience an analogous spin-dependent deflection as they travel, although the effect arises not from the Magnus effect as in the case for the football, but from electric fields within the material.

A normal electrical current consists of an equal mixture of moving spin-up and spin-down electrons. Due to the spin-orbit effect, spin-up electrons will be deflected one way, while spin-down electrons will be deflected the other. Eventually the deflected electrons will reach the edges of the material and be able to travel no further. The spin-orbit coupling thus leads to an accumulation of electrons with different spins on opposite sides of the sample.

This effect is known as the classical spin Hall effect, and quantum mechanics adds a dramatic twist on top. The quantum-mechanical wave nature of the travelling electrons organises them into neat channels along the edges of the sample. In the bulk of the material, there is no net spin. But at each edge, there form exactly two electron-carrying channels, one for spin-up electrons and one for spin-down. These edge channels possess a further remarkable property: the electrons that move in them are impervious to the disorder and imperfections that usually cause resistance and energy loss.

This precise ordering of the electrons into spin-separated, perfectly conducting channels is known as the quantum spin Hall effect, which is a classic example of a “topological insulator” – a material that is an electrical insulator on the inside but that can conduct electricity on its surface. Such materials represent a fundamentally distinct organisation of matter and promise much in the way of spintronic applications. Read heads of hard drives based on this technology are currently used in industry.

Beginning to see the light

Now, the new study suggests that the seeds of this seemingly exotic quantum spin Hall effect are actually all around us. And it is not to electrons that we should look to find them, but rather to light itself.

In modern physics, matter can be described either as a wave or a particle. In Maxwell’s theory, light is an electromagnetic wave. This means it travels as a synchronised oscillation of electric and magnetic fields. By considering the way in which these fields rotate as the wave propagates, the researchers were able to define a property of the wave, the “transverse spin”, that plays the role of the electron spin in the quantum spin Hall effect.
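As a concrete picture (standard textbook electromagnetism, not a result of the new paper), a plane light wave travelling along $z$ can be written

$$\mathbf{E} = E_0 \cos(kz - \omega t)\,\hat{\mathbf{x}}, \qquad \mathbf{B} = \frac{E_0}{c}\,\cos(kz - \omega t)\,\hat{\mathbf{y}}$$

Both fields oscillate in phase, perpendicular to the direction of travel; it is how such field vectors rotate in more structured waves, at interfaces for example, that gives rise to the transverse spin described next.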

In a homogeneous medium, like air, this spin is exactly zero. However, at the interface between two media (air and gold, for example), the character of the waves changes dramatically and a transverse spin develops. Furthermore, the direction of this spin is precisely locked to the direction of travel of the light wave at the interface. Thus, when viewed in the correct way, we see that the basic topological ingredients of the quantum spin Hall effect that we know for electrons are shared by light waves.

This is important because there has been an array of high-profile experiments demonstrating coupling between the spin of light and its direction of propagation at surfaces. This new work gives an integrative interpretation of these experiments as revealing light’s intrinsic quantum spin Hall effect. It also points to a certain universality in the behaviour of waves at surfaces, be they quantum-mechanical electron waves or Maxwell’s classical waves of light.

Harnessing the spin-orbit effect will open new possibilities for controlling light at the nanoscale. Optical connections, for example, are seen as a way of increasing computer performance, and in this context, the spin-orbit effect could be used to rapidly reroute optical signals based on their spin. With applications proposed in optical communications, metrology, and quantum information processing, it will be interesting to see how the impact of this new twist on an old theory unfolds.

How to overthrow a Martian dictatorship

Source: http://www.bbc.com/future/story/20150619-how-to-overthrow-a-martian-dictatorship

The governments we create on other worlds might turn nasty. Richard Hollingham meets a group plotting revolution in space.

Two short blocks from the London headquarters of Britain’s security service, MI6, a group of 30 men and women is plotting to overthrow the government.

Not – and I should make this abundantly clear for any spooks reading this – the British government, nor any government on Earth, but a tyrannical administration on an alien world in the future.

This is not a game. The scientists, engineers, social scientists, philosophers and writers gathered at the British Interplanetary Society in London are taking their task seriously – studying, with academic rigour, the problem of toppling despotic extraterrestrial regimes.

We’ve got a chance to think about what the problems might be in outer space before we go there – Charles Cockell

This is the third annual conference on extraterrestrial liberty. Last year the event tackled the challenge of writing a constitution for an alien settlement, concluding that successful space colonies should base laws and liberties on the US Constitution and Bill of Rights.

“This year we’re discussing what happens if you don’t like the government you’ve created and want to overthrow it,” says conference organiser Charles Cockell, a professor of astrobiology at the University of Edinburgh.

Conclusions from these meetings will be published as essays, designed to serve as manuals for future spacefarers.

A space colony’s government will be in charge of all the resources needed to maintain life (Credit: Science Photo Library)

“We hope the discussions we have will constitute the first ideas on extraterrestrial liberty,” Cockell says. “We’ve got a chance to think about what the problems might be in outer space before we go there.”

The scenarios the group is contemplating are easiest to imagine if you think about what a space colony might be like. Perhaps a domed settlement with a few hundred residents, beneath a thin dusty Martian sky. A fragile and isolated outpost of humanity 225 million kilometres from the home world. With a brutal dictator and his cronies in charge of the oxygen generators, for instance.

Non-violent opposition

“Say, for example, you don’t like your government and you resort to revolution,” says Cockell. “Someone goes and smashes up the habitat, destroys the windows and instantly the place is depressurised, the oxygen is lost and everyone dies.

“The consequences of violence in space could be much more catastrophic than on Earth,” he warns, “So how do you dissent in an environment in which violent disobedience might kill everyone?”

The answer lies, Cockell believes, in preventing dictatorships emerging in the first place. This would be achieved by building non-violent means of opposition to government into the rulebook, perhaps through organised labour systems – similar to unions on Earth – or by holding the leadership to account through journalism and media.

In space, private corporations could be just as ruthless and despotic as the worst governments

“Once you stop a free press in an extraterrestrial environment, you’re actually in deep trouble,” he says.

The physical structure of the settlements could also be designed to minimise the effects of conflict, with air, water and power systems in multiple locations. Not only would this reduce vulnerability to a breakdown or failure but it would avoid the dangers of a central point of control.

However, even with a free press or organised unions there are issues in space that do not arise on Earth – particularly when companies are involved.

Space colonists may have to come up with non-violent ways to dissuade autocratic rule (Credit: Science Photo Library)

“As we know private corporations can be just as ruthless and despotic as the worst governments,” says Cockell. “If you strike, then maybe the corporation says ‘that’s fine – let me show you to the airlock and you can leave’ and off you go into the vacuum of space.”

And while freedoms, liberties and labour laws have evolved on Earth – at least in democratic nations – they may need to be adapted before anyone settles elsewhere. Space is a unique environment and there is a balance to be struck between slavery and total freedom. Opting out is not an option. A Martian colony that is so libertarian that everyone sits around doing nothing all day is unlikely to survive for long.

“We need to arrive at a balance between a society that maximises civil liberties but also maximises the potential for people to survive the lethal conditions of space,” says Cockell.

Sci-fi signposts

Although this may be one of the few times that academics have formally contemplated the challenges of off-planet living, science fiction writers have been thinking about it for decades.

One of the British Interplanetary Society’s most famous members is Arthur C Clarke and conference delegates include one of today’s best-known and acclaimed sci-fi writers, Stephen Baxter.

Baxter’s 2010 novel Ark, for instance, features a starship on a multi-generational mission to a distant new world where precisely these issues of governance arise. “You have a group of young very competitive candidates applying to get on this thing,” explains Baxter, “and then they find they’re stuck there.”

Off-Earth colonies will have to find a balance between liberty and a society where everyone contributes (Credit: Science Photo Library)

“At first it’s military discipline, then they go for a consensual government but that breaks down and a dictator takes over because he gets hold of the water supply – very relevant to this discussion,” Baxter says. “You also have a middle generation who are going to live and die on the ship and they evolve a rebellious teenage culture.” Some people do not even believe they are on a spaceship but in some sort of prison or social experiment.

“Evolving a society inside a box is a fascinating area to think about,” says Baxter. “Sci-fi writers are always thinking one step beyond, it’s a great bed of thought experiments.”

The more you anticipate, the more chance you have to get it right – Stephen Baxter

In fact one of the first known books on lunar revolution, The Birth of a New Republic, was written by sci-fi author Jack Williamson in the 1930s. The novel explores tensions within society and between the Moon and Earth. Robert Heinlein’s 1966 novel, The Moon is a Harsh Mistress, even explores the idea of a prison colony on the Moon with a despotic prison warder who controls the air supply.

For Baxter, the conference helps shift these sci-fi ideas into practical reality.

“The more you anticipate, the more chance you have to get it right,” he says. “It’s not that far away before we have long-term missions away from the Earth and we have to look at the psychology of people in enclosed environments and construct a civilisation on this basis.”

In the 1930s a colony on the Moon was a distant dream. Even in 1966 humanity was three years away from that first step. A long duration mission could happen during our lifetime. If it is to succeed and humans are to successfully colonise new worlds, we need to be prepared.

All Space Colonies Will Begin as Dictatorships

Source: http://bigthink.com/ideafeed/will-all-space-colonies-need-to-begin-as-dictatorships

America is a land of plenty, but an American colony on Mars, which NASA scientists and Elon Musk’s SpaceX hope to begin by 2030, would be anything but. A scarcity of crucial resources like water and air, and the high stakes of even temporarily running out, suggest that any Martian government would function as a military dictatorship.

That poses serious challenges to maintaining the liberties we prize here on Earth. So to answer that challenge, the British Interplanetary Society recently convened in London to imagine what a free and democratic Martian colony would look like.

In order for the colony to survive, violent revolution would be best avoided at all costs, as conference organiser Charles Cockell, professor of astrobiology at the University of Edinburgh, explained:

“Say, for example, you don’t like your government and you resort to revolution,” says Cockell. “Someone goes and smashes up the habitat, destroys the windows and instantly the place is depressurised, the oxygen is lost, and everyone dies.

The consequences of violence in space could be much more catastrophic than on Earth, so how do you dissent in an environment in which violent disobedience might kill everyone?”

The conference ultimately recommended founding documents based on the American Constitution and Bill of Rights, believing that free expression of thought and democratic principles are the best combination of rule and freedom to secure peace.

One freedom we currently enjoy, the freedom to opt out of society, would likely not be possible on Mars. Everyone’s efforts and skills would be needed for the group to survive, and hoarding resources for one’s private use would soon spell death.

Former NASA astronaut Ron Garan argues that a moon base would be the best way to create space-based societies. If we get our footing closer to home, a Martian venture would be more successful.

Rosetta and Philae: Searching for a good signal

Source: http://blogs.esa.int/rosetta/2015/06/26/rosetta-and-philae-searching-for-a-good-signal/

After seven months in hibernation on the surface of Comet 67P/Churyumov-Gerasimenko, Rosetta’s lander Philae communicated with Earth via the orbiter on 13 June. Since then, seven periods of connection have been confirmed between the orbiter and lander, but all have been intermittent. One of the key issues being worked on is to adjust Rosetta’s trajectory to see whether a more reliable communications link can be established with Philae. This report describes the status of those efforts as of 26 June, and has been prepared with inputs from ESA’s Rosetta Science Ground Segment team at ESAC and the flight control team at ESOC, along with the Lander Control Centre at DLR.

Credits: ESA-C.Carreau

When have contacts been made?
Confirmed contacts between Rosetta and Philae have been made on 13, 14, 19, 20, 21, 23, and 24 June, but were intermittent during those contact periods. For example, the contact on 19 June was stable but split into two short periods of two minutes each. Conversely, the contact on 24 June started at 17:20 UT (on board Rosetta) and ran for 20 minutes, but the quality of the link was very patchy and only about 80 packets of telemetry were received. Prior to this, on Tuesday, 23 June, there was a 20-second contact, but no stable link was established and consequently no telemetry data were received.

How frequently do Rosetta and Philae try to make contact?
Comet 67P/C-G rotates with a 12.4 hour period and thus Philae’s location is not always visible to Rosetta. Roughly speaking, there are two opportunities for contact between the two spacecraft each Earth day, but their duration depends on the orientation of the transmitting antenna on Philae and the location of Rosetta along its trajectory around the comet. Similarly, as the comet rotates, Philae is not always in sunlight and thus not always generating enough power via its solar panels to receive and transmit signals. At the moment, the predicted contact windows vary between a few tens of minutes and up to three hours. During these contact windows, the ideal situation would be that a powered-up Philae hears Rosetta’s calling signal and responds by establishing a link back to the orbiter, then transmitting the data stored on-board via that link.
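To get a feel for why there are roughly two opportunities per Earth day, here is a rough toy calculation in Python. Almost all the geometry is invented for illustration (fixed orbiter and Sun longitudes, an equatorial lander, simple horizon and illumination thresholds); only the 12.4-hour rotation period comes from the article.

```python
import numpy as np

# Toy visibility model, for illustration only. Real geometry (antenna
# patterns, rugged terrain, Rosetta's own motion) is far more complicated.
P = 12.4 * 3600.0                        # comet rotation period [s]
t = np.arange(0.0, 24 * 3600.0, 60.0)    # one Earth day, 1-minute steps
phase = 2.0 * np.pi * t / P              # Philae's rotation angle [rad]

ORBITER_LON = 0.0        # assumed longitude of Rosetta, held fixed
SUN_LON = np.pi / 3.0    # assumed longitude of the Sun

visible = np.cos(phase - ORBITER_LON) > 0.0   # orbiter above the local horizon
powered = np.cos(phase - SUN_LON) > 0.3       # panels sufficiently sunlit

contact = visible & powered

# Collect contiguous contact windows.
windows, start = [], None
for i, ok in enumerate(contact):
    if ok and start is None:
        start = t[i]
    elif not ok and start is not None:
        windows.append((start, t[i]))
        start = None
if start is not None:
    windows.append((start, t[-1]))

print(f"{len(windows)} possible contact window(s) per Earth day:")
for a, b in windows:
    print(f"  {a/3600:5.2f} h -> {b/3600:5.2f} h  ({(b - a)/60:4.0f} min)")
```

Since 24 hours spans just under two comet rotations (24 / 12.4 ≈ 1.9), the overlap of “orbiter visible” and “lander powered” recurs roughly twice per Earth day, matching the description above.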

Why do we care about a stable connection?
Data are stored in two mass memories on-board Philae, and in order to download the data in the most efficient way possible, a stable ‘call’ duration of about 50 minutes is desired: it can take around 20 minutes to dump the data from each of the two memories to the Rosetta orbiter (around 40 minutes in total), and additional time is needed to confirm that a stable link has been acquired in the first place, and for uploading new commands.

Artist impression of Philae on the surface of 67P/C-G. Credit: ESA/ATG medialab

Can the lander still be operated with short communications links?
Yes, but this situation is not ideal because it has an impact on the overall time available to perform scientific operations in the long term. That’s because each time a new science sequence was initiated, it would take longer to get the accumulated science data back and free up on-board storage before new commands could be uploaded and subsequently executed.


What might be affecting the link from the lander’s point of view?
A number of factors regarding the lander’s current status may contribute to the quality of the communications links observed so far. These include:

  • Lander power availability: the orbiter needs to be flying overhead the lander’s position when the lander is ‘awake’, that is, when it is generating enough power to have its receivers and transmitter switched on.
  • Lander location and orientation: the orientation of the lander on the surface of the comet determines how its antenna pattern is projected into space, and the rugged topography immediately surrounding Philae can also distort that antenna pattern.
  • Lander health status: errors in the various on-board units could also affect the chances of making a stable link.

What might be affecting the link from the orbiter’s point of view?
Equivalently, there are a number of parameters related to the orbiter that could be influencing the quality of the communications link observed so far:

  • Distance to the comet: the strength of the signal received by the orbiter diminishes as the square of the distance between the orbiter and the lander, and thus the chances of a stable link are reduced if Rosetta is too far from the comet (see the short worked example after this list).
  • Trajectory of the orbiter: to make a link, the antenna pattern of the lander must overlap with that of the orbiter, and given the constraints set by the lander antenna pattern, certain trajectories of the orbiter around the comet will be more effective at seeing a ‘clean’ lander signal than others.
  • Pointing of the orbiter: the exact orientation of the orbiter in space plays a role, because if the dedicated, non-steerable antenna used to communicate with the lander is not pointed directly at the comet, the strength of the signal received from the lander will be reduced. Some science observations being made by Rosetta require the orbiter to be pointed off the nucleus, but steps are being taken to avoid that situation during potential contacts with Philae.
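A minimal sketch of that inverse-square relationship, using distances that appear later in this article (illustrative arithmetic only, not flight software):

```python
def relative_power(d_km: float, d_ref_km: float = 200.0) -> float:
    """Received power at distance d_km, relative to the power at d_ref_km.

    Free-space signal power falls off as 1/d**2, so the ratio is (d_ref/d)**2.
    """
    return (d_ref_km / d_km) ** 2

for d in (200.0, 180.0, 165.0, 160.0):
    print(f"{d:5.0f} km: {relative_power(d):.2f}x the power at 200 km")

# Edging in from 200 km to 160 km boosts the received signal by
# (200/160)**2 = 1.56, i.e. roughly 56% more power.
```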

Can any of these factors be changed?
Until a stable link is achieved between the orbiter and the lander and new commands uploaded, it is obviously not possible to try and ‘tune’ the parameters on-board the lander. Thus, present efforts are focused on improving the factors related to the orbiter. However, this is not straightforward, as the spacecraft operations team must keep the safety of the orbiter as their highest priority at a time when the comet is becoming more and more active.

Processed NAVCAM image of Comet 67P/C-G taken on 15 June 2015. Credits: ESA/Rosetta/NAVCAM – CC BY-SA IGO 3.0

How close can Rosetta get to the comet and still remain safe?
In order to navigate around the comet, Rosetta uses its star trackers to determine its orientation in space, and thus keep its instruments and high gain antenna pointed in the right directions. However, in the dusty environment of a comet, individual dust particles can mimic stars, making it difficult for the star trackers to operate effectively. If the star trackers are unable to determine the spacecraft’s orientation, it will go into safe mode, as experienced during one of the March close fly-bys of the comet. In the worst case, contact with Earth may be lost, which would lead to the spacecraft entering an autonomous mode that could take days or weeks to recover from.

The increasingly active environment of Comet 67P/C-G is proving to be dustier than planned for when Rosetta was built, and thus since March the spacecraft has been flying at safer distances of roughly 200 km from the comet to avoid similar issues occurring. It has also been moved into a so-called ‘terminator trajectory’ around the comet, also aimed at reducing the impact of the dusty environment on the star trackers.

The spacecraft operations team are slowly edging Rosetta closer to the comet in this terminator trajectory, closely monitoring the performance of the star trackers in ‘continuous tracking mode’, and planning the trajectory for the days ahead. At the moment, Rosetta is following a trajectory scheme that allows it to come as close as 165 km from the comet, and there are signs that dust interference is becoming an issue again. A manoeuvre planned for Saturday morning will move the spacecraft to 160 km by 30 June and the team will assess the star tracker performance at that time in order to determine if even closer orbits are possible, or if Rosetta needs to be moved further away again.

How is the trajectory being changed?
The current terminator trajectory of Rosetta has it flying over the boundary between comet day and night. The main change that can be made within this scheme is to the latitude of the ground track of the orbit on the surface of the comet. This is currently being stepped down from +55 degrees (on 24 June) to –8 degrees (on 26 June), with a better quality signal between Rosetta and Philae being detected at lower latitudes. For comparison, just after landing in November 2014, Rosetta was flying over latitudes of +15 to +25 degrees. In the coming week, the latitudes will slowly be stepped back up again from –8 to +50 degrees, with a careful assessment being made of signal strength at low latitudes again.

How long is it going to take to resolve the situation?
This is a very dynamic, real-time process, and thus it is hard to predict when a stable link might be made between Rosetta and Philae. The mission teams are working on a short-term trajectory planning schedule, which is updated every Monday and Thursday morning. Changes to Rosetta’s trajectory are made depending on the latest information with regard to lander communications and the performance of the orbiter’s star trackers in the days between each decision point. In addition, representatives from ESA’s Rosetta team, the Lander Control Centre at DLR in Cologne, and the Lander Science Operations and Navigation Centre at CNES, Toulouse discuss daily the latest status of any lander communication events.

Visit CERN sites new to Google Street View

Source: http://home.web.cern.ch/about/updates/2015/06/visit-cern-sites-new-google-street-view

Link to view: https://www.google.com/maps/@46.233964,6.056625,3a,75y,288.51h,94.5t/data=!3m6!1e1!3m4!1sNy0OR587lg2ogm2JXscnog!2e0!7i13312!8i6656

Link to view: https://www.google.com/maps/@46.23258,6.047789,3a,75y,23.27h,93.73t/data=!3m7!1e1!3m5!1seWymMV0gcOAtt8IyRcqU6Q!2e0!3e5!7i13312!8i6656?hl=en-US

NASA spies 3-mile-tall ‘pyramid,’ more bright spots on Ceres

Source: http://www.cnet.com/uk/news/3-mile-tall-pyramid-more-bright-spots-spied-on-ceres/

Dwarf planet Ceres gets weirder as NASA’s Dawn spacecraft gets closer. Check out the latest shots from the cosmic paparazzi.

The reflective area around the crater in the top right of the image is what the Dawn team calls “spot 1”.
Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Ancient astronaut alert! More weird features have been spotted by NASA’s Dawn spacecraft on Ceres, a dwarf planet and the largest object in the asteroid belt between Mars and Jupiter.

As Dawn first made its approach to Ceres earlier this year, it caught sight of large, bright and mysterious reflective spots in a crater on the big rock, which is also believed to contain quite a bit of water, ice and/or mud in its interior.

Now that Dawn is orbiting at an altitude of just 2,700 miles, those big spots remain a mystery (the leading guess is still reflective patches of ice or salts), but the spacecraft is also beginning to pick out other bright spots and an odd pyramid-shaped peak that NASA estimates to be three miles tall, which would put it higher than any of the Rocky Mountains. The image with the peak was taken on June 6 and released Wednesday.

You’re pretty far from ancient Egypt, son…
Credit: NASA/JPL-Caltech/UCLA/MPS/DLR/IDA

Plenty of observers have suggested — with varying degrees of seriousness — that the bright lights on Ceres could be evidence of current or past alien occupation of the dwarf planet. The discovery of a mountain-sized pyramid feature must have some cable channels looking into how much it would cost to get their own film crew to Ceres.

Perhaps the ancient Egyptians and ancient Indian astronauts had a celestial joint venture of sorts going on back in the day? Or perhaps Ceres is more geologically interesting, both today and in the past, than meets the eye.

Rosetta’s lander Philae wakes up from hibernation

Source: http://blogs.esa.int/rosetta/2015/06/14/rosettas-lander-philae-wakes-up-from-hibernation/

Rosetta’s lander Philae is out of hibernation!

The signals were received at ESA’s European Space Operations Centre in Darmstadt at 22:28 CEST on 13 June. More than 300 data packets have been analysed by the teams at the Lander Control Center at the German Aerospace Center (DLR).

“Philae is doing very well: It has an operating temperature of -35°C and has 24 watts available,” explains DLR Philae Project Manager Dr. Stephan Ulamec. “The lander is ready for operations.”

For 85 seconds Philae “spoke” with its team on ground, via Rosetta, in the first contact since going into hibernation in November.

When analysing the status data it became clear that Philae also must have been awake earlier: “We have also received historical data – so far, however, the lander had not been able to contact us earlier.”

Now the scientists are waiting for the next contact. There are still more than 8,000 data packets in Philae’s mass memory which will give the DLR team information on what happened to the lander in the past few days on Comet 67P/Churyumov-Gerasimenko.

Philae shut down on 15 November 2014 at 1:15 CET after being in operation on the comet for about 60 hours. Since 12 March 2015 the communication unit on orbiter Rosetta has been turned on to listen out for the lander.

More information when we have it!

Rosetta is an ESA mission with contributions from its Member States and NASA. Rosetta’s Philae lander is contributed by a consortium led by DLR, MPS, CNES and ASI.


More on this article: http://www.bbc.co.uk/news/science-environment-33126885

A practical guide to countering science denial

Source: http://phys.org/news/2015-06-countering-science-denial.html
Science denial can come in many forms, but you need to be careful when debunking it. Credit: Bryan Rosengrant/Flickr, CC BY-ND

It should go without saying that science should dictate how we respond to science denial. So what does scientific research tell us?

One effective way to reduce the influence of science denial is through “inoculation”: you can build resistance to misinformation by exposing people to a weak form of the misinformation.

How do we practically achieve that? There are two key elements to refuting misinformation. The first half of a debunking is offering a factual alternative. To understand what I mean by this, you need to understand what happens in a person’s mind when you correct a misconception.

People build mental models of how the world works, where all the different parts of the model fit together like cogs. Imagine one of those cogs is a myth. When you explain that the myth is false, you pluck out that cog, leaving a gap in their mental model.

Debunking myths creates gaps in people’s mental models. That gap needs to be filled with an alternative fact. Credit: John Cook, Author provided.

But people feel uncomfortable with an incomplete model. They want to feel as if they know what’s going on. So if you create a gap, you need to fill the gap with an alternative fact.

For example, it’s not enough to just provide evidence that a suspect in a murder trial is innocent. To prove them innocent – at least in people’s minds – you need to provide an alternative suspect.

However, it’s not enough to simply explain the facts. The golden rule of debunking, from the book Made To Stick, by Chip and Dan Heath, is to fight sticky myths with even stickier facts. So you need to make your science sticky, meaning simple, concrete messages that grab attention and stick in the memory.

How do you make science sticky? Chip and Dan Heath suggest the acronym SUCCES to summarise the characteristics of sticky science:

Simple: To paraphrase a quote from Nobel prize winner Ernest Rutherford: if you can’t explain your physics simply, it’s probably not very good physics.

Unexpected: If your science is counter-intuitive, embrace it! Use the unexpectedness to take people by surprise.

Credible: Ideally, source your information from the most credible source of information available: peer-reviewed research.

Concrete: One of the most powerful tools to make abstract science concrete is analogies or metaphors.

Emotional: Scientists are trained to remove emotion from their science. However, even scientists are human and it can be quite powerful when we express our passion for science or communicate how our results affect us personally.

Stories: Shape your science into a compelling narrative.


Mythbusting

Let’s say you’ve put in the hard yards and shaped your science into a simple, concrete, sticky message. Congratulations, you’re halfway there! As well as explaining why the facts are right, you also need to explain why the myth is wrong. But there’s a psychological danger to be wary of when refuting misinformation.

When you mention a myth, you make people more familiar with it. But the more familiar people are with a piece of information, the more likely they are to think it’s true. This means you risk a “familiarity backfire effect”, reinforcing the myth in people’s minds.

There are several simple techniques to avoid the familiarity backfire effect. First, put the emphasis on the facts rather than the myth. Lead with the science you wish to communicate rather than the myth. Unfortunately, most debunking articles take the worst possible approach: repeat the myth in the headline.

Second, provide an explicit warning before mentioning the myth. This puts people cognitively on guard so they’re less likely to be influenced by the myth. An explicit warning can be as simple as “A common myth is…”.

Third, explain the fallacy that the myth uses to distort the facts. This gives people the ability to reconcile the facts with the myth. A useful framework for identifying fallacies is the five characteristics of science denial, which cover a range of fallacies, particularly logical fallacies.

Pulling this all together, if you debunk misinformation with an article, presentation or even in casual conversation, try to lead with a sticky fact. Before you mention the myth, warn people that you’re about to mention a myth. Then explain the fallacy that the myth uses to distort the facts.

Putting it into practice

Let me give an example of this debunking technique in action. Say someone says to you that global warming is a myth. Here’s how you might respond:

97% of climate scientists agree that humans are causing global warming. This has been found in a number of studies, using independent methods. A 2009 survey conducted by the University of Illinois found that among actively publishing climate scientists, 97.4% agreed that human activity was increasing global temperatures. A 2010 study from Princeton University analysed public statements about climate change and found that among scientists who had published peer-reviewed research about climate change, 97.5% agreed with the consensus.

I was part of a team that in 2013 found that among relevant climate papers published over 21 years, 97.1% affirmed human-caused global warming.

However, one myth argues that there is no scientific consensus on climate change, citing a petition of 31,000 dissenting scientists. This myth uses the technique of fake experts: 99.9% of those 31,000 scientists are not climate scientists. The qualification to be listed in the petition is a science degree, so that the list includes computer scientists, engineers and medical scientists, but very few with actual expertise in climate science.

And there you have it.

In our online course, Making Sense of Climate Science Denial, we debunk 50 of the most common myths about climate change. Each lecture adopts the Fact-Myth-Fallacy structure, where we first explain the science, then introduce the myth, then explain the fallacy that the myth uses.

In our sixth week on the psychology of debunking, we also stress the importance of an evidence-based approach to science communication itself. It would be most ironic, after all, if we were to ignore the science in our response to denial.

John Cook is Climate Communication Research Fellow at The University of Queensland.

Scientists emerge from isolated dome on Hawaii volcano slope

Source: http://phys.org/news/2015-06-scientists-emerge-isolated-dome-hawaii.html
In this March 10, 2015, photo provided by the University of Hawaii at Manoa HI-SEAS Human Factors Performance Study, mission commander Martha Lenio collects a soil sample outside the dome in which six scientists lived an isolated existence to simulate life on a mission to Mars, on the bleak slopes of the dormant volcano Mauna Loa near Hilo on the Big Island of Hawaii. The scientists, who took part in a human performance study funded by NASA, stepped outside the dome at 8,000 feet elevation to feel fresh air on their skin on Saturday, June 13, 2015, the first time they’d ventured out without donning a space suit in eight months. (Neil Scheibelhut/University of Hawaii at Manoa via AP)

Six scientists who were living under a dome on the slopes of a dormant Hawaii volcano for eight months to simulate life on Mars have emerged from isolation.

The crew stepped outside the dome that’s 8,000 feet (2,400 meters) up the slopes of Mauna Loa to feel fresh air on their skin Saturday. It was the first time they had left without donning a space suit.

The scientists are part of a human performance study funded by NASA that tracked how they worked together as a team. They have been monitored by surveillance cameras, trackers and electronic surveys.

Crew member Jocelyn Dunn said it was awesome to feel the sensation of wind on her skin.

“When we first walked out the door, it was scary not to have a suit on,” said Dunn, 27, a doctoral candidate at Purdue University. “We’ve been pretending for so long.”

The dome’s volcanic location, silence and its simulated airlock seal provided an atmosphere similar to space. Looking out the dome’s porthole windows, all the scientists could see were lava fields and mountains, said University of Hawaii professor Kim Binsted, principal investigator for the study.

Tracking the crew’s emotions and performance in the isolated environment could help ground crews during future missions to determine if a crew member is becoming depressed or if the team is having communication problems.

“Astronauts are very stoic people, very level-headed, and there’s a certain hesitancy to report problems,” Binsted said. “So this is a way for people on the ground to detect cohesion-related problems before they become a real issue.”

This April 9, 2015, photo provided by the University of Hawaii at Manoa HI-SEAS Human Factors Performance Study shows the interior of the dome. (Zak Wilson/University of Hawaii at Manoa via AP)

Spending eight months in a confined space with six people had its challenges, but crew members relieved stress doing team workouts and yoga. They were able to use a solar-powered treadmill and stationary bike, but only in the afternoons on sunny days.

“When you’re having a good day, it’s fine, it’s fun. You have friends around to share in the enjoyment of a good day,” Dunn said. “But if you have a bad day, it’s really tough to be in a confined environment. You can’t get out and go for a walk … it’s constantly witnessed by everyone.”

The hardest part was being far away from family and missing events like her sister’s wedding, for which she delivered a toast via video, Dunn said. “I’m glad I was able to be there in that way, but … I just always dreamed of being there to help,” she said.

In this photo provided by the University of Hawaii at Manoa HI-SEAS Human Factors Performance Study, the six scientists exit the dome for the first time in eight months, Saturday, June 13, 2015. (Ryan Ogliore/University of Hawaii at Manoa via AP)

The first thing crew members did when they emerged from the dome was to chow down on foods they’d been craving—juicy watermelon, deviled eggs, peaches and croissants, a step up from the freeze-dried chili they’d been eating.

Next on Dunn’s list: going for a swim. Showers in the isolated environment were limited to six minutes per week, she said.

“To be able to just submerge myself in water for as long as I want, to feel the sun, will be amazing,” Dunn said. “I feel like a ghost.”

In this May 21, 2015, photo from the University of Hawaii at Manoa HI-SEAS Human Factors Performance Study, crew member Sophie Milam conducts a research project outside the dome. (Martha Lenio/University of Hawaii at Manoa via AP)
This March 10, 2015, photo shows the dome on the slopes of Mauna Loa. (Neil Scheibelhut/University of Hawaii at Manoa via AP)

LHC Season 2: First physics at 13 TeV to start tomorrow

Source: http://home.web.cern.ch/about/updates/2015/06/lhc-season-2-first-physics-13-tev-start-tomorrow

In the early morning of Wednesday 3 June, the Large Hadron Collider (LHC) at CERN is set to start delivering physics data to its experiments for the first time in 27 months.

After nearly two years of maintenance and repair, as well as several months of re-commissioning, the experiments at the world’s largest particle accelerator are now ready to take data at the unprecedented energy of 13 teraelectronvolts (TeV) – almost double the collision energy of the LHC’s first, three-year run. Data taking will mark the start of season 2 at the LHC, opening the way to new frontiers in physics.

For all the day’s action, follow our Live Blog “LHC Season 2: New frontiers in physics” where we’ll be posting all the latest from the CERN Control Centre, starting at 7am CEST (6am UK).

The blog will guide you through key moments in the day, from injecting the counter-rotating beams of protons into the LHC and ramping their energy to 6.5 TeV each, to eventual particle collisions and the start of data taking at 13 TeV. A live webcast will also be available through the live blog.

For more about the big questions that the LHC experiments are tackling, check out “New frontiers in physics” and follow the scientists at the forefront of particle physics.

For more about the LHC and its second run, check out “LHC Season 2: Facts & figures” and “LHC Season 2: A stronger machine”.


Link to their live blog: http://run2-13tev.web.cern.ch/

LHC Season 2: CERN computing ready for data torrent

Source: http://home.web.cern.ch/about/updates/2015/06/lhc-season-2-cern-computing-ready-data-torrent

Racks of servers at the CERN Data Centre (Image: CERN)

This week, the experiments at the Large Hadron Collider (LHC) will start taking data at the new energy frontier of 13 teraelectronvolts (TeV) – nearly double the energy of collisions in the LHC’s first three-year run. These collisions, which will occur up to 1 billion times every second, will send showers of particles through the detectors.

With every second of run-time, gigabytes of data will come pouring into the CERN Data Centre to be stored, sorted and shared with physicists worldwide. To cope with this massive influx of Run 2 data, the CERN computing teams focused on three areas: speed, capacity and reliability.

“During Run 1, we were storing 1 gigabyte-per-second, with the occasional peak of 6 gigabytes-per-second,” says Alberto Pace, who leads the Data and Storage Services group within the IT Department. “For Run 2, what was once our ‘peak’ will now be considered average, and we believe we could even go up to 10 gigabytes-per-second if needed.”

At CERN, most of the data is archived on magnetic tape using the CERN Advanced Storage system (CASTOR), while the rest is stored on the EOS disk pool system – a system optimized for fast analysis access by many concurrent users. Magnetic tape may seem an old-fashioned technology, but it is actually a robust storage medium, able to hold huge volumes of data, which makes it ideal for long-term preservation. The computing teams have improved the software of the CASTOR tape storage system, allowing CERN’s tape drives and libraries to be used more efficiently, with no lag times or delays. This allows the Data Centre to increase the rate at which data can be moved to tape and read back.

Reducing the risk of data loss – and the massive storage burden associated with this – was another challenge to address for Run 2. The computing teams introduced a data ‘chunking’ option in the EOS storage disk system. This splits the data into segments and enables recently acquired data to be kept on disk for quick access. “This allowed our online total data capacity to be increased significantly,” Pace continues. “We have 140 petabytes of raw disk space available for Run 2 data, divided between the CERN Data Centre and the Wigner Data Centre in Budapest, Hungary. This translates to about 60 petabytes of storage, including back-up files.”

140 petabytes (equal to 140 million gigabytes) is a very large number indeed – equivalent to over a millennium of full HD-quality movies.
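
As a rough sanity check on that comparison (assuming full-HD video at about 35 Mbit/s, roughly 15 gigabytes per hour – a bitrate not given in the article):

```latex
\frac{1.4\times10^{8}\ \text{GB}}{15\ \text{GB/hour}}
  \approx 9.3\times10^{6}\ \text{hours}
  \approx \frac{9.3\times10^{6}\ \text{hours}}{8766\ \text{hours/year}}
  \approx 1060\ \text{years},
```

which indeed comes out at just over a millennium of continuous playback.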

In addition to the regular “replication” approach – whereby a duplicate copy is kept of all data – experiments will now have an option to scatter the data across multiple disks. This “chunking” approach breaks the data into pieces, and reconstruction algorithms ensure that content will not be lost even if multiple disks fail. This not only decreases the probability of data loss, but also cuts in half the space needed for back-up storage. Finally, the EOS system has also been further improved to achieve the goal of more than 99.5% availability for the duration of Run 2.
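
The article does not say which reconstruction code EOS uses, so the sketch below illustrates the general “chunking” idea with the simplest possible erasure code – a single XOR parity chunk, which survives exactly one lost disk (production codes such as Reed-Solomon tolerate several):

```python
# Illustrative sketch only, not EOS code: split data into k chunks plus one
# XOR parity chunk, then rebuild any single lost chunk from the survivors.

def split_with_parity(data: bytes, k: int):
    """Split `data` into k equal chunks and compute an XOR parity chunk."""
    size = -(-len(data) // k)              # ceiling division
    data = data.ljust(size * k, b"\0")     # zero-pad to a multiple of k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytes(size)                   # start from `size` zero bytes
    for chunk in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, chunk))
    return chunks, parity

def rebuild(chunks, parity, lost: int):
    """Reconstruct the chunk at index `lost` by XOR-ing all survivors."""
    acc = parity
    for i, chunk in enumerate(chunks):
        if i != lost:
            acc = bytes(a ^ b for a, b in zip(acc, chunk))
    return acc

chunks, parity = split_with_parity(b"collision event data", k=4)
assert rebuild(chunks, parity, lost=2) == chunks[2]   # "disk 2" failed
```

One parity chunk per k data chunks costs far less than keeping a full second copy, which is the sense in which chunking halves the space needed for back-up storage.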

From quicker storage speeds to new storage solutions, CERN is well-prepared for all of the fantastic challenges of Run 2.

Supernova space bullets could have seeded Earth’s iron core

Source: http://www.newscientist.com/article/dn27570-supernova-space-bullets-could-have-seeded-earths-iron-core.html#.VV5OCk9Vikq

Supernova shoot-em-ups could be responsible for Earth’s iron core. An analysis suggests that certain stars fire off massive iron bullets when they die.

Stars fuse the hydrogen and helium present in the early universe into heavier elements, like iron. When stars reach the end of their lives, they explode in supernovae, littering these elements throughout space where they can eventually form planets.

A particular kind of supernova called a type Ia, produced when a dense stellar corpse called a white dwarf explodes, seems to be responsible for most of the iron on Earth.

These stars also play an important role in our understanding of distance in the universe. That’s because the white dwarfs only blow up when they reach a certain, fixed mass, so we can use the light of these explosions as a “standard candle” to tell how far away they are.

But astronomers still haven’t figured out exactly what causes white dwarfs to hit this critical limit.

“Most of our iron on Earth comes from supernovae of this kind,” says Noam Soker of the Technion Israel Institute of Technology in Haifa. “It is embarrassing that we still don’t know what brings these white dwarfs to explode.”

Lumpy stars

When a star goes supernova, it leaves behind a cloud of ejected material called a supernova remnant. This remnant should be spherical – but some have extra bumps that could offer a clue to the supernova’s origin.

Now Soker and his colleague Danny Tsebrenko say that massive clumps of iron produced within a white dwarf in the process of going supernova could be punching through the remnant like bullets, creating these bumps. The iron bullets aren’t solid chunks of metal, but a more diffuse cloud of molecules.

Some supernova remnants have two bumps on opposite sides, which the researchers call “ears”.

The iron bullets form along the rotation axis of an exploding white dwarf, firing out at either end, says Soker. A white dwarf can only be spinning fast enough to allow this if it is the result of two smaller dwarfs merging, he adds.

The bullets could also shed light on our origins. Soker and Tsebrenko estimate that these clouds of iron would be several times the mass of Jupiter. They would spread and could eventually seed dust clouds with iron that would go on to form stars and planets, providing an origin for Earth’s core, says Soker.

Reference: arxiv.org/abs/1505.02034v1

First images of collisions at 13 TeV

Source: http://home.web.cern.ch/about/updates/2015/05/first-images-collisions-13-tev

Test collisions continue today at 13 TeV in the Large Hadron Collider (LHC) to prepare the detectors ALICE, ATLAS, CMS, LHCb, LHCf, MOEDAL and TOTEM for data-taking, planned for early June (Image: LHC page 1)

Last night, protons collided in the Large Hadron Collider (LHC) at the record-breaking energy of 13 TeV for the first time. These test collisions were to set up systems that protect the machine and detectors from particles that stray from the edges of the beam.

A key part of the process was the set-up of the collimators. These devices, which absorb stray particles, were adjusted in colliding-beam conditions. This set-up will give the accelerator team the data they need to ensure that the LHC magnets and detectors are fully protected.

Today the tests continue. Colliding beams will stay in the LHC for several hours. The LHC Operations team will continue to monitor beam quality and optimisation of the set-up.

This is an important part of the process that will allow the experimental teams running the detectors ALICE, ATLAS, CMS, LHCb, LHCf, MOEDAL and TOTEM to switch on their experiments fully. Data taking and the start of the LHC’s second run is planned for early June.

Protons collide at 13 TeV sending showers of particles through the ALICE detector (Image: ALICE)
Protons collide at 13 TeV sending showers of particles through the CMS detector (Image: CMS)
Protons collide at 13 TeV sending showers of particles through the ATLAS detector (Image: ATLAS)
Protons collide at 13 TeV sending showers of particles through the LHCb detector (Image: LHCb)
Protons collide at 13 TeV sending showers of particles through the TOTEM detector (Image: TOTEM)

Quantum physics: What is really real?

Source: http://www.nature.com/news/quantum-physics-what-is-really-real-1.17585

A wave of experiments is probing the root of quantum weirdness.

Zeeya Merali

20 May 2015

Dan Harris/MIT

An experiment showing that oil droplets can be propelled across a fluid bath by the waves they generate has prompted physicists to reconsider the idea that something similar allows particles to behave like waves.

Owen Maroney worries that physicists have spent the better part of a century engaging in fraud.

Ever since they invented quantum theory in the early 1900s, explains Maroney, who is himself a physicist at the University of Oxford, UK, they have been talking about how strange it is — how it allows particles and atoms to move in many directions at once, for example, or to spin clockwise and anticlockwise simultaneously. But talk is not proof, says Maroney. “If we tell the public that quantum theory is weird, we better go out and test that it’s actually true,” he says. “Otherwise we’re not doing science, we’re just explaining some funny squiggles on a blackboard.”

It is this sentiment that has led Maroney and others to develop a new series of experiments to uncover the nature of the wavefunction — the mysterious entity that lies at the heart of quantum weirdness. On paper, the wavefunction is simply a mathematical object that physicists denote with the Greek letter psi (Ψ) — one of Maroney’s funny squiggles — and use to describe a particle’s quantum behaviour. Depending on the experiment, the wavefunction allows them to calculate the probability of observing an electron at any particular location, or the chances that its spin is oriented up or down. But the mathematics shed no light on what a wavefunction truly is. Is it a physical thing? Or just a calculating tool for handling an observer’s ignorance about the world?

The tests being used to work that out are extremely subtle, and have yet to produce a definitive answer. But researchers are optimistic that a resolution is close. If so, they will finally be able to answer questions that have lingered for decades. Can a particle really be in many places at the same time? Is the Universe continually dividing itself into parallel worlds, each with an alternative version of ourselves? Is there such a thing as an objective reality at all?

“These are the kinds of questions that everybody has asked at some point,” says Alessandro Fedrizzi, a physicist at the University of Queensland in Brisbane, Australia. “What is it that is really real?”

Ignorance is bliss

From a practical perspective, the wavefunction’s nature does not matter. The textbook Copenhagen interpretation of quantum theory, developed in the 1920s mainly by physicists Niels Bohr and Werner Heisenberg, treats the wavefunction as nothing more than a tool for predicting the results of observations, and cautions physicists not to concern themselves with what reality looks like underneath. “You can’t blame most physicists for following this ‘shut up and calculate’ ethos because it has led to tremendous developments in nuclear physics, atomic physics, solid-state physics and particle physics,” says Jean Bricmont, a statistical physicist at the Catholic University of Louvain in Belgium. “So people say, let’s not worry about the big questions.”

But some physicists worried anyway. By the 1930s, Albert Einstein had rejected the Copenhagen interpretation — not least because it allowed two particles to entangle their wavefunctions, producing a situation in which measurements on one could instantaneously determine the state of the other even if the particles were separated by vast distances. Rather than accept such “spooky action at a distance”, Einstein preferred to believe that the particles’ wavefunctions were incomplete. Perhaps, he suggested, the particles have some kind of ‘hidden variables’ that determine the outcome of the measurement, but that quantum theories do not capture.

Experiments since then have shown that this spooky action at a distance is quite real, which rules out the particular version of hidden variables that Einstein advocated. But that has not stopped other physicists from coming up with interpretations of their own. These interpretations fall into two broad camps. There are those that agree with Einstein that the wavefunction represents our ignorance — what philosophers call psi-epistemic models. And there are those that view the wavefunction as a real entity — psi-ontic models.

To appreciate the difference, consider a thought experiment that Schrödinger described in a 1935 letter to Einstein. Imagine that a cat is enclosed in a steel box. And imagine that the box also contains a sample of radioactive material that has a 50% probability of emitting a decay product in one hour, along with an apparatus that will poison the cat if it detects such a decay. Because radioactive decay is a quantum event, wrote Schrödinger, the rules of quantum theory state that, at the end of the hour, the wavefunction for the box’s interior must be an equal mixture of live cat and dead cat.
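
In the usual textbook notation (a standard rendering, not wording from Schrödinger’s letter), that equal mixture is the superposition

```latex
|\Psi\rangle = \frac{1}{\sqrt{2}}\bigl(|\text{alive}\rangle + |\text{dead}\rangle\bigr),
```

where the squared amplitude of each term, $\left|1/\sqrt{2}\right|^{2} = 1/2$, reproduces the 50% probability of finding each outcome when the box is opened.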

“Crudely speaking,” says Fedrizzi, “in a psi-epistemic model the cat in the box is either alive or it’s dead and we just don’t know because the box is closed.” But most psi-ontic models agree with the Copenhagen interpretation: until an observer opens the box and looks, the cat is both alive and dead.

But this is where the debate gets stuck. Which of quantum theory’s many interpretations — if any — is correct? That is a tough question to answer experimentally, because the differences between the models are subtle: to be viable, they have to predict essentially the same quantum phenomena as the very successful Copenhagen interpretation. Andrew White, a physicist at the University of Queensland, says that for most of his 20-year career in quantum technologies “the problem was like a giant smooth mountain with no footholds, no way to attack it”.

That changed in 2011, with the publication of a theorem about quantum measurements that seemed to rule out the wavefunction-as-ignorance models. On closer inspection, however, the theorem turned out to leave enough wiggle room for them to survive. Nonetheless, it inspired physicists to think seriously about ways to settle the debate by actually testing the reality of the wavefunction. Maroney had already devised an experiment that should work in principle, and he and others soon found ways to make it work in practice. The experiment was carried out last year by Fedrizzi, White and others.

To illustrate the idea behind the test, imagine two stacks of playing cards. One contains only red cards; the other contains only aces. “You’re given a card and asked to identify which deck it came from,” says Martin Ringbauer, a physicist also at the University of Queensland. If it is a red ace, he says, “there’s an overlap and you won’t be able to say where it came from”. But if you know how many of each type of card is in each deck, you can at least calculate how often such ambiguous situations will arise.
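
With concrete numbers, the card analogy can be made quantitative. Here is a minimal sketch, assuming a 26-card all-red deck and a 4-card all-ace deck, so that the two red aces are the only ambiguous cards:

```python
import random

# Hypothetical decks for the analogy: all 26 red cards vs. all 4 aces.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
red_deck = [(r, s) for r in ranks for s in ("hearts", "diamonds")]
ace_deck = [("A", s) for s in ("hearts", "diamonds", "clubs", "spades")]
ambiguous = set(red_deck) & set(ace_deck)   # the two red aces

# Draw from a randomly chosen deck and count draws that cannot be attributed.
trials = 100_000
hits = sum(random.choice(random.choice((red_deck, ace_deck))) in ambiguous
           for _ in range(trials))
print(f"ambiguous draws: {hits / trials:.3f}")
# Expected rate: 0.5*(2/26) + 0.5*(2/4), about 0.288
```

The quantum tests play the same game with overlapping photon preparations: if the measured overlap exceeds what any such bookkeeping of ignorance allows, the epistemic picture fails.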

Out on a limb

A similar ambiguity occurs in quantum systems. It is not always possible for a single measurement in the lab to distinguish how a photon is polarized, for example. “In real life, it’s pretty easy to tell west from slightly south of west, but in quantum systems, it’s not that simple,” says White. According to the standard Copenhagen interpretation, there is no point in asking what the polarization is because the question does not have an answer — or at least, not until another measurement can determine that answer precisely. But according to the wavefunction-as-ignorance models, the question is perfectly meaningful; it is just that the experimenters — like the card-game player — do not have enough information from that one measurement to answer. As with the cards, it is possible to estimate how much ambiguity can be explained by such ignorance, and compare it with the larger amount of ambiguity allowed by standard theory.

That is essentially what Fedrizzi’s team tested. The group measured polarization and other features in a beam of photons and found a level of overlap that could not be explained by the ignorance models. The results support the alternative view that, if objective reality exists, then the wavefunction is real. “It’s really impressive that the team was able to address a profound issue, with what’s actually a very simple experiment,” says Andrea Alberti, a physicist at the University of Bonn in Germany.

The conclusion is still not ironclad, however: because the detectors picked up only about one-fifth of the photons used in the test, the team had to assume that the lost photons were behaving in the same way. That is a big assumption, and the group is currently working on closing the sampling gap to produce a definitive result. In the meantime, Maroney’s team at Oxford is collaborating with a group at the University of New South Wales in Australia to perform similar tests with ions, which are easier to track than photons. “Within the next six months we could have a watertight version of this experiment,” says Maroney.

But even if their efforts succeed and the wavefunction-as-reality models are favoured, those models come in a variety of flavours — and experimenters will still have to pick them apart.

One of the earliest such interpretations was set out in the 1920s by French physicist Louis de Broglie, and expanded in the 1950s by US physicist David Bohm. According to de Broglie–Bohm models, particles have definite locations and properties, but are guided by some kind of ‘pilot wave’ that is often identified with the wavefunction. This would explain the double-slit experiment because the pilot wave would be able to travel through both slits and produce an interference pattern on the far side, even though the electron it guided would have to pass through one slit or the other.

In 2005, de Broglie–Bohmian mechanics received an experimental boost from an unexpected source. Physicists Emmanuel Fort, now at the Langevin Institute in Paris, and Yves Couder at the University of Paris Diderot gave the students in an undergraduate laboratory class what they thought would be a fairly straightforward task: build an experiment to see how oil droplets falling into a tray filled with oil would coalesce as the tray was vibrated. Much to everyone’s surprise, ripples began to form around the droplets when the tray hit a certain vibration frequency. “The drops were self-propelled — surfing or walking on their own waves,” says Fort. “This was a dual object we were seeing — a particle driven by a wave.”

Since then, Fort and Couder have shown that such waves can guide these ‘walkers’ through the double-slit experiment as predicted by pilot-wave theory, and can mimic other quantum effects, too. This does not prove that pilot waves exist in the quantum realm, cautions Fort. But it does show how an atomic-scale pilot wave might work. “We were told that such effects cannot happen classically,” he says, “and here we are, showing that they do.”

Another set of reality-based models, devised in the 1980s, tries to explain the strikingly different properties of small and large objects. “Why electrons and atoms can be in two different places at the same time, but tables, chairs, people and cats can’t,” says Angelo Bassi, a physicist at the University of Trieste, Italy. Known as ‘collapse models’, these theories postulate that the wavefunctions of individual particles are real, but can spontaneously lose their quantum properties and snap the particle into, say, a single location. The models are set up so that the odds of this happening are infinitesimal for a single particle, so that quantum effects dominate at the atomic scale. But the probability of collapse grows astronomically as particles clump together, so that macroscopic objects lose their quantum features and behave classically.

One way to test this idea is to look for quantum behaviour in larger and larger objects. If standard quantum theory is correct, there is no limit. And physicists have already carried out double-slit interference experiments with large molecules. But if collapse models are correct, then quantum effects will not be apparent above a certain mass. Various groups are planning to search for such a cut-off using cold atoms, molecules, metal clusters and nanoparticles. They hope to see results within a decade. “What’s great about all these kinds of experiments is that we’ll be subjecting quantum theory to high-precision tests, where it’s never been tested before,” says Maroney.

Parallel worlds

One wavefunction-as-reality model is already famous and beloved by science-fiction writers: the many-worlds interpretation developed in the 1950s by Hugh Everett, who was then a graduate student at Princeton University in New Jersey. In the many-worlds picture, the wavefunction governs the evolution of reality so profoundly that whenever a quantum measurement is made, the Universe splits into parallel copies. Open the cat’s box, in other words, and two parallel worlds will branch out — one with a living cat and another containing a corpse.

Distinguishing Everett’s many-worlds interpretation from standard quantum theory is tough because both make exactly the same predictions. But last year, Howard Wiseman at Griffith University in Brisbane and his colleagues proposed a testable multiverse model. Their framework does not contain a wavefunction: particles obey classical rules such as Newton’s laws of motion. The weird effects seen in quantum experiments arise because there is a repulsive force between particles and their clones in parallel universes. “The repulsive force between them sets up ripples that propagate through all of these parallel worlds,” Wiseman says.

Using computer simulations with as many as 41 interacting worlds, they have shown that this model roughly reproduces a number of quantum effects, including the trajectories of particles in the double-slit experiment. The interference pattern becomes closer to that predicted by standard quantum theory as the number of worlds increases. Because the theory predicts different results depending on the number of universes, says Wiseman, it should be possible to devise ways to check whether his multiverse model is right — meaning that there is no wavefunction, and reality is entirely classical.

Because Wiseman’s model does not need a wavefunction, it will remain viable even if future experiments rule out the ignorance models. Also surviving would be models, such as the Copenhagen interpretation, that maintain there is no objective reality — just measurements.

But then, says White, that is the ultimate challenge. Although no one knows how to do it yet, he says, “what would be really exciting is to devise a test for whether there is in fact any objective reality out there at all.”

Astronomers observe a supernova colliding with its companion star

Source: http://phys.org/news/2015-05-astronomers-supernova-colliding-companion-star.html
A still from a simulation of a Type Ia supernova. In the simulation, a Type Ia supernova explodes (dark brown color). The supernova material is ejected outwards at a velocity of about 10,000 km/s, and slams into its companion.

Type Ia supernovae, one of the most dazzling phenomena in the universe, are produced when small dense stars called white dwarfs explode with ferocious intensity. At their peak, these supernovae can outshine an entire galaxy. Although thousands of supernovae of this kind have been found in recent decades, the process by which a white dwarf becomes one has been unclear.

That began to change on May 3, 2014, when a team of Caltech astronomers working on a robotic observing system known as the intermediate Palomar Transient Factory (iPTF)—a multi-institute collaboration led by Shrinivas Kulkarni, the John D. and Catherine T. MacArthur Professor of Astronomy and Planetary Science and director of the Caltech Optical Observatories—discovered a Type Ia supernova, designated iPTF14atg, in nearby galaxy IC831, located 300 million light-years away.

The data that were immediately collected by the iPTF team lend support to one of two competing theories about the origin of white dwarf supernovae, and also suggest the possibility that there are actually two distinct populations of this type of supernova.

The details are outlined in a paper, with Caltech graduate student Yi Cao as lead author, appearing May 21 in the journal Nature.

Type Ia supernovae are known as “standardizable candles” because they allow astronomers to gauge cosmic distances by how dim they appear relative to how bright they actually are. It is like knowing that, from one mile away, a light bulb looks 100 times dimmer than another located only one-tenth of a mile away. This consistency is what made these stellar objects instrumental in measuring the accelerating expansion of the universe in the 1990s, earning three scientists the Nobel Prize in Physics in 2011.
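
The factor of 100 in that example is just the inverse-square law: received flux falls with the square of distance, so

```latex
\frac{F_{\text{near}}}{F_{\text{far}}}
  = \left(\frac{d_{\text{far}}}{d_{\text{near}}}\right)^{2}
  = \left(\frac{1\ \text{mile}}{0.1\ \text{mile}}\right)^{2}
  = 100 .
```

Run in reverse, the same relation lets astronomers convert a Type Ia supernova’s known luminosity and measured apparent brightness into a distance.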

There are two competing origin theories, both starting with the same general scenario: the white dwarf that eventually explodes is one of a pair of stars orbiting around a common center of mass. The interaction between these two stars, the theories say, is responsible for triggering supernova development. What is the nature of that interaction? At this point, the theories diverge.

According to one theory, the so-called double-degenerate model, the companion to the exploding white dwarf is also a white dwarf, and the supernova initiates when the two similar objects merge.

However, in the second theory, called the single-degenerate model, the second star is instead a sunlike star—or even a red giant, a much larger type of star. In this model, the white dwarf’s powerful gravity pulls, or accretes, material from the second star. This process, in turn, increases the temperature and pressure in the center of the white dwarf until a runaway nuclear reaction begins, ending in a dramatic explosion.

The difficulty in determining which model is correct stems from the facts that supernova events are very rare—occurring about once every few centuries in our galaxy—and that the stars involved are very dim before the explosions.

That is where the iPTF comes in. From atop Palomar Mountain in Southern California, where it is mounted on the 48-inch Samuel Oschin Telescope, the project’s fully automated camera optically surveys roughly 1000 square degrees of sky per night (approximately 1/20th of the visible sky above the horizon), looking for transients—objects, including Type Ia supernovae, whose brightness changes over timescales that range from hours to days.
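
Those figures are mutually consistent: the full celestial sphere covers about 41,253 square degrees, roughly half of which is above the horizon at any moment, so

```latex
\frac{1000\ \text{deg}^{2}}{41{,}253\ \text{deg}^{2}/2}
  \approx \frac{1000}{20{,}600}
  \approx \frac{1}{20}.
```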

On May 3, the iPTF took images of IC831 and transmitted the data for analysis to computers at the National Energy Research Scientific Computing Center, where a machine-learning algorithm analyzed the images and prioritized real celestial objects over digital artifacts. Because this first-pass analysis occurred when it was nighttime in the United States but daytime in Europe, the iPTF’s European and Israeli collaborators were the first to sift through the prioritized objects, looking for intriguing signals. After they spotted the possible supernova—a signal that had not been visible in the images taken just the night before—the European and Israeli team alerted their U.S. counterparts, including Caltech graduate student and iPTF team member Yi Cao.

Cao and his colleagues then mobilized both ground- and space-based telescopes, including NASA’s Swift satellite, which observes ultraviolet (UV) light, to take a closer look at the young supernova.

“My colleagues and I spent many sleepless nights on designing our system to search for luminous ultraviolet emission from baby Type Ia supernovae,” says Cao. “As you can imagine, I was fired up when I first saw a bright spot at the location of this supernova in the ultraviolet image. I knew this was likely what we had been hoping for.”

UV radiation has higher energy than visible light, so it is particularly suited to observing very hot objects like supernovae (although such observations are possible only from space, because Earth’s atmosphere and ozone layer absorb almost all of this incoming UV). Swift measured a pulse of UV radiation that declined initially but then rose as the supernova brightened. Because such a pulse is short-lived, it can be missed by surveys that scan the sky less frequently than the iPTF does.
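
Wien’s displacement law shows why hot young ejecta favour the UV. For an illustrative blackbody temperature of 30,000 K (a representative value, not a figure from the study), the emission peaks at

```latex
\lambda_{\text{peak}} = \frac{b}{T}
  = \frac{2.898\times10^{-3}\ \text{m K}}{3.0\times10^{4}\ \text{K}}
  \approx 97\ \text{nm},
```

well short of the roughly 380 nm blue edge of the visible band and squarely in the ultraviolet.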

This observed ultraviolet pulse is consistent with a formation scenario in which the material ejected from a supernova explosion slams into a companion star, generating a shock wave that ignites the surrounding material. In other words, the data are in agreement with the single-degenerate model.

Back in 2010, Daniel Kasen, an associate professor of astronomy and physics at UC Berkeley and Lawrence Berkeley National Laboratory, used theoretical calculations and supercomputer simulations to predict just such a pulse from supernova-companion collisions. “After I made that prediction, a lot of people tried to look for that signature,” Kasen says. “This is the first time that anyone has seen it. It opens up an entirely new way to study the origins of exploding stars.”

According to Kulkarni, the discovery “provides direct evidence for the existence of a companion star in a Type Ia supernova, and demonstrates that at least some Type Ia supernovae originate from the single-degenerate channel.”

Although the data from supernova iPTF14atg support it being made by a single-degenerate system, other Type Ia supernovae may result from double-degenerate systems. In fact, observations in 2011 of SN2011fe, another Type Ia supernova discovered in the nearby galaxy Messier 101 by PTF (the precursor to the iPTF), appeared to rule out the single-degenerate model for that particular supernova. And that means that both theories actually may be valid, says Caltech professor of theoretical astrophysics Sterl Phinney, who was not involved in the research. “The news is that it seems that both sets of theoretical models are right, and there are two very different kinds of Type Ia supernovae.”

“Both rapid discovery of supernovae in their infancy by iPTF, and rapid follow-up by the Swift satellite, were essential to unveil the companion to this exploding white dwarf. Now we have to do this again and again to determine the fractions of Type Ia supernovae akin to different origin theories,” says iPTF team member Mansi Kasliwal, who will join the Caltech astronomy faculty as an assistant professor in September 2015.

The iPTF project is a scientific collaboration between Caltech; Los Alamos National Laboratory; the University of Wisconsin-Milwaukee; the Oskar Klein Centre in Sweden; the Weizmann Institute of Science in Israel; the TANGO Program of the University System of Taiwan; and the Kavli Institute for the Physics and Mathematics of the Universe in Japan.

What happens when Newton’s third law is broken?

Source: http://phys.org/news/2015-05-newton-law-broken.html
In the new experiments, two layers of microparticles levitating at two different heights above an electrode have allowed researchers to investigate the statistical mechanics of nonreciprocal interactions, which violate Newton’s third law. Credit: A. V. Ivlev, et al. CC-BY-3.0

Even if you don’t know it by name, everyone is familiar with Newton’s third law, which states that for every action, there is an equal and opposite reaction. This idea can be seen in many everyday situations, such as when walking, where a person’s foot pushes against the ground, and the ground pushes back with an equal and opposite force. Newton’s third law is also essential for understanding and developing automobiles, airplanes, rockets, boats, and many other technologies.

Even though it is one of the fundamental laws of physics, Newton’s third law can be violated in certain nonequilibrium (out-of-balance) situations. When two objects or particles violate the third law, they are said to have nonreciprocal interactions. Violations can occur when the environment becomes involved in the interaction between the two particles in some way, such as when an environment moves with respect to the two particles. (Of course, Newton’s law still holds for the complete “particles-plus-environment” system.)

Although there have been numerous experiments on particles with nonreciprocal interactions, not as much is known about what’s happening on the microscopic level—the statistical mechanics—of these systems.

In a new paper published in Physical Review X, Alexei Ivlev, et al., have investigated the statistical mechanics of different types of nonreciprocal interactions and discovered some surprising results—such as that extreme temperature gradients can be generated on the particle scale.

“I think the greatest significance of our work is that we rigorously showed that certain classes of essentially nonequilibrium systems can be exactly described in terms of the equilibrium’s statistical mechanics (i.e., one can derive a pseudo-Hamiltonian which describes such systems),” Ivlev, at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, told Phys.org. “One of the most amazing implications is that, for example, one can observe a mixture of two liquids in detailed equilibrium, yet each liquid has its own temperature.”

One example of a system with nonreciprocal interactions that the researchers experimentally demonstrated in their study involves charged microparticles levitating above an electrode in a plasma chamber. The violation of Newton’s third law arises from the fact that the system involves two types of microparticles that levitate at different heights due to their different sizes and densities. The electric field in the chamber drives a vertical plasma flow, like a current in a river, and each charged microparticle focuses the flowing plasma ions downstream, creating a vertical plasma wake behind it.

Although the repulsive forces that occur due to the direct interactions between the two layers of particles are reciprocal, the attractive particle-wake forces between the two layers are not. This is because the wake forces decrease with distance from the electrode, and the layers are levitating at different heights. As a result, the lower layer exerts a larger total force on the upper layer of particles than the upper layer exerts on the lower layer of particles. Consequently, the upper layer has a higher average kinetic energy (and thus a higher temperature) than the lower layer. By tuning the electric field, the researchers could also increase the height difference between the two layers, which further increases the temperature difference.
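
The mechanism can be reproduced in a toy model. The sketch below uses a generic nonreciprocal Langevin pair (not the paper’s plasma-wake force law): particle 1 feels a stronger coupling than particle 2, the thermostats are otherwise identical, and the two time-averaged kinetic energies settle at different effective temperatures:

```python
import numpy as np

# Toy nonreciprocal pair (illustrative only): particle 1 feels -A*(x1 - x2),
# particle 2 feels +B*(x1 - x2). With A != B, Newton's third law is broken.
rng = np.random.default_rng(seed=1)
A, B = 2.0, 1.0                       # asymmetric couplings
gamma, T, dt = 1.0, 1.0, 1e-3         # friction, bath temperature, time step
steps = 500_000

x = np.zeros(2)
v = np.zeros(2)
v2_sum = np.zeros(2)
for _ in range(steps):
    r = x[0] - x[1]
    force = np.array([-A * r, B * r])          # nonreciprocal: |F1| != |F2|
    kick = rng.normal(0.0, np.sqrt(2 * gamma * T * dt), size=2)
    v += (force - gamma * v) * dt + kick       # Euler-Maruyama step
    x += v * dt
    v2_sum += v ** 2

# With unit mass, <v^2> acts as an effective temperature for each particle;
# the more strongly coupled particle 1 comes out hotter, mirroring the
# hotter upper layer in the plasma experiment.
print("effective temperatures:", v2_sum / steps)
```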

“Usually, I’m rather conservative when thinking on what sort of ‘immediate’ potential application a particular discovery (at least, in physics) might have,” Ivlev said. “However, what I am quite confident of is that our results provide an important step towards better understanding of certain kinds of nonequilibrium systems. There are numerous examples of very different nonequilibrium systems where the action-reaction symmetry is broken for interparticle interactions, but we show that one can nevertheless find an underlying symmetry which allows us to describe such systems in terms of the textbook (equilibrium) statistical mechanics.”

While the plasma experiment is an example of action-reaction symmetry breaking in a 2D system, the same symmetry breaking can occur in 3D systems as well. The scientists expect that both types of systems exhibit unusual and remarkable behavior, and they hope to investigate these systems further in the future.

“Our current research is focused on several topics in this direction,” Ivlev said. “One is the effect of the action-reaction in the overdamped colloidal suspensions, where the nonreciprocal interactions lead to a remarkably rich variety of self-organization phenomena (dynamical clustering, pattern formation, phase separation, etc.). Results of this research may lead to several interesting applications. Another topic is purely fundamental: how one can describe a much broader class of ‘nearly Hamiltonian’ nonreciprocal systems, whose interactions almost match with those described by a pseudo-Hamiltonian? Hopefully, we can report on these results very soon.”

Physicist finds mysterious anti-electron clouds inside thunderstorm

Source: http://phys.org/news/2015-05-physicist-mysterious-anti-electron-clouds-thunderstorm.html
Lightning and severe weather are two of the most visible products of thunderstorms. However, scientists are discovering that the storms also contain a fascinating variety of strange phenomena, including powerful gamma-ray flashes and puzzling clouds of antimatter.

A terrifying few moments flying into the top of an active thunderstorm in a research aircraft has led to an unexpected discovery that could help explain the longstanding mystery of how lightning gets initiated inside a thunderstorm.

University of New Hampshire physicist Joseph Dwyer and lightning science colleagues from the University of California at Santa Cruz and Florida Tech describe the turbulent encounter and discovery in a paper to be published in the Journal of Plasma Physics.

In August 2009, Dwyer and colleagues were aboard a National Center for Atmospheric Research Gulfstream V when it inadvertently flew into an extremely violent thunderstorm—and, it turned out, through a large cloud of positrons, the antimatter opposite of electrons, that should not have been there.

To encounter a cloud of positrons without other associated physical phenomena such as energetic gamma rays was completely unexpected, thoroughly perplexing and contrary to currently understood physics.

“The fact that, apparently out of nowhere, the number of positrons around us suddenly increased by more than a factor of 10 and formed a cloud around the aircraft is very hard to understand. We really have no good explanation for it,” says Dwyer, a lightning expert and the UNH Peter T. Paul Chair in Space Sciences at the Institute for the Study of Earth, Oceans, and Space.

It is known that thunderstorms can sometimes make flashes of energetic gamma rays, which may produce pairs of electrons and positrons when they interact with air. But the appearance of positrons should then coincide with a large increase in the number of gamma rays.

“We should have seen bright gamma-ray emissions along with the positrons,” Dwyer says. “But in our observations, we first saw a positron cloud, then another positron cloud about seven kilometers away and then we saw a bright gamma-ray glow afterwards. So it’s all not making a whole lot of sense.”

Adds coauthor David Smith of UC Santa Cruz, “We expected the thunderstorm to make some forms of radiation but not this. We don’t even know whether it’s something nature can do on its own or only happens when you toss an airplane into the mix.”

The physical world is filled with normal matter and antimatter. For every normal particle there’s an antiparticle, such as an electron and its associated anti-particle, called the positron, which, when brought together, annihilate each other in a flash of gamma rays. It is, Dwyer points out, the very same process that is supposed to power Star Trek’s Starship Enterprise.
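
The energy scale of that annihilation flash is fixed by the electron’s rest mass: a positron and an electron annihilating at rest yield two gamma-ray photons, each carrying

```latex
E_{\gamma} = m_{e}c^{2}
  = (9.11\times10^{-31}\ \text{kg})\,(3.00\times10^{8}\ \text{m/s})^{2}
  \approx 8.2\times10^{-14}\ \text{J}
  \approx 511\ \text{keV},
```

the characteristic gamma-ray line that instruments look for as a positron signature.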

Having boldly gone where few people should, Dwyer says the experience inside the belly of the beast provides further insight into the bizarre and largely unknown world of thunderstorms—an alien world of gamma rays, high-energy particles accelerated to nearly the speed of light and strange clouds of antimatter positrons.

One possible explanation for the sudden appearance of positrons is that the aircraft itself dramatically influenced the electrical environment of the thunderstorm but that, Dwyer says, would be very surprising. It’s also possible the researchers were detecting a kind of exotic electrical discharge inside the thunderstorm that involves positrons.

“This is the idea of ‘dark lightning,’ which makes a lot of positrons,” says Dwyer. “In detecting the positrons, it’s possible we were seeing sort of the fingerprint of dark lightning. It’s possible, but none of the explanations are totally satisfying.”

Dark lightning is an exotic type of electrical discharge within thunderstorms and is an alternative to normal lightning. In dark lightning, high-energy particles are accelerated and produce gamma rays, which help discharge the electric field.

Says Dwyer, “We really don’t understand how lightning gets started very well because we don’t understand the electrical environment of thunderstorms. This positron phenomenon could be telling us something new about how thunderstorms charge up and make lightning, but our finding definitely complicates things because it doesn’t fit into the picture that was developing.”

Physicists Are Philosophers, Too

Source: http://www.scientificamerican.com/article/physicists-are-philosophers-too/

By Victor J. Stenger, James A. Lindsay and Peter Boghossian

An engraving depicting a man pondering the nature of the universe

The ongoing feud between physicists and philosophers cuts to the heart of what science can tell us about the nature of reality.

Editor’s Note: Shortly before his death last August at the age of 79, the noted physicist and public intellectual Victor Stenger worked with two co-authors to pen an article for Scientific American. In it Stenger and co-authors address the latest eruption of a long-standing historic feud, an argument between physicists and philosophers about the nature of their disciplines and the limits of science. Can instruments and experiments (or pure reason and theoretical models) ever reveal the ultimate nature of reality? Does the modern triumph of physics make philosophy obsolete? What philosophy, if any, could modern theoretical physicists be said to possess? Stenger and his co-authors introduce and address all these profound questions in this thoughtful essay and seek to mend the growing schism between these two great schools of thought. When physicists make claims about the universe, Stenger writes, they are also engaging in a grand philosophical tradition that dates back thousands of years. Inescapably, physicists are philosophers, too. This article, Stenger’s last, appears in full below.

In April 2012 theoretical physicist, cosmologist and best-selling author Lawrence Krauss was pressed hard in an interview with Ross Andersen for The Atlantic titled “Has Physics Made Philosophy and Religion Obsolete?” Krauss’s response to this question dismayed philosophers because he remarked, “philosophy used to be a field that had content,” to which he later added,

“Philosophy is a field that, unfortunately, reminds me of that old Woody Allen joke, ‘those that can’t do, teach, and those that can’t teach, teach gym.’ And the worst part of philosophy is the philosophy of science; the only people, as far as I can tell, that read work by philosophers of science are other philosophers of science. It has no impact on physics whatsoever, and I doubt that other philosophers read it because it’s fairly technical. And so it’s really hard to understand what justifies it. And so I’d say that this tension occurs because people in philosophy feel threatened—and they have every right to feel threatened, because science progresses and philosophy doesn’t.”

Later that year Krauss had a friendly discussion with philosopher Julian Baggini in The Observer, an online magazine from The Guardian. Although showing great respect for science and agreeing with Krauss and most other physicists and cosmologists that there isn’t “more stuff in the universe than the stuff of physical science,” Baggini complained that Krauss seems to share “some of science’s imperialist ambitions.” Baggini voices the common opinion that “there are some issues of human existence that just aren’t scientific. I cannot see how mere facts could ever settle the issue of what is morally right or wrong, for example.”

Krauss does not see it quite that way. Rather, he distinguishes between “questions that are answerable and those that are not,” and the answerable ones mostly fall into the “domain of empirical knowledge, aka science.” As for moral questions, Krauss claims that they can only be answered by “reason…based on empirical evidence.” Baggini cannot see how any “factual discovery could ever settle a question of right and wrong.”

Nevertheless, Krauss expresses sympathy with Baggini’s position, saying, “I do think philosophical discussion can inform decision-making in many important ways—by allowing reflections on facts, but that ultimately the only source of facts is via empirical exploration.”

Noted philosophers were upset with The Atlantic interview, including Daniel Dennett of Tufts University, who wrote to Krauss. As a result, Krauss penned a more careful explication of his position that was published in Scientific American in 2014 under the title “The Consolation of Philosophy.” There he was more generous to philosophy’s contribution to the enrichment of his own thinking, although he conceded little of his basic position:

“As a practicing physicist…I, and most of the colleagues with whom I have discussed this matter, have found that philosophical speculations about physics and the nature of science are not particularly useful, and have had little or no impact upon progress in my field. Even in several areas associated with what one can rightfully call the philosophy of science I have found the reflections of physicists to be more useful.”

Krauss is not alone among physicists in his disdain for philosophy. In September 2010 physicists Stephen Hawking and Leonard Mlodinow published a shot heard round the world—and not just the academic world. On the first page of their book, The Grand Design, they wrote: “Philosophy is dead” because “philosophers have not kept up with modern developments in science, particularly physics. Scientists have become the bearers of the torch of discovery in our quest for knowledge.”

The questions that philosophy is no longer capable of handling (if it ever was) include: How does the universe behave? What is the nature of reality? Where did all this come from? Did the universe need a creator? According to Hawking and Mlodinow, only scientists—not philosophers—can provide the answers.

Famous astrophysicist and science popularizer Neil deGrasse Tyson has joined the debate. In an interview on the Nerdist podcast in May 2014 Tyson remarked, “My concern here is that the philosophers believe they are actually asking deep questions about nature. And to the scientist it’s, ‘What are you doing? Why are you concerning yourself with the meaning of meaning?’” His overall message was clear: science moves on; philosophy stays mired, useless and effectively dead.

Needless to say, Tyson also has been heavily criticized for his views. His position can be greatly clarified by viewing the video of his appearance in a forum at Howard University in 2010, where he was on the stage with biologist Richard Dawkins. Tyson’s argument is straightforward and is the same as expressed by Krauss: Philosophers from the time of Plato and Aristotle have claimed that knowledge about the world can be obtained by pure thought alone. As Tyson explained, such knowledge cannot be obtained by someone sitting back in an armchair. It can only be gained by observation and experiment. Richard Feynman had once expressed a similar opinion about “armchair philosophers.” Dawkins agreed with Tyson, pointing out that natural selection was discovered by two naturalists, Charles Darwin and Alfred Russel Wallace, who worked in the field gathering data.

What we are seeing here is not a recent phenomenon. In his 1992 book Dreams of a Final Theory, Nobel laureate Steven Weinberg has a whole chapter entitled “Against Philosophy.” Referring to the famous observation of Nobel laureate physicist Eugene Wigner about “the unreasonable effectiveness of mathematics,” Weinberg puzzles about “the unreasonable ineffectiveness of philosophy.”

Weinberg does not dismiss all of philosophy, just the philosophy of science, noting that its arcane discussions interest few scientists. He points out the problems with the philosophy of positivism, although he agrees that it played a role in the early development of both relativity and quantum mechanics. He argues that positivism did more harm than good, however, writing, “The positivist concentration on observables like particle positions and momenta has stood in the way of a ‘realist’ interpretation of quantum mechanics, in which the wave function is the representative of physical reality.”

Perhaps the most influential positivist was late 19th-century philosopher and physicist Ernst Mach, who refused to accept the atomic model of matter because he could not see atoms. Today we can see atoms with a scanning tunneling microscope but our models still contain unseen objects such as quarks. Philosophers as well as physicists no longer take positivism seriously, and so it has no remaining influence on physics, good or bad.

Nevertheless, most physicists would agree with Krauss and Tyson that observation is the only reliable source of knowledge about the natural world. Some, but not all, incline toward instrumentalism, in which theories are merely conceptual tools for classifying, systematizing and predicting observational statements. Those conceptual tools may include nonobservable objects such as quarks.

Until very recently in history no distinction was made between physics and natural philosophy. Thales of Miletus (circa 624–546 B.C.) is generally regarded as the first physicist as well as the first philosopher of the Western tradition. He sought natural explanations for phenomena that made no reference to mythology. For example, he explained earthquakes to be the result of Earth resting on water and being rocked by waves. He reasoned this from observation, not pure thought: Land is surrounded by water and boats on water are seen to rock. Although Thales’ explanation for earthquakes was not correct, it was still an improvement over the mythology that they are caused by the god Poseidon striking the ground with his trident.

Thales is famous for predicting an eclipse of the sun that modern astronomers calculate occurred over Asia Minor on May 28, 585 B.C. Most historians today, however, doubt the truth of this tale. Thales’ most significant contribution was to propose that all material substances are composed of a single elementary constituent—namely, water. Although he was (not unreasonably) wrong about water being elementary, Thales’ proposal represents the first recorded attempt, at least in the West, to explain the nature of matter without the invocation of invisible spirits.

Thales and other Ionian philosophers who followed espoused a view of reality now called material monism in which everything is matter and nothing else. Today this remains the prevailing view of physicists, who find no need to introduce supernatural elements into their models, which successfully describe all their observations to date.

The rift to which Tyson was referring formed when physics and natural philosophy began to diverge into separate disciplines in the 17th century after Galileo and Newton introduced the principles that describe the motion of bodies. Newton was able to derive from first principles the laws of planetary motion that had been discovered earlier by Kepler. The successful prediction of the return of Halley’s Comet in 1759 demonstrated the great power of the new science for all to see.

The success of Newtonian physics opened up the prospect for a philosophical stance that became known as the clockwork universe, or alternatively, the Newtonian world machine. According to this scheme, the laws of mechanics determine everything that happens in the material world. In particular, there is no place for a god who plays an active role in the universe. As shown by the French mathematician, astronomer and physicist Pierre-Simon Laplace, Newton’s laws were in themselves sufficient to explain the movement of the planets throughout previous history. This led him to propose a radical notion that Newton had rejected: Nothing besides physics is needed to understand the physical universe.

Whereas the clockwork universe has been invalidated by the Heisenberg uncertainty principle of quantum mechanics, quantum mechanics remains devilishly hard to interpret philosophically. Rather than say physics “understands” the universe, it is more accurate to say that the models of physics remain sufficient to describe the material world as we observe it to be with our eyes and instruments.

In the early part of the 20th century almost all the famous physicists of the era—Albert Einstein, Niels Bohr, Erwin Schrödinger, Werner Heisenberg, Max Born, among others—considered the philosophical ramifications of their revolutionary discoveries in relativity and quantum mechanics. After World War II, however, the new generation of prominent figures in physics—Richard Feynman, Murray Gell-Mann, Steven Weinberg, Sheldon Glashow and others—found such musings unproductive, and most physicists (there were exceptions in both eras) followed their lead. But the new generation still went ahead and adopted philosophical doctrines, or at least spoke in philosophical terms, without admitting it to themselves.

For example, when Weinberg promotes a “realist” interpretation of quantum mechanics, in which “the wave function is the representative of physical reality,” he is implying that the artifacts theorists include in their models, such as quantum fields, are the ultimate ingredients of reality. In a 2012 Scientific American article theoretical physicist David Tong goes even further than Weinberg in arguing that the particles we actually observe in experiments are illusions and those physicists who say they are fundamental are disingenuous:

“Physicists routinely teach that the building blocks of nature are discrete particles such as the electron or quark. That is a lie. The building blocks of our theories are not particles but fields: continuous, fluidlike objects spread throughout space.”

This view is explicitly philosophical, and accepting it uncritically makes for bad philosophical thinking. Weinberg and Tong, in fact, are expressing a platonic view of reality commonly held by many theoretical physicists and mathematicians. They are taking their equations and models as existing in one-to-one correspondence with the ultimate nature of reality.

In the reputable online Stanford Encyclopedia of Philosophy, Mark Balaguer defines platonism as follows:

“Platonism is the view that there exist [in ultimate reality] such things as abstract objects—where an abstract object is an object that does not exist in space or time and which is therefore entirely nonphysical and nonmental. Platonism in this sense is a contemporary view. It is obviously related to the views of Plato in important ways but it is not entirely clear that Plato endorsed this view as it is defined here. In order to remain neutral on this question, the term ‘platonism’ is spelled with a lower-case ‘p.’”

We will use platonism with a lower-case “p” here to refer to the belief that the objects within the models of theoretical physics constitute elements of reality. Unlike Platonism with a capital “P,” however, these models are not based on pure thought but are fashioned to describe and predict observations.

Many physicists have uncritically adopted platonic realism as their personal interpretation of the meaning of physics. This is not inconsequential, because it associates a reality that lies beyond the senses with the cognitive tools humans use to describe observations.

In order to test their models all physicists assume that the elements of these models correspond in some way to reality. But those models are compared with the data that flow from particle detectors on the floors of accelerator labs or at the foci of telescopes (photons are particles, too). It is data—not theory—that decides if a particular model corresponds in some way to reality. If the model fails to fit the data, then it certainly has no connection with reality. If it fits the data, then it likely has some connection. But what is that connection? Models are squiggles on the whiteboards in the theory section of the physics building. Those squiggles are easily erased; the data can’t be.

In his Scientific American article Krauss reveals traces of platonic thinking in his personal philosophy of physics, writing:

“There is a class of philosophers, some theologically inspired, who object to the very fact that scientists might presume to address any version of this fundamental ontological issue. Recently one review of my book [A Universe from Nothing] by such a philosopher…. This author claimed with apparent authority (surprising because the author apparently has some background in physics) something that is simply wrong: that the laws of physics can never dynamically determine which particles and fields exist and whether space itself exists or more generally what the nature of existence might be. But that is precisely what is possible in the context of modern quantum field theory in curved spacetime.”

Asserting a direct, platonic correspondence of physical theories to the nature of reality, as Weinberg, Tong and possibly Krauss have done, is fraught with problems: First, theories are notoriously temporary. We can never know if quantum field theory will not someday be replaced with another more powerful model that makes no mention of fields (or particles, for that matter). Second, as with all physical theories, quantum field theory is a model—a human contrivance. We test our models to find out if they work; but we can never be sure, even for highly predictive models like quantum electrodynamics, to what degree they correspond to “reality.” To claim they do is metaphysics. If there were an empirical way to determine ultimate reality, it would be physics, not metaphysics; but it seems there isn’t.

In the instrumentalist view we have no way of knowing what constitutes the elements of ultimate reality. In that view reality just constrains what we observe; it need not exist in one-to-one correspondence with the mathematical models theorists invent to describe those observations. Furthermore, it doesn’t matter. All these models have to do is describe observations, and they don’t need metaphysics to do that. The explanatory salience of our models may be the core of the romance of science but it plays second chair to its descriptive and predictive capacity. Quantum mechanics is a prime example of this because of its unambiguous usefulness despite lacking an agreed-on philosophical interpretation.

Thus, those who hold to a platonic view of reality are being disingenuous when they disparage philosophy. They are adopting the doctrine of one of the most influential philosophers of all time. That makes them philosophers, too.

Now, not all physicists who criticize philosophers are full-fledged platonists, although many skirt close to it when they talk about the mathematical elements of their models and the laws they invent as if they are built into the structure of the universe. Indeed, the objections of Weinberg, Hawking, Mlodinow, Krauss, and Tyson are better addressed to metaphysics and fail to show sufficient appreciation, in our view, for the vital contributions to human thought that persist in fields like ethics, aesthetics, politics and, perhaps most important, epistemology. Krauss pays these important topics some lip service, but not very enthusiastically.

Of course, Hawking and Mlodinow write mostly with cosmological concerns in mind—and where metaphysical attempts to grapple with the question of ultimate origins trespass on them, they are absolutely correct. Metaphysics and its proto-cosmological speculations, construed as philosophy, were in medieval times considered the handmaiden of theology. Hawking and Mlodinow are saying that metaphysicians who want to deal with cosmological issues are not scientifically savvy enough to contribute usefully. For cosmological purposes, armchair metaphysics is dead, supplanted by the more informed philosophy of physics, and few but theologians would disagree.

Krauss leveled his most scathing criticisms at the philosophy of science, and we suggest that it would have been more constructive had he targeted certain aspects of metaphysics. Andersen, for The Atlantic, interviewed him on whether physics has made philosophy and religion obsolete. And although it hasn’t done so for philosophy, it has for cosmological metaphysics (and the religious claims that depend on it, such as the defunct Kalām cosmological argument positing the necessity of a creator). Surely Krauss had metaphysical attempts to speculate about the universe at least partially in mind, given that the interview addressed his book on cosmology.

Whatever may be the branches of philosophy that deserve the esteem of academics and the public, metaphysics is not among them. The problem is straightforward. Metaphysics professes to be able to hook itself to reality—to legitimately describe reality—but there’s no way to know if it does.

So, although the prominent physicists we have mentioned, and the others who inhabit the same camp, are right to disparage cosmological metaphysics, we feel they are dead wrong if they think they have completely divorced themselves from philosophy. First, as already emphasized, those who promote the reality of the mathematical objects of their models are dabbling in platonic metaphysics whether they know it or not. Second, those who have not adopted platonism outright still apply epistemological thinking in their pronouncements when they assert that observation is our only source of knowledge.

Hawking and Mlodinow clearly reject platonism when they say, “There is no picture- or theory-independent concept of reality.” Instead, they endorse a philosophical doctrine they call model-dependent realism, which is “the idea that a physical theory or world picture is a model (generally of a mathematical nature) and a set of rules that connect the elements of the model to observations.” But they make it clear that “it is pointless to ask whether a model is real, only whether it agrees with observations.”

We are not sure how model-dependent realism differs from instrumentalism. In both cases physicists concern themselves only with observations and, although they do not deny that they are the consequence of some ultimate reality, they do not insist that the models describing those observations correspond exactly to that reality. In any case, Hawking and Mlodinow are acting as philosophers—epistemologists at the minimum—by discussing what we can know about ultimate reality, even if their answer is “nothing.”

All of the prominent critics of philosophy whose views we have discussed think very deeply about the source of human knowledge. That is, they are all epistemologists. The best they can say is they know more about science than (most) professional philosophers and rely on observation and experiment rather than pure thought—not that they aren’t philosophizing. Certainly, then, philosophy is not dead. That designation is more aptly applied to pure-thought variants like those that comprise cosmological metaphysics.

Extreme experiments to test space’s effect on the body

Source: http://www.bbc.com/future/story/20150505-the-numbers-that-lead-to-disaster
(Credit: DLR CC-BY 3.0)

In the sublevels of a German building, volunteers will soon be lying at an unnatural angle, all to help better understand the effects of space travel on the human body. Richard Hollingham reports.

Ominously, most of the facility is indeed just beneath the surface.

Surrounded by utilitarian research labs and office blocks, Envihab resembles a flat rectangular white slab – like a giant block of Lego – resting on the ground.

Sleeping in weightlessness has long been an issue for astronauts (Credit: Science Photo Library)

We enter along a concrete pathway cut through the grassy bank. A security guard opens the glass doors and we descend a staircase into a maze of corridors lined with white walls, white floors and ceilings. There are no windows or pictures, no soft furnishings or colour; nor are there any handles on the doors.

Once inside we could be anywhere on Earth… or, indeed, space. Perhaps an outpost of humanity on a distant world. Or the set of a 1970s episode of Doctor Who. I half expect to see a patrol of Daleks trundling around the corner.

Test for space

This feeling of other-worldliness is deliberate. Envihab is designed to feel like a space station, where scientists, doctors and engineers can simulate the environment beyond the Earth.

“We can control all the environmental conditions – noise, light, temperature, even the mix of gases in the air,” says Ulrich Limper, a cardiologist at DLR’s Institute of Aerospace Medicine, and my guide to the facility.

“It gives us the opportunity to do very controlled studies which are important for spaceflight, but also for science on Earth as well.”

Angling the head down six degrees mimics the effects of weightlessness (Credit: Esa)

Limper uses his key fob to open a set of double doors, which swing open with only the faintest mechanical whirr. We travel through a further set and enter a hospital ward. The white corridors are lined with 12 windowless rooms, each containing a single hospital bed.

“From inside here, there’s no way of knowing whether it’s day or night,” says Limper. “There are no clocks on the walls, and you have no idea what time it is, what season it is.”

But before this gets too nightmarish, Limper is keen to stress that everyone who stays in this facility is a paid volunteer. “We select them during a structured and intense screening process and they know what the challenge will be.”

Simulated weightlessness

In the first major study to be carried out in Envihab, the challenge will be to lie in bed for 60 days in a row to study the effects of long duration spaceflight. The experiment starts this summer and the medical team is currently in the process of selecting 12 participants.

“These bed rest studies give us the opportunity to simulate the effects of prolonged weightlessness,” explains Limper.

Understanding how the body adapts to life without gravity is essential if humans are ever to leave the Earth for more than a few months. A mission to Mars and back – assuming you plan to land – would take at least two years, with at least 18 months of that in space.

Astronauts have complained the effects of weightlessness feel like a head cold (Credit: Science Photo Library)

Studies carried out on astronauts have shown that lack of gravity leads to bone loss and muscle deterioration and causes bodily fluids to pool in the head, making it swell (astronauts really are big-headed). Crew on the International Space Station (ISS) frequently report the feeling of a head cold.

US astronaut Scott Kelly and Russian cosmonaut Mikhail Kornienko have recently begun a year-long mission to investigate the effects of long-duration space flight. However, bed rest studies on the ground in Envihab will enable doctors to examine a larger number of subjects under completely controlled conditions.

But do not imagine it is going to be any more comfortable.

Life at an angle

“To cheat gravity, we tilt the subjects head-down by six degrees,” says Limper. “This is very important, so that the head is below the rest of the body.”

Stuck at this peculiar angle, the volunteers will also be expected to eat a nutritionally controlled diet and go to the toilet using bedpans and urine bottles. They will be monitored 24 hours a day on closed-circuit TV and even be transferred to special waterproof tilted beds to take a shower.

DLR has carried out several previous bed rest studies in partnership with the European Space Agency. Results from these studies have also been used on Earth in studies of bone disease and to plan care for people confined to bed in hospital.

The latest research requires relatively fit men aged between 18 and 40. “A big problem for us is the variability between subjects,” says Limper, “and gender has a big effect but other investigations might use women.”

As well as spending life at an angle of -6 degrees – reading, sleeping, watching movies or playing video games – the subjects will also take part in a range of experiments. Over the three weeks inside the facility, they can expect to be measured, prodded, pricked and scanned in tests of blood, bone and muscle.

The volunteers will spend three weeks inside the facility – in rooms below ground (Credit: Felix Barsnick)

There will, however, be some respite from the regime of rest and test. As one of the purposes of the study is to investigate ways of countering the debilitating effects of microgravity, participants will be given specific exercises to see the effects on muscle tone and health.

Future studies will also employ a device located at the heart of Envihab: a human centrifuge. Contained within a large white (windowless) cylinder, it consists of four arms, around three metres long, arranged in a cross about a central axis. One of the arms is fitted with a bed, so doctors can spin volunteers to simulate varying accelerations.

It is deliberately smaller than most human centrifuges. “We think this is more or less the size we could implement on a space station,” says Limper.

Difficult transition

The aim would be to carry a human centrifuge on a future space station or deep space mission to enable astronauts to exercise under artificial gravity conditions.

Anyone signing up for two months in bed, being subjected to whatever the doctors can throw at them, is going to feel disorientated by the end of the study. To help them cope with the transition back to the real world, the facility includes a “living quarters” area where the volunteers will live and socialise once the bed rest phase of the experiment is over.

It still feels far from home. Although fitted out with comfortable chairs (you can only imagine what it would be like to finally sit down) and a massive video screen, there are no windows or pictures on the walls. The only decoration is a DLR logo on the wall and a red fire extinguisher in the corner.

(Credit: Getty Images)

Who would have thought doing nothing could be so difficult? Yet, despite the challenge of signing up for 60 days in a bed, Limper says that – thanks to careful selection – volunteers seldom drop out.

Even after only 90 minutes underground I have lost all track of time. As we emerge into the daylight, I ask Limper if he would take part in his own experiments.

“I wouldn’t do it,” he replies. “No.”

Five factors that will decide if Philae wakes

Source: http://www.nature.com/news/five-factors-that-will-decide-if-philae-wakes-1.17488

ESA/Rosetta/Philae/CIVA

The first image that Philae sent back from the comet surface.

Ever since the European Space Agency’s Philae lander ran out of batteries on 15 November, just three days after it bounced on to comet 67P/Churyumov–Gerasimenko, scientists have consoled themselves with the hope that the craft is not dead, just sleeping.

Stuck in a shaded area beneath a rocky-looking outcrop on the comet, Philae has been unable to charge its solar-powered battery. But as 67P gets nearer to the Sun, scientists hope that the lander could receive enough solar power to reboot and transmit a signal to the orbiting Rosetta probe. Lander scientists predict that, from 8 to 17 May, Rosetta will be in a good position to hear Philae’s signals — if the lander is awake. Nature takes a look at five factors that will decide whether Philae wakes or sleeps on.

1. Chilly nights

Only once Philae’s thermostat registers that its internal temperature is higher than −45 °C will the lander check to see whether it has enough power to reboot. Reaching that temperature is Philae’s greatest concern, says Stephan Ulamec, lander project manager at the German Aerospace Centre (DLR) near Cologne.

In November, Philae clocked outside temperatures of less than −160 °C. By now, the area around the lander may have warmed by around 40 or 50 °C, says Ulamec. Philae is also heated by its solar absorbers, which maximise the warmth gained from sunlight, and by its solar panels, which generate electricity and turn it directly into heat. The craft is insulated too. But as the comet rotates, Philae experiences only 80-minute bursts of sunlight, and then falls into more than 11 hours of darkness. So if its environment remains very cold, the lander may be fighting a losing battle.
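The numbers above imply a punishing duty cycle, which a couple of lines of Python make explicit (the 12.4-hour rotation figure is quoted later in this article; everything else is the article’s own numbers):

```python
# Philae's sunlight duty cycle, from the article's figures: 80-minute
# bursts of sun in a comet "day" of about 12.4 hours.
sunlit_hours = 80 / 60
comet_day_hours = 12.4

duty_cycle = sunlit_hours / comet_day_hours
print(f"{duty_cycle:.1%} of each rotation in sunshine")  # ~10.8%
# The other ~89% of the time the lander radiates heat into darkness,
# which is why reaching the -45 °C reboot threshold is the main worry.
```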

Even if Philae does wake using the power it generates during sunlight hours, a chilly environment will prevent its battery from being able to store energy for darker times; it is designed to recharge only above 0 °C. The team would consider attempting to charge it at lower temperatures, but is currently making plans that assume the lander will only operate during bursts of sunshine, says Valentina Lommatsch, a member of the lander team at the DLR.

2. Dark shadows

Shadows cast by the terrain around Philae could reduce the solar power and heat that the lander receives, says Lommatsch. Her team has good estimates of Philae’s position and orientation, but knows less about the shape of surrounding outcrops, whose shadows will shift as the comet moves closer to the Sun.

Philae is also tilted, which means that even though the Sun is becoming higher in the sky over the comet, the amount of sunlight hitting solar absorbers on the lander’s lid is diminishing. According to some reconstructions, sunlight will cease to hit the absorbers in June, says Cinzia Fantinati, Philae operations manager at the DLR, although the light hitting solar panels on the lander’s sides may increase. In an effort to maximize the lander’s exposure to sunlight, her team’s last command to Philae before hibernation was for it to rotate and rise on its legs, bringing its largest solar panel into the light. But because this happened during hours of darkness, they could not gauge its success, says Lommatsch.

3. Cracking components

Before Rosetta launched, Philae’s components passed tests down to a temperature of −60 °C. But during the coldest nights on the comet, the lander’s internal temperature is likely to have dropped to around −140 °C, says Ulamec. No one knows how low Philae’s temperature can go, but Ulamec fears that materials contracting under the cold could cause soldering points to snap. The rechargeable battery is particularly vulnerable.

4. Dust up

As 67P heats up, jets of sublimating gas on the comet also throw up increasing amounts of dust, some of which will resettle on the comet floor. Dust could have covered the solar panels and stopped them from working. On the plus side, Ulamec says, data from the lander’s first three days suggest that Philae’s cubbyhole seems to contain little dust. Also, the comet’s gas jets are unlikely to be powerful enough to shift Philae’s position, he says.

5. A question of timing

The most tantalizing possibility is that Philae is already awake, but either does not have enough power to communicate or is already transmitting a signal that Rosetta has not been in a position to hear. If the lander cannot charge its battery, transmissions will come only during the 80 minutes of sunlight that the lander experiences every 12.4 hours.

For the best chance of hearing those signals, Rosetta must be less than 300 kilometres from Philae, on the same side of the comet, and with both crafts’ antennas broadly aligned. From 8 to 17 May, there should be at least 10 opportunities for contact, says Fantinati. There will be chances after that, too, although Rosetta’s flight plan is currently uncertain after a navigation problem caused by comet activity forced it to change orbits in March.

Even if silence continues after the comet’s closest approach to the Sun — on 13 August — that will not be a reason to give up, adds Lommatsch. “Something could still move and Philae could receive a lot more light,” she says. “We’ll have to keep waiting and hoping.”

Spooky Quantum Action Might Hold the Universe Together

Source: http://www.wired.com/2015/05/spooky-quantum-action-might-hold-universe-together/

Tensor networks could connect space-time froth to quantum information.

Hubble Finds Giant Halo Around the Andromeda Galaxy

Source: http://phys.org/news/2015-05-hubble-giant-halo-andromeda-galaxy.html

This diagram shows how scientists determined the size of the halo of the Andromeda galaxy. Because the gas in the halo is dark, the team measured it by using the light from quasars, the very distant bright cores of active galaxies powered by black holes.

Scientists using NASA’s Hubble Space Telescope have discovered that the immense halo of gas enveloping the Andromeda galaxy, our nearest massive galactic neighbor, is about six times larger and 1,000 times more massive than previously measured. The dark, nearly invisible halo stretches about a million light-years from its host galaxy, halfway to our own Milky Way galaxy. This finding promises to tell astronomers more about the evolution and structure of majestic giant spirals, one of the most common types of galaxies in the universe.

“Halos are the gaseous atmospheres of galaxies. The properties of these gaseous halos control the rate at which stars form in galaxies according to models of galaxy formation,” explained the lead investigator, Nicolas Lehner of the University of Notre Dame, Indiana. The gargantuan halo is estimated to contain half the mass of the stars in the Andromeda galaxy itself, in the form of a hot, diffuse gas. If it could be viewed with the naked eye, the halo would be 100 times the diameter of the full Moon in the sky. This is equivalent to the patch of sky covered by two basketballs held at arm’s length.
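That angular size is easy to sanity-check with the small-angle formula, using the million-light-year halo radius above and Andromeda’s distance given in the next paragraph (a back-of-envelope sketch, not the paper’s measurement):

```python
import math

halo_radius_ly = 1.0e6   # "about a million light-years" from the host galaxy
distance_ly = 2.5e6      # Andromeda's distance (next paragraph)

halo_diameter_rad = 2 * math.atan(halo_radius_ly / distance_ly)
moon_diameter_rad = math.radians(0.5)   # the full Moon spans about half a degree

print(math.degrees(halo_diameter_rad))        # ~44 degrees of sky
print(halo_diameter_rad / moon_diameter_rad)  # ~87 -- the order of the quoted "100 times"
```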

The Andromeda galaxy, also known as M31, lies 2.5 million light-years away and looks like a faint spindle, about 6 times the diameter of the full Moon. It is considered a near-twin to the Milky Way galaxy.

Because the gas in Andromeda’s halo is dark, the team looked at bright background objects through the gas and observed how the light changed. This is a bit like looking at a glowing light at the bottom of a pool at night. The ideal background “lights” for such a study are quasars, which are very distant bright cores of active galaxies powered by black holes. The team used 18 quasars residing far behind Andromeda to probe how material is distributed well beyond the visible disk of the galaxy. Their findings were published in the May 10, 2015, edition of The Astrophysical Journal.

Earlier research from Hubble’s Cosmic Origins Spectrograph (COS)-Halos program studied 44 distant galaxies and found halos like Andromeda’s, but never before has such a massive halo been seen in a neighboring galaxy. Because the previously studied galaxies were much farther away, they appeared much smaller on the sky. Only one quasar could be detected behind each faraway galaxy, providing only one light anchor point to map their halo size and structure. With its close proximity to Earth and its correspondingly large footprint on the sky, Andromeda provides a far more extensive sampling of background quasars.

“As the light from the quasars travels toward Hubble, the halo’s gas will absorb some of that light and make the quasar appear a little darker in just a very small wavelength range,” explains co-investigator J. Christopher Howk, also of Notre Dame. “By measuring the dip in brightness in that range, we can tell how much halo gas from Andromeda there is between us and that quasar.”
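The measurement Howk describes is, at heart, an application of the Beer–Lambert law: the fractional dip in the quasar’s light fixes the optical depth of the intervening gas, which scales with the amount of gas along the line of sight. A minimal sketch, with a made-up dip purely for illustration:

```python
import math

flux_unabsorbed = 1.00   # quasar brightness just outside the absorption line
flux_observed = 0.85     # hypothetical 15% dip inside the line (illustrative only)

# Beer-Lambert: F_observed = F_unabsorbed * exp(-tau)
tau = -math.log(flux_observed / flux_unabsorbed)
print(tau)   # ~0.16; for an unsaturated line, the column of absorbing
             # halo gas is proportional to this optical depth
```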

The scientists used Hubble’s unique capability to study the ultraviolet light from the quasars. Ultraviolet light is absorbed by Earth’s atmosphere, which makes it difficult to observe with a ground-based telescope. The team drew from about 5 years’ worth of observations stored in the Hubble data archive to conduct this research. Many previous Hubble campaigns have used quasars to study gas much farther away than—but in the general direction of—Andromeda, so a treasure trove of data already existed.

But where did the giant halo come from? Large-scale simulations of galaxies suggest that the halo formed at the same time as the rest of Andromeda. The team also determined that it is enriched in elements much heavier than hydrogen and helium, and the only way to get these heavy elements is from exploding stars called supernovae. The supernovae erupt in Andromeda’s star-filled disk and violently blow these heavier elements far out into space. Over Andromeda’s lifetime, nearly half of all the heavy elements made by its stars have been expelled far beyond the galaxy’s 200,000-light-year-diameter stellar disk.

What does this mean for our own galaxy? Because we live inside the Milky Way, scientists cannot determine whether or not such an equally massive and extended halo exists around our galaxy. It’s a case of not being able to see the forest for the trees. If the Milky Way does possess a similarly huge halo, the two galaxies’ halos may be nearly touching already and quiescently merging long before the two massive galaxies collide. Hubble observations indicate that the Andromeda galaxy and the Milky Way will merge to form a giant elliptical galaxy beginning about 4 billion years from now.

Astronomers unveil the farthest galaxy

Source: http://phys.org/news/2015-05-astronomers-unveil-farthest-galaxy.html
The galaxy EGS-zs8-1 sets a new distance record. It was discovered in images from the Hubble Space Telescope’s CANDELS survey. Credit: NASA, ESA, P. Oesch and I. Momcheva (Yale University), and the 3D-HST and HUDF09/XDF teams

An international team of astronomers led by Yale University and the University of California-Santa Cruz has pushed back the cosmic frontier of galaxy exploration to a time when the universe was only 5% of its present age.

The team discovered an exceptionally luminous galaxy more than 13 billion years in the past and determined its exact distance from Earth using the powerful MOSFIRE instrument on the W.M. Keck Observatory’s 10-meter telescope, in Hawaii. It is the most distant galaxy currently measured.

The galaxy, EGS-zs8-1, was originally identified based on its particular colors in images from NASA’s Hubble and Spitzer space telescopes. It is one of the brightest and most massive objects in the early universe.

Age and distance are vitally connected in any discussion of the universe. The light we see from our Sun takes just eight minutes to reach us, while the light from distant galaxies we see via today’s advanced telescopes travels for billions of years before it reaches us—so we’re seeing what those galaxies looked like billions of years ago.

“It has already built more than 15% of the mass of our own Milky Way today,” said Pascal Oesch, a Yale astronomer and lead author of a study published online May 5 in Astrophysical Journal Letters. “But it had only 670 million years to do so. The universe was still very young then.” The new distance measurement also enabled the astronomers to determine that EGS-zs8-1 is still forming stars rapidly, about 80 times faster than our galaxy.
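Oesch’s numbers hang together, as a quick check shows (assuming the standard figure of about 13.8 billion years for the present age of the universe):

```python
age_universe_gyr = 13.8        # standard present age of the universe (assumed)
age_at_emission_gyr = 0.67     # "only 670 million years" after the Big Bang

print(f"{age_at_emission_gyr / age_universe_gyr:.1%}")  # ~4.9% -- "5% of its present age"
print(age_universe_gyr - age_at_emission_gyr)           # ~13.1 -- the light travelled
                                                        # "more than 13 billion years"
```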

Only a handful of galaxies currently have accurate distances measured in this very early universe. “Every confirmation adds another piece to the puzzle of how the first generations of galaxies formed in the early universe,” said Pieter van Dokkum, the Sol Goldman Family Professor of Astronomy and chair of Yale’s Department of Astronomy, who is second author of the study. “Only the largest telescopes are powerful enough to reach to these large distances.”

The MOSFIRE instrument allows astronomers to efficiently study several galaxies at the same time. Measuring galaxies at extreme distances and characterizing their properties will be a major goal of astronomy over the next decade, the researchers said.

The new observations establish EGS-zs8-1 at a time when the universe was undergoing an important change: The hydrogen between galaxies was transitioning from a neutral state to an ionized state. “It appears that the young stars in the early galaxies like EGS-zs8-1 were the main drivers for this transition, called reionization,” said Rychard Bouwens of the Leiden Observatory, co-author of the study.

Taken together, the new Keck Observatory, Hubble, and Spitzer observations also pose new questions. They confirm that massive galaxies already existed early in the history of the universe, but they also show that those galaxies had very different physical properties from what is seen around us today. Astronomers now have strong evidence that the peculiar colors of early galaxies—seen in the Spitzer images—originate from a rapid formation of massive, young stars, which interacted with the primordial gas in these galaxies.

The observations underscore the exciting discoveries that are possible when NASA’s James Webb Space Telescope is launched in 2018, note the researchers. In addition to pushing the cosmic frontier to even earlier times, the telescope will be able to dissect the galaxy light of EGS-zs8-1 seen with the Spitzer telescope and provide us with more detailed insights into its gas properties.

“Our current observations indicate that it will be very easy to measure accurate distances to these distant galaxies in the future with the James Webb Space Telescope,” said co-author Garth Illingworth of the University of California-Santa Cruz. “The result of JWST’s upcoming measurements will provide a much more complete picture of the formation of galaxies at the cosmic dawn.”

Astrophysicists offer proof that famous image shows forming planets

Source: http://phys.org/news/2015-05-astrophysicists-proof-famous-image-planets.html

by Don Campbell

This image sparked scientific debate when it was released last year, with researchers arguing over whether newly forming planets were responsible for gaps in the dust and gas swirling around the young star. Credit: Atacama Large Millimeter/submillimeter Array (ALMA)

A recent and famous image from deep space marks the first time we’ve seen a forming planetary system, according to a study by University of Toronto astrophysicists.

The team, led by Daniel Tamayo from the Centre for Planetary Science at U of T Scarborough and the Canadian Institute for Theoretical Astrophysics, found that circular gaps in a disk of dust and gas swirling around the young star HL Tau are in fact made by forming planets.

“HL Tau likely represents the first image taken of the initial locations of planets during their formation,” says Tamayo. “This could be an enormous step forward in our ability to understand how planets form.”

The image of HL Tau, taken in October 2014 by the state-of-the-art Atacama Large Millimeter/submillimeter Array (ALMA) located in Chile’s Atacama Desert, sparked a flurry of scientific debate.

While those who observed the original image claimed that planets were most likely responsible for carving the gaps, some remained skeptical. It had been suggested that the gaps, especially the outer three, could not represent forming planets because they are so close together. It was argued that planets massive enough to carve such gaps should be scattered violently by the force of gravity and ejected from the system early on in its development.

But Tamayo’s study is the first to suggest that the gaps are evidence of forming planets, because they are separated by amounts consistent with what’s called a special resonant configuration. In other words, these planets avoid violent collisions with each other by having specific orbital periods where they miss each other, similar to how Pluto has avoided Neptune for billions of years despite the two orbits crossing one another.
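Kepler’s third law is all you need to see the Pluto–Neptune example in action; a near-ratio of small integers between orbital periods is what “resonant” means here (a rough sketch using round-number orbit sizes):

```python
# Kepler's third law: P^2 is proportional to a^3, so P (in years) = a**1.5
# for a in astronomical units. Semi-major axes are round numbers.
semimajor_axis_au = {"Neptune": 30.1, "Pluto": 39.5}
period_years = {name: a ** 1.5 for name, a in semimajor_axis_au.items()}

print(period_years)   # ~165 yr for Neptune, ~248 yr for Pluto
print(period_years["Pluto"] / period_years["Neptune"])   # ~1.50, a 3:2 resonance:
# Neptune completes 3 orbits for every 2 of Pluto's, a rhythm that keeps
# the two bodies from ever meeting where their orbits cross.
```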

Tamayo created two videos to show how HL Tau would appear in both resonant and non-resonant configurations.

The system can be much more stable in a resonant configuration, and it’s a natural state for planets in the HL Tau system to migrate to, says Tamayo.

The HL Tau system is less than a million years old, about 17.9 billion kilometres in radius and resides 450 light years from Earth in the constellation Taurus.

Since young systems like HL Tau are shrouded by a thick cloud of gas and dust, they can’t be observed using visible light. ALMA resolves that issue by using a series—or an array—of telescopes located 15 kilometres apart that use much longer wavelengths. The result is unprecedented access to high resolution images that Tamayo says will continue to revolutionize the study of planetary formation.

“We’ve discovered thousands of planets around other stars and a big surprise is that many of the orbits are much more elliptical than those found in our solar system,” said Tamayo.

This and future ALMA discoveries may be the key to connecting these discovered planets to their original birth locations.

While the HL Tau system remains stable at its relatively young age, Tamayo says over billions of years it will act as a “ticking time bomb.” Eventually the planets will scatter, ejecting some and leaving the remaining bodies on elliptical orbits like the ones found around older stars.

Our solar system does not seem to have undergone such a dramatic scattering event, notes Tamayo. Future observations could also go a long way in determining whether our solar system is typical or an oddity ideally suited for life.

“If further observations show these to be the typical starting conditions around other stars, it would reveal our solar system to be a remarkably special place,” says Tamayo.

Multicolor meta-hologram produces light across entire visible spectrum
Source: http://phys.org/news/2015-05-multicolor-meta-hologram-entire-visible-spectrum.html#jCp

by Lisa Zyga

(Phys.org)—There are many different ways to generate a hologram, each with its own advantages and disadvantages. Trying to maximize the advantages, researchers in a new study have designed a hologram made of a metamaterial consisting of aluminum nanorods that can produce light across the entire visible spectrum, and do so in a way that yields brighter images than other methods.

The researchers, led by Din Ping Tsai at National Taiwan University and Academia Sinica, both in Taipei, Taiwan, have published a paper on the new hologram in a recent issue of Nano Letters.

As the researchers explain, multicolor holograms have existed for many years and are often used on credit cards and for other security purposes. These “rainbow holograms” mix red, blue, and green light under white light illumination to produce a variety of colors. The main drawback, however, is that a viewer sees different colors depending on the viewing angle, which has limited the applications of these holograms.

More recently, researchers have demonstrated that an alternative way to generate multicolor holograms involves metamaterials—man-made materials composed of repeating patterns of small structures, allowing their optical properties to be tuned. Holograms made of metamaterials are called “meta-holograms.”

In general, holograms use either amplitude modulation or phase modulation of light waves to achieve the holographic effect. In the new study, the researchers explain that phase modulation is more desirable because it produces a brighter image. However, so far phase-modulation-based multicolor meta-holograms haven’t been successfully achieved, because phase modulation with the gold and silver typically used in meta-holograms simply cannot extend across the entire visible spectrum.
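The brightness advantage of phase modulation can be demonstrated with the simplest possible case, a sinusoidal diffraction grating (a numerical sketch, not the paper’s nanorod design):

```python
import numpy as np

n = 1024
x = np.arange(n)
period = 16                              # grating period in samples (arbitrary)
phase = 2 * np.pi * x / period

amplitude_grating = 0.5 * (1 + np.cos(phase))   # amplitude modulation absorbs light
phase_grating = np.exp(1j * phase)              # phase modulation transmits everything

def first_order_power(t):
    """Fraction of the incident light diffracted into the +1 order."""
    spectrum = np.abs(np.fft.fft(t) / n) ** 2
    return spectrum[n // period]

print(first_order_power(amplitude_grating))   # 0.0625 -- barely 6% of the light
print(first_order_power(phase_grating))       # 1.0    -- all of it
```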
In the new paper, the researchers built the metamaterial from nanorods made of aluminum, which does not suffer from the same limitations as gold and silver, and so can produce light across the entire visible spectrum. The new method is the first demonstration of a phase-modulated, full-color meta-hologram made of aluminum nanorods.

The nanorods have different lengths (50 to 150 nm), with longer rods resonating at longer wavelengths of light to produce the full spectrum of colors. The technique can also project different images to different locations on the display surface.

“Compared to the meta-holograms in the literature, our proposed meta-hologram consisting of low-cost and mass-producible aluminum has polarization-switchable and color-multiplexing images that cannot be demonstrated by the widely used metals, such as gold and silver,” Tsai told Phys.org.

The researchers expect that the technique can be adapted to generate 3D images by using cross nanorods that consist of two sets of perpendicular aluminum rods, each of which produces a single image but with a different polarization. The dual images could have applications in glasses-free 3D imaging and data storage.

“Our future plans aim to enhance the efficiency of the reported meta-hologram and demonstrate a multi-dimensional meta-hologram which is capable of reconstructing polarization-dependent color images on different focal planes,” Tsai said.


What would you see in a black hole?

Source: http://www.bbc.com/future/story/20150501-what-youd-see-in-a-black-hole

Something about a black hole just pulls you in. Sure, its gravity is so strong that not even light can elude its grasp. But, there’s something else, something harder to pinpoint. Maybe it’s a black hole’s absolute darkness, a mysterious, infinite chasm that dares you – or even compels you – to venture closer.

A trip into a black hole is a one-way journey. Once you cross the event horizon – the point at which light can’t escape – there’s no turning back. Most likely, you’d die a violent death. If you’re not deterred, let’s at least explore what we might see if we were to visit one.

(Credit: Science Picture Library)

When a massive star exhausts its fuel, it collapses under its own weight and implodes into a black hole. Only stars with enough heft – those maybe about 25 times more massive than our Sun – will create one. About one out of every thousand stars in the galaxy is massive enough to make a black hole. The Milky Way has at least 100 billion stars, which means about 100 million black holes are lurking out there in the galaxy. But remember, space is big. Even if you’re travelling at the speed of light, the nearest black hole will still take a few thousand years to reach.
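The census arithmetic in that paragraph is simple enough to spell out (all inputs are the article’s round numbers, not precise measurements):

```python
stars_in_milky_way = 100e9      # "at least 100 billion stars"
heavy_enough = 1 / 1000         # "about one out of every thousand" stars

print(stars_in_milky_way * heavy_enough)   # 1e8 -- "about 100 million black holes"
```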

But let’s say you master interstellar travel, whether via warp drive or wormholes, and you reach one of these black holes. What will you see?

Circling the drain

Well, nothing really. A lone black hole is, unsurprisingly, black. If you circle around it, you’ll notice that it’s spherical, unlike those flat, Acme portable holes in Road Runner cartoons. And if it’s spinning – which is likely, as most things in the universe rotate to some degree – then the black hole will be wider around the middle, rather than a perfect sphere.

For a more dramatic view, though, zip on over to the Milky Way’s centre, the home of a supermassive black hole nearly four million times more massive than the Sun. The black hole’s gravity has gathered lots of gas and dust, which has accumulated into a disc that’s spiralling into the hole – circling the drain, so to speak. As the material gets consumed, friction heats it up to billions of degrees, producing lots of radiation, and outflows of energy and charged particles.
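For a sense of scale, the size of that central black hole follows from its mass alone: the Schwarzschild radius r_s = 2GM/c² gives the event horizon of a non-spinning hole. A sketch using the article’s “nearly four million” solar masses:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

mass = 4e6 * M_SUN                   # the Milky Way's central black hole
r_s = 2 * G * mass / c ** 2          # Schwarzschild radius

print(r_s / 1e9)         # ~12 million km
print(r_s / 1.496e11)    # ~0.08 AU -- it would fit well inside Mercury's orbit
# The "shadow" described below appears about five times bigger still.
```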

The enormous mass of black holes means light cannot escape the pull of their gravity (Credit: Science Picture Library)

The hot disc would be quite a sight. As for the black hole itself, you wouldn’t be able to see it directly, as it’s enshrouded in gas and dust. But, you can see how the black hole’s gravity bends and warps rays of light around it, creating a visual imprint in the surrounding material called the black hole’s shadow. The gravity warps the image of the shadow itself, making it appear about five times bigger than the black hole.

Normally, we think of light travelling in straight beams – photons zooming inexorably forward. But near a black hole, the powerful gravity tugs on photons, swinging them around the hole into orbits. Some of those photons manage to escape and reach your eyes (or telescope), and what you’d see is a bright ring bordering the shadow.

Meanwhile, the inner part of the disc of material swirls around the black hole at speeds approaching that of light. According to Einstein’s theory of relativity, a light source will appear brighter if it’s hurtling toward you. If you’re looking at the black hole such that the disc is somewhat edge-on, the part of the disc that’s moving toward you will glow much brighter, producing a bright crescent on that side of the black hole.

Bright and clear

So while you can’t see the black hole directly, you’d see its shadow, surrounded by a bright ring and crescent. Some researchers worried that some of the gas, dust, and charged particles spewing out of the disc might obscure this dramatic image. To envision exactly what the black hole’s shadow would look like, researchers created some of the most accurate computer simulations yet that incorporate all the physics of the gas and gravity around the black hole.

It turns out that the view would remain bright and clear, says Feryal Ozel, an astrophysicist at the University of Arizona who helped produce the simulations. They make for some cool movies but more importantly, they help astronomers anticipate what they will see when they observe the shadow of the Milky Way’s black hole for real.

(Credit: Getty Images)

By combining the powers of up to 11 existing telescopes around the world, astronomers are creating one huge, Earth-sized instrument to actually see, for the first time, the black hole’s shadow and its characteristic crescent and ring. “That’s my hope and dream, that we’ll see a ring that’s brighter on one side than the other,” Ozel says.

This planet-sized telescope, called the Event Horizon Telescope, will include instruments from the South Pole to Chile, using supercomputers to process the vast chunks of data. “That gives us a higher level of magnification than any telescope that’s ever been built,” says Shep Doeleman, an astronomer at the Massachusetts Institute of Technology (MIT) who’s leading the EHT project. Resolving the black hole’s shadow from Earth is equivalent to spotting a grapefruit on the Moon, he says.
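The grapefruit comparison follows from the diffraction limit, which sets a telescope’s sharpest view at roughly the observing wavelength divided by the aperture; here the aperture is the Earth itself. The 1.3 mm wavelength below is an assumption typical of millimetre-wave arrays of this kind:

```python
wavelength_m = 1.3e-3     # assumed millimetre-wave observing wavelength
baseline_m = 1.27e7       # roughly Earth's diameter

resolution_rad = wavelength_m / baseline_m        # diffraction limit, ~1e-10 rad

grapefruit_m = 0.12                               # size of a grapefruit
moon_distance_m = 3.84e8
grapefruit_rad = grapefruit_m / moon_distance_m   # ~3e-10 rad

print(resolution_rad, grapefruit_rad)   # the same order of magnitude: an
# Earth-sized dish really could pick out a grapefruit on the Moon.
```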

Smashed into smithereens

This spring, the researchers got seven telescopes hooked up and ready to go. By 2017, Doeleman hopes to have all of them set up, and people will be able to directly see a black hole. Indeed, getting an image is groundbreaking, he says, and it’ll offer the strongest proof yet that black holes do exist (all evidence so far has been indirect, for example based on a black hole’s gravitational influence on nearby stars at the galactic centre). Physicists will also be able to make the most detailed observations ever on what goes on around a black hole, allowing them to test the intricate details of Einstein’s theory of gravity.

But maybe simply a view isn’t enough, and you still want to go inside the black hole. Unfortunately, physicists aren’t quite sure what will happen. The conventional hypothesis is that you’d get spaghettified. If you leap into the black hole feet first, your feet will feel stronger gravity than your head. As you approach the hole, the difference in gravity at your feet and your head gets bigger and bigger until you’re ripped apart. Soon, this tidal gravity, as it’s called, will tear every cell, molecule, and atom in your body into smithereens.

Could the event horizon be a giant wall of fire? (Credit: Science Picture Library)

According to the maths, if the black hole is relatively small – a few tens of times the mass of the Sun – then spaghettification will happen long before you cross the event horizon, the point at which light can no longer escape the black hole’s gravity. If the black hole is enormous, several billion times the mass of the Sun, then you would cross the event horizon just fine. Spaghettification awaits later.
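The mass dependence comes from the tidal formula: the head-to-foot difference in pull goes as 2GMh/r³, and at the horizon (r = 2GM/c²) that difference shrinks as 1/M². A rough Newtonian sketch (the 4-billion-solar-mass figure is just one example of “several billion”):

```python
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def tidal_pull_at_horizon(mass_kg, body_height_m=2.0):
    """Newtonian head-to-foot acceleration difference at the event horizon."""
    r_s = 2 * G * mass_kg / c ** 2
    return 2 * G * mass_kg * body_height_m / r_s ** 3

print(tidal_pull_at_horizon(10 * M_SUN))    # ~2e8 m/s^2: shredded well outside
print(tidal_pull_at_horizon(4e9 * M_SUN))   # ~1e-9 m/s^2: you'd never notice
```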

But in 2012, while trying to understand whether information disappears into a black hole forever, Joseph Polchinski and other physicists realised another fate was possible. According to quantum mechanics, they say, the event horizon becomes a giant wall of fire that incinerates you once you cross it. You don’t even get the chance of being spaghettified.

Many physicists didn’t like this idea. According to one tenet of Einstein’s relativity, a person falling through the event horizon shouldn’t feel anything different, just floating in space. A firewall, then, would violate the “equivalence principle”, a venerable rule that physicists would be loath to discard so easily. They have thus tried idea after idea to resolve what’s become known as the firewall paradox. No one’s agreed on anything just yet.

To find out once and for all what happens inside a black hole, you might simply have to go inside one. The problem is you won’t then be able to tell anyone what you’ve seen.

Is the universe a hologram?

Vienna University of Technology. “Is the universe a hologram?” ScienceDaily, 27 April 2015. <www.sciencedaily.com/releases/2015/04/150427101633.htm>.
Is our universe a hologram?
Credit: TU Wien

At first glance, there is not the slightest doubt: to us, the universe looks three dimensional. But one of the most fruitful theories of theoretical physics in the last two decades is challenging this assumption. The “holographic principle” asserts that a mathematical description of the universe actually requires one fewer dimension than it seems. What we perceive as three dimensional may just be the image of two dimensional processes on a huge cosmic horizon.

Up until now, this principle has only been studied in exotic spaces with negative curvature. This is interesting from a theoretical point of view, but such spaces are quite different from the space in our own universe. Results obtained by scientists at TU Wien (Vienna) now suggest that the holographic principle even holds in a flat spacetime.

The Holographic Principle

Everybody knows holograms from credit cards or banknotes. They are two dimensional, but to us they appear three dimensional. Our universe could behave quite similarly: “In 1997, the physicist Juan Maldacena proposed the idea that there is a correspondence between gravitational theories in curved anti-de Sitter spaces on the one hand and quantum field theories in spaces with one fewer dimension on the other,” says Daniel Grumiller (TU Wien).

Gravitational phenomena are described in a theory with three spatial dimensions, the behaviour of quantum particles is calculated in a theory with just two spatial dimensions — and the results of both calculations can be mapped onto each other. Such a correspondence is quite surprising. It is like finding out that equations from an astronomy textbook can also be used to repair a CD-player. But this method has proven to be very successful. More than ten thousand scientific papers about Maldacena’s “AdS-CFT-correspondence” have been published to date.

Correspondence Even in Flat Spaces

For theoretical physics, this is extremely important, but it does not seem to have much to do with our own universe. Apparently, we do not live in such an anti-de Sitter space. These spaces have quite peculiar properties: they are negatively curved, and any object thrown away on a straight line will eventually return. “Our universe, in contrast, is quite flat — and on astronomical distances, it has positive curvature,” says Daniel Grumiller.

However, Grumiller has suspected for quite some time that a correspondence principle could also hold true for our real universe. To test this hypothesis, gravitational theories have to be constructed that do not require exotic anti-de Sitter spaces but instead live in flat space. For three years, he and his team at TU Wien (Vienna) have been working on that, in cooperation with the University of Edinburgh, Harvard, IISER Pune, MIT and the University of Kyoto. Now Grumiller and colleagues from India and Japan have published an article in the journal Physical Review Letters, confirming the validity of the correspondence principle in a flat universe.

Calculated Twice, Same Result

“If quantum gravity in a flat space allows for a holographic description by a standard quantum theory, then there must be physical quantities which can be calculated in both theories — and the results must agree,” says Grumiller. One key feature of quantum mechanics in particular, quantum entanglement, has to appear in the gravitational theory.

When quantum particles are entangled, they cannot be described individually. They form a single quantum object, even if they are located far apart. There is a measure for the amount of entanglement in a quantum system, called the “entropy of entanglement.” Together with Arjun Bagchi, Rudranil Basu and Max Riegler, Daniel Grumiller managed to show that this entropy of entanglement takes the same value in flat quantum gravity and in a low-dimensional quantum field theory.
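
For reference, the “entropy of entanglement” has a compact standard definition (textbook notation, not anything specific to the paper): split the system into parts A and B, trace out B, and take the von Neumann entropy of what remains,

```latex
\rho_A = \mathrm{Tr}_B\,\rho_{AB}, \qquad
S_A = -\,\mathrm{Tr}\!\left(\rho_A \ln \rho_A\right)
```

It vanishes for unentangled states and grows as A and B become more strongly entangled, which is what makes it usable as a cross-check between the two theories.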

“This calculation affirms our assumption that the holographic principle can also be realized in flat spaces. It is evidence for the validity of this correspondence in our universe,” says Max Riegler (TU Wien). “The fact that we can even talk about quantum information and entropy of entanglement in a theory of gravity is astounding in itself, and would hardly have been imaginable only a few years back. That we are now able to use this as a tool to test the validity of the holographic principle, and that this test works out, is quite remarkable,” says Daniel Grumiller.

This, however, does not yet prove that we are indeed living in a hologram — but there is apparently growing evidence for the validity of the correspondence principle in our own universe.




Journal Reference:

  1. Arjun Bagchi, Rudranil Basu, Daniel Grumiller, Max Riegler. Entanglement Entropy in Galilean Conformal Field Theories and Flat Holography. Physical Review Letters, 2015; 114 (11). DOI: 10.1103/PhysRevLett.114.111602

Dark Matter Is Necessary For The Origin Of Life

Source: http://www.forbes.com/sites/ethansiegel/2015/04/29/dark-matter-is-necessary-for-the-origin-of-life/

When you look up past the stars of our Milky Way and out at the galaxies beyond, it might surprise you to learn that most of what we see isn’t most of what’s actually there. Sure, in our Solar System, 99.8% of the mass is in our Sun, and astronomy has taught us a tremendous amount about how stars work. So you might think that if you measure all the starlight — of all different types and wavelengths — coming from each individual galaxy we observe, we can figure out how much mass is in there.

On the other hand, we know how the laws of gravitation work, and how the motions of gravitationally bound objects depend wholly on the total mass of the system and how that mass is distributed. So we can look both at individual galaxies and how the stars within them orbit, as well as how entire galaxies move within giant galactic clusters. When we make all of these measurements, we find a shocking fact: the measurement of mass from light and the measurement of mass from gravitation are off from one another by a factor of 50.
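
As a back-of-the-envelope illustration of that mismatch (all numbers here are rough, illustrative values, not the measurements behind the figure): a Milky Way-like galaxy shows a roughly flat rotation curve near 220 km/s, and Newtonian gravity then requires an enclosed mass M = v²r/G that keeps growing with radius, far past what the starlight can supply:

```python
# Illustrative rotation-curve argument: a flat orbital speed implies
# enclosed mass growing linearly with radius, M(<r) = v^2 r / G.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

v = 220e3            # assumed flat orbital speed, m/s
for r_kpc in (10, 30, 50):
    m_dyn = v**2 * (r_kpc * KPC) / G
    print(f"r = {r_kpc:2d} kpc -> enclosed mass ~ {m_dyn / M_SUN:.1e} solar masses")
# Starlight and gas account for roughly 1e11 solar masses in such a
# galaxy, yet the dynamical mass keeps climbing well beyond that.
```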

Image credit: M. Cappellari and the Sloan Digital Sky Survey.

Now, we’ve discovered lots of other types of matter in the Universe besides stars, including:

  • stellar remnants like white dwarfs, neutron stars and black holes,
  • asteroids, planets and other objects with masses too low (like brown dwarfs) to become stars,
  • neutral gas both within galaxies and in the space between them,
  • light-blocking dust and nebulous regions,
  • and ionized plasma, found mostly in the intergalactic medium.

All of these forms of normal matter — or matter originally made of the same things we are: protons, neutrons and electrons — do in fact contribute to what’s there, with gas and plasma in particular contributing even more than stars. But even that only gets us up to about 15-to-17% of the total amount of matter we need to explain gravitation. For the rest of it, we need a new form of matter that isn’t just different from protons, neutrons and electrons, but that doesn’t match up with any of the known particles in the Standard Model. We need some type of dark matter.
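
That 15-to-17% figure is consistent with a quick ratio of the standard cosmological density parameters (round, roughly Planck-era values, quoted here only as a sanity check):

```latex
% Fraction of all matter that is normal (baryonic) matter:
\frac{\Omega_b}{\Omega_m} \;\approx\; \frac{0.05}{0.31} \;\approx\; 0.16
```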

Images credit: X-ray: NASA/CXC/UVic./A.Mahdavi et al. Optical/Lensing: CFHT/UVic./A.Mahdavi et al. (top left); X-ray: NASA/CXC/UCDavis/W.Dawson et al.; Optical: NASA/STScI/UCDavis/W.Dawson et al. (top right); ESA/XMM-Newton/F. Gastaldello (INAF/IASF, Milano, Italy)/CFHTLS (bottom left); X-ray: NASA, ESA, CXC, M. Bradac (University of California, Santa Barbara), and S. Allen (Stanford University) (bottom right). These colliding galaxy clusters show a clear separation between the normal matter (in pink) and the gravitational effects (in blue).

But what might surprise you is that we don’t just need dark matter to explain galactic rotation, cluster motions and collisions, but to explain the origin of life itself! To understand why, all you need to remember is that the Universe began from a hot, dense state — the hot Big Bang — where everything started off as a mostly uniform sea of individual, free, high-energy particles. As the Universe expands and cools, we can form protons, neutrons, and the lightest nuclei (hydrogen, deuterium, helium and a trace amount of lithium), but nothing else. It isn’t until tens or even hundreds of millions of years later that matter will collapse into dense enough regions to form stars and what will eventually become galaxies.

All of this will happen just fine, albeit differently in detail, whether there is plenty of dark matter or none at all. But in order to make the elements necessary for life in great abundance — elements like carbon, oxygen, nitrogen, phosphorus and sulphur — they need to be forged in the cores of the most massive stars in the Universe. They do us no good in there, though; in order to enable the creation of rocky planets, organic molecules and (eventually) life, the stars need to eject those heavier atoms back into the interstellar medium, where they can be recycled into future generations of stars. To do that, we need a supernova explosion.

Image credit: NASA / JPL-Caltech / O. Krause et al., combining Hubble (visible), Spitzer (IR) and Chandra (X-ray) data.

But we’ve observed these explosions in great detail, and in particular, we know how quickly this material gets ejected from the stars in their death throes: many hundreds of kilometers per second. That’s fast, sure, but most importantly, it’s too fast for a galaxy that didn’t contain significant amounts of dark matter! Without the additional gravitation of a massive dark matter halo surrounding a galaxy, the overwhelming amount of material ejected from a supernova would escape from galaxies and wind up floating freely in the intergalactic medium, never to become incorporated into future generations of star systems. In a Universe without dark matter, we’d still have stars and galaxies, but the only planets would be gas giant worlds, with no rocky ones, no liquid water, and insufficient ingredients for life as we know it.
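
To put rough numbers on that competition (everything here is an illustrative assumption: the halo is modelled as a singular isothermal sphere with a flat 220 km/s circular speed truncated at 200 kpc, and the luminous mass as a point mass):

```python
# Can supernova ejecta at many hundreds of km/s escape the galaxy?
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
KPC = 3.086e19       # kiloparsec, m

def v_esc_point_kms(mass_msun, r_kpc):
    """Escape speed from radius r around an enclosed point mass, km/s."""
    return math.sqrt(2 * G * mass_msun * M_SUN / (r_kpc * KPC)) / 1e3

def v_esc_halo_kms(v_c_kms, r_kpc, r_max_kpc):
    """Escape speed from radius r inside a singular isothermal halo
    with flat circular speed v_c, truncated at r_max, in km/s."""
    return math.sqrt(2) * v_c_kms * math.sqrt(1 + math.log(r_max_kpc / r_kpc))

# Luminous matter only: ~6e10 solar masses inside the Sun's ~8 kpc orbit.
print(f"stars and gas alone: ~{v_esc_point_kms(6e10, 8):.0f} km/s")
# Same radius, but sitting inside a 220 km/s dark matter halo.
print(f"with a dark halo:    ~{v_esc_halo_kms(220, 8, 200):.0f} km/s")
# Ejecta at many hundreds of km/s clear the first barrier (~250 km/s)
# far more easily than the second (~640 km/s).
```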

Image credit: ESO/L. Calçada.

It’s only the presence of these massive dark matter halos, surrounding our galaxies, that allows the carbon-based life that took hold on Earth — or on a planet like Earth, for that matter — to even be a possibility within our Universe. As we’ve come to understand what makes up our Universe and how it came to be the way it is, we’re left with one inescapable conclusion: dark matter is absolutely necessary for the origin of life. Without it, the chemistry that underlies it all could never have occurred.

Ethan Siegel is the writer and founder of Starts With A Bang, and professor of physics at Lewis & Clark College in Portland, OR. His first book, Beyond the Galaxy, is due out later this year.

Pluto may have icy cap

Source: http://www.nature.com/news/pluto-may-have-icy-cap-1.17454

Latest images from New Horizons spacecraft show bright spot near dwarf planet’s pole.

Alexandra Witze

29 April 2015

Credit: NASA

A bright spot on Pluto (right) could be nitrogen ice. The dwarf planet is shown with its largest moon, Charon.

Images from NASA’s New Horizons spacecraft suggest that Pluto has a polar cap made of some kind of ice.


The pictures, taken over the past few weeks and released on 29 April, show Pluto with its largest moon, Charon. The dwarf planet’s surface is mottled with light and dark patches, each measuring hundreds of kilometres across. But its pole remains bright no matter how Pluto rotates, suggesting that a highly reflective icy cap may exist there. “It’s very suspiciously suggestive,” says Alan Stern, the mission’s principal investigator, who is based at the Southwest Research Institute in Boulder, Colorado.

“It’s rare to see any planet in the Solar System at this low resolution displaying such strong surface markings,” he adds. “It’s a mystery whether these bright and dark regions are caused by geology or topography or composition.”

Light and dark patches have previously been spotted on Pluto by the Hubble Space Telescope. New Horizons, which is currently about 90 million kilometres from Pluto, is now taking higher-resolution images than Hubble did, but it is not as sensitive to faint objects. Image-processing techniques have allowed mission scientists to tease out surface details about a month earlier than expected, says project scientist Hal Weaver, of the Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland.

Future measurements by New Horizons’ spectrometer will reveal what Pluto’s bright polar feature is made of. The dwarf planet’s surface is thought to contain a number of ices, including frozen nitrogen, carbon monoxide and methane. Some of these have probably vaporized off the surface to form a thin atmosphere.

The New Horizons mission is putting its raw images online within 48 hours of them arriving back on Earth. This is in stark contrast to the practice of the European Space Agency’s Rosetta team, which is releasing only limited pictures of comet 67P/Churyumov–Gerasimenko.

Secret of record-breaking superconductor explained

Source: http://physicsworld.com/cws/article/news/2015/apr/24/secret-of-record-breaking-superconductor-explained

Conventional superconductivity can occur at much higher temperatures than previously expected, according to calculations made by an international team of physicists led by Matteo Calandra of the IMPMC Institute in Paris. The researchers have developed a theoretical model for the record high-temperature superconductivity reported last year in hydrogen sulphide, which the team says arises from relatively simple interactions similar to those underlying conventional low-temperature superconductors. This is different to other high-temperature materials, in which the superconductivity is caused by complicated and poorly understood processes.

Low-temperature superconductors are usually well described by the BCS theory of superconductivity, whereby interactions with lattice vibrations called phonons cause electrons to pair up to form “Cooper pairs” that can travel through the material without encountering any resistance. Such materials stop superconducting above a transition temperature (Tc) fairly close to absolute zero – the highest to date being just 39 K. High-temperature superconductors, in contrast, have transition temperatures up to 133 K.
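
For reference, the weak-coupling BCS estimate ties the transition temperature to a characteristic phonon frequency and to the strength of the pairing interaction (this is the generic textbook expression, not the team’s full anharmonic calculation):

```latex
% omega_D: Debye (phonon) frequency; N(0): electronic density of
% states at the Fermi level; V: electron-phonon pairing strength.
k_B T_c \;\simeq\; 1.13\,\hbar\,\omega_D\,
\exp\!\left(-\frac{1}{N(0)\,V}\right)
```

Because the prefactor scales with the phonon frequency, light atoms that vibrate quickly push Tc upward, which is exactly the role hydrogen plays in the argument below.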

Despite the vast amount of research done on high-temperature superconductors since the first such material was discovered in 1986, much of the physics underlying their superconductivity remains unknown. This mystery appeared to deepen late last year when Mikhail Eremets and colleagues at the Max Planck Institute for Chemistry in Mainz, Germany, found that when hydrogen sulphide is subjected to extremely high pressure (200 GPa) it has a Tc of 190 K. While the Tc of high-temperature superconductors can be increased by applying pressure – the current record is 164 K – hydrogen sulphide looks set to become the new record-holder if the measurement can be confirmed.

Conventional yet high temperature

The strange thing about hydrogen sulphide is that – unlike other high-temperature superconductors – it does not also exist in a magnetic state, and therefore more closely resembles a conventional superconductor. This observation led Calandra and colleagues in Canada, China, France, Spain and the UK to use BCS theory as the starting point for their calculations.

Key to understanding superconductivity in hydrogen sulphide are the interactions between electrons and the vibrating hydrogen atoms. Hydrogen has a very low mass and therefore tends to vibrate at relatively high frequencies. These high-frequency modes interact very strongly with electrons and so should result in a superconductor with a very high Tc. Indeed, when Calandra and colleagues used BCS theory to calculate the Tc of high-pressure hydrogen sulphide, they obtained a value of about 250 K – much higher than the observed 190 K.

The team believes that the actual Tc is somewhat lower because basic BCS theory assumes that the atoms in the material vibrate as simple harmonic oscillators. However, light atoms such as hydrogen undergo more complicated anharmonic oscillations, and these can significantly weaken the interactions that create Cooper pairs. After taking anharmonic effects into consideration, Calandra and colleagues calculate a much more realistic Tc of 194 K – in close agreement with Eremets’ measurement.

Upping the pressure

The calculations also suggest that the interplay between anharmonic effects and other properties of the material will result in the Tc remaining constant in the pressure range 200–250 GPa. While observing this effect in the lab would be a good test of the calculations, Calandra says he is unaware of any measurements above 200 GPa. Indeed, he points out that the 200 GPa experiment was extremely difficult to perform, and that Eremets and colleagues are probably the only researchers capable of studying hydrogen sulphide at higher pressures.

“Eremets’ discovery and our theoretical work pave the way for the quest for high-Tc superconductivity in hydrides and hydrogen-based materials in general,” says Calandra. “In this class of materials it should be possible to find superconductors with a Tc of the same order as (or maybe higher than) hydrogen sulphide at high pressure,” he adds.

Elisabeth Nicol of the University of Guelph in Canada is enthusiastic about the results. “What is amazing is that this says we can actually have an electron–phonon superconductor that operates at 190 K,” she says. Nicol, who was not involved in the calculations, adds that “While technically the theory of superconductivity itself does not put a limit on Tc, consensus has been that electron–phonon superconductors have low Tc. Clearly, we are learning that there are still possibilities out there for conventional superconductivity.”

The work is published in Physical Review Letters.

About the author

Hamish Johnston is editor of physicsworld.com

Cyclotron radiation from a single electron is measured for the first time

Source: http://physicsworld.com/cws/article/news/2015/apr/27/cyclotron-radiation-from-a-single-electron-is-measured-for-the-first-time

The cyclotron radiation emitted by a single electron has been measured for the first time by a team of physicists in the US and Germany. The research provides a new and potentially more precise way to study beta decay, which involves the emission of an electron and a neutrino. In particular, it could provide physicists with a much better measurement of neutrino mass, which is crucial for understanding physics beyond the Standard Model.

The Standard Model of particle physics assumes that the mass of neutrinos is zero, but in 1998 the Super-Kamiokande detector in Japan showed conclusively that the particles undergo oscillations and therefore must have mass. Knowing the masses of the three known types of neutrino is crucial to understanding physics beyond the Standard Model, but actually measuring the masses is proving extremely difficult. “Currently, we know more about the mass of the Higgs boson, which was discovered two years ago, than we do about the mass of the neutrino, which was discovered 60 years ago,” says Patrick Huber of Virginia Tech in the US.

Studies of neutrino oscillations tell us only that the average neutrino mass must be at least 0.01 eV/c², so researchers are also trying to measure the mass using conservation of energy in beta decay. This is a nuclear process that involves the emission of an electron and a neutrino – strictly speaking, an electron antineutrino. Neutrinos are extremely difficult to detect, so physicists instead measure the energy of the electron and use this to calculate the mass of the neutrino.
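
The logic of that endpoint method fits in one line (standard kinematics, with Q the total energy released in the decay): the decay energy is shared between the electron and the antineutrino, so the electron’s spectrum must stop one neutrino rest-energy short of Q,

```latex
E_e^{\max} \;\simeq\; Q - m_{\bar{\nu}_e}\,c^2
```

A non-zero neutrino mass therefore shows up as a tiny shift and distortion at the very tip of the electron energy spectrum, which is why the electron’s energy must be measured so precisely.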

Upper bound

The best measurements so far give an upper bound on the electron antineutrino mass of 2.05 eV/c². Scientists are assembling a new detector called KATRIN at the Karlsruhe Institute of Technology in Germany. This should measure a neutrino mass as small as 0.2 eV/c² – which could still leave a 20-fold uncertainty in its value. But KATRIN is the size of a building, and further improvements in measurement accuracy by this method would require an even larger, more expensive spectrometer.

Now, physicists at Karlsruhe and several universities in the US have set up the Project 8 collaboration, which is taking a different and possibly more elegant approach to measuring neutrino mass. When an electron passes through a magnetic field, its path curves into a circular orbit, and this causes the electron to emit cyclotron radiation at microwave frequencies. The nature of this radiation depends on the energy of the electron, and therefore measuring this effect could provide a much simpler and more precise technique for measuring the energy than is currently used at KATRIN. The challenge, however, is how to detect the extremely weak femtowatt signal of cyclotron radiation from a single electron.
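
The relation at the heart of the technique is simple; here is a minimal sketch (the ~30 keV electron energy and ~1 T field are assumed, illustrative values for a krypton-83 source, not specifications from the paper):

```python
# The cyclotron frequency of a trapped electron shifts with its energy
# through the relativistic factor gamma, so frequency measures energy.
import math

E_CHARGE = 1.602e-19   # electron charge, C
M_E = 9.109e-31        # electron mass, kg
C = 2.998e8            # speed of light, m/s
KEV = 1.602e-16        # joules per keV

def cyclotron_freq_hz(kinetic_kev, b_tesla):
    gamma = 1 + kinetic_kev * KEV / (M_E * C**2)
    return E_CHARGE * b_tesla / (2 * math.pi * gamma * M_E)

# A ~30 keV electron in a ~1 T field radiates in the microwave band:
print(f"{cyclotron_freq_hz(30.0, 1.0) / 1e9:.1f} GHz")   # ~26 GHz
```

Because the energy information is carried by a frequency rather than an amplitude, the measurement can inherit the precision of microwave frequency metrology, which is what makes the approach attractive despite the femtowatt signal power.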

The Project 8 team has now taken an important step in that direction by being the first to detect this cyclotron radiation. Its prototype tabletop apparatus is located at the University of Washington in Seattle, and it uses a centimetre-sized gas cell that is filled with krypton-83 – a gas that undergoes beta decay. In an actual neutrino-mass experiment, the krypton would be replaced with tritium, but this introduces additional technical and safety issues that will be addressed in the future. The cell is placed inside a superconducting coil to generate a magnetic field. Electrons emitted by the beta decay travel in very long circular paths inside the tiny cell, emitting cyclotron microwave radiation, which is then detected by cooled, ultralow-noise detectors.

Small and simple

The researchers measured the energy of single emitted electrons with an accuracy of 30 eV. While this is far too imprecise to obtain a reliable calculation of the neutrino mass, the team is now working to optimize the device to improve its resolution. “The apparatus that we built was very, very small,” says team member Benjamin Monreal of the University of California, Santa Barbara, “and that made the electronics very simple. We’re now preparing the readout designs, the antenna designs, the amplifier designs and the software to try to scale up.”

Huber, who was not involved in the research, is impressed: “They have successfully completed the first, very crucial step,” he says. “From here on, careful engineering and scaling of the device should get them to a point where they can compete with KATRIN.” However, he says, “there are probably more physics experiments that have failed because of ‘mere engineering challenges’ than for any other reason”.

The research is published in Physical Review Letters.

About the author

Tim Wogan is a science writer based in the UK