My Minimalism

I seem to have developed (both consciously and subconsciously) a rather minimalistic system of movement and behaviour. I am minimalist in my writing style, true, but also in many other ways. Minimalism's appeal for me seems to come inherently from my autism. Where there are more (self-imposed) restrictions, and a reduced range of freedom, I feel, ironically, more at home.

Tension relaxes me.

I am far more comfortable in clothing which is rather restrictive, but also rather aesthetic. It means there is an imperative on my part to maintain that aesthetic. It gives my self-awareness something to be aware of. Now, as a dancer, I have learned that the best form of movement for me is, also, a more restrictive form. The clothing I have recently adopted for dance (in the style of animation/robotics) does not allow me to move very much. One would think such restriction could only hinder a dance performance, but actually I have found the opposite: when precision is most necessary, you want to be focused, and indeed you want to make sure that you are conditioned to be precise even when allowed the full range of movement.

Dance style – and style in general – is not about what you can do, but about what you choose to do. Minimalism reflects the quality of choice, whereas maximalism reflects the quantity of choices available. I simply choose the option where my judgement in decision making is validated.

As for my writing – well, I could of course discuss that, but I think I shall let this post be its own artefact.

I feel you can get the joke if you read between my lines, for that is where a minimalist writes most elaborately.       

Is There Really a Cosmological Constant? Or Is Dark Energy Changing with Time?

I wanted to talk about dark energy today; however, I feel that an article written for Forbes by Dr Sabine Hossenfelder (one of my favourite physics writers for the public) does a brilliant job of depicting some of the current views on its experimental status in physics. I thought I would share that with you first, and I shall write further on dark energy in later posts.

[Image: The history of the Universe tells the story of a race between gravitation and expansion, until about six billion years ago, when dark energy becomes important.]

According to physics, the universe and everything in it can be explained by just a handful of equations. They're difficult equations, all right, but their simplest feature is also the most mysterious one. The equations contain a few dozen parameters that are – for all we presently know – unchanging, and yet these numbers determine everything about the world we inhabit. Physicists have spent much brainpower questioning where these numbers come from, whether they could have taken any other values than the ones we observe, and whether exploring their origin is even within the realm of science.

One of the key questions when it comes to these parameters is whether they are really constant, or whether they are time-dependent. If they vary, then their time-dependence would have to be determined by yet another equation, which would change the entire story that we currently tell about our Universe. If even one of the fundamental constants isn’t truly a constant, it would open the door to an entirely new subfield of physics.

[Image: Representative of the energy inherent to space itself, the cosmological constant (or dark energy) is thought to arise from the zero-point energy of empty space. It is assumed to be a constant, but that's not necessarily true.]

Perhaps the best-known parameter of all is the cosmological constant: the zero-point energy of empty space itself. It is what causes the universe’s expansion to accelerate. The cosmological constant is usually assumed to be, well, a constant. If it isn’t, it can be more generally referred to as ‘dark energy.’ If our current theories for the cosmos are correct, our universe will expand forever into a cold and dark future.
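
For reference (the article itself does not quote them), the role of the cosmological constant can be made explicit in the standard Friedmann equations: a positive Λ contributes a constant term that eventually dominates over matter and curvature, making the expansion accelerate.

```latex
% Friedmann equations: a positive cosmological constant \Lambda adds a
% constant term that eventually dominates matter (\rho) and curvature (k),
% making \ddot{a} > 0, i.e. accelerated expansion.
\[
H^2 \equiv \left(\frac{\dot a}{a}\right)^2
  = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
\frac{\ddot a}{a}
  = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.
\]
```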

The value of the cosmological constant is infamously the worst prediction ever made using quantum field theory; the math says it should be 120 orders of magnitude larger than what we observe. But that the cosmological constant has a small, non-zero value that causes the Universe to accelerate is extremely well established by measurement. The evidence is so thoroughly robust that a Nobel Prize was awarded for its discovery in 2011.
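
As a rough back-of-envelope check of that mismatch (my own sketch, not part of the original article; the exact count of orders of magnitude depends on the energy cutoff one assumes for the quantum field theory estimate):

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34  # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s

# Naive QFT estimate: vacuum energy density at the Planck scale,
# rho_Planck = c^5 / (hbar G^2), expressed as a mass density in kg/m^3.
rho_planck = c**5 / (hbar * G**2)

# Observed dark-energy density: roughly 69% of the critical density,
# assuming H0 ~ 67.7 km/s/Mpc (Planck-era values).
H0 = 67.7e3 / 3.086e22                   # Hubble constant, s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)
rho_lambda = 0.69 * rho_crit

print(f"predicted/observed ~ 10^{math.log10(rho_planck / rho_lambda):.0f}")
# -> about 10^123 with a Planck-scale cutoff: the worst prediction in physics
```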

[Image: The construction of the cosmic distance ladder involves going from our Solar System to the stars to nearby galaxies to distant ones. Each "step" carries along its own uncertainties; the Type Ia supernova step is the one that resulted in the 2011 Nobel Prize.]

Exactly what the value of the cosmological constant is, though, is controversial. There are different ways to measure the cosmological constant, and physicists have known for a few years that the different measurements give different results. This tension in the data is difficult to explain, and it has so far remained unresolved.

One way to determine the cosmological constant is by using the cosmic microwave background (CMB). The small temperature fluctuations between different locations and scales in the CMB encode density variations in the early universe and the subsequent changes in the radiation streaming from those locations. By fitting the CMB's power spectrum with the parameters that determine the expansion of the universe, physicists get a value for the cosmological constant. The most accurate such measurement currently comes from the data of the Planck satellite.

[Image: Three different types of measurements (distant stars and galaxies, the large-scale structure of the Universe, and the fluctuations in the CMB) tell us the expansion history of the Universe.]

Another way to determine the cosmological constant is to deduce the expansion of the universe from the redshift of the light from distant sources. This is the way the Nobel Prize winners made their original discoveries in the late 1990s, and the precision of this method has since been improved. In addition, there are now multiple ways to make this measurement, and the results are all in general agreement with one another.
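
To sketch how redshift measurements tie into the expansion history (a minimal illustration under an assumed flat ΛCDM model with round-number parameters, not any collaboration's actual pipeline), one integrates the Hubble rate to get a luminosity distance, and from it the distance modulus that supernova surveys fit:

```python
import math
from scipy.integrate import quad

# Assumed flat Lambda-CDM parameters (illustrative round numbers only).
H0 = 70.0        # Hubble constant, km/s/Mpc
Omega_m = 0.3    # matter density fraction
Omega_L = 0.7    # dark-energy density fraction
c = 299792.458   # speed of light, km/s

def hubble(z):
    """Hubble rate H(z) in km/s/Mpc for a flat Lambda-CDM universe."""
    return H0 * math.sqrt(Omega_m * (1 + z)**3 + Omega_L)

def luminosity_distance(z):
    """d_L = (1 + z) * c * integral of dz'/H(z') from 0 to z, in Mpc."""
    comoving, _ = quad(lambda zp: c / hubble(zp), 0.0, z)
    return (1 + z) * comoving

# Distance modulus: the quantity Type Ia supernova surveys actually fit.
z = 0.5
d_L = luminosity_distance(z)
mu = 5 * math.log10(d_L * 1e6 / 10)  # convert d_L from Mpc to parsecs
print(f"z = {z}: d_L ~ {d_L:.0f} Mpc, distance modulus ~ {mu:.2f}")
```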

But these two ways to determine the cosmological constant give results that differ with a statistical significance of 3.4-σ. That corresponds to a probability of less than one in a thousand that the difference is due to random fluctuations in the data, though admittedly that is not strong enough to rule out a statistical fluke. Multiple explanations for this have since been proposed. One possibility is that it's a systematic error in the measurement, most likely in the CMB measurement from the Planck mission. There are reasons to be skeptical, because the tension goes away when the finer structures (the large multipole moments) of the data are omitted. In addition, incorrect foreground subtractions may be continuing to skew the data, as they did in the infamous BICEP2 announcement. For many astrophysicists, these are indicators that something's amiss either with the Planck measurement or the data analysis.
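
For readers who want the sigma-to-probability conversion made explicit, a quick aside of my own (assuming Gaussian statistics):

```python
from scipy.stats import norm  # standard normal distribution

# Convert a significance quoted in Gaussian sigmas into a p-value.
sigma = 3.4
p_two_sided = 2 * norm.sf(sigma)  # sf(x) is the upper-tail probability

print(f"{sigma} sigma -> p ~ {p_two_sided:.1e}")   # ~6.7e-4, < 1/1000
print(f"5.0 sigma -> p ~ {2 * norm.sf(5.0):.1e}")  # ~5.7e-7: the usual
# threshold particle physicists demand before claiming a discovery
```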

[Image, credit ESO: One way of measuring the Universe's expansion history involves going all the way back to the first light we can see, when the Universe was just 380,000 years old. The other ways don't go backwards nearly as far, but also have a lesser potential to be contaminated by systematic errors.]

But maybe it’s a real effect after all. In this case, several modifications of the standard cosmological model have been put forward. They range from additional neutrinos to massive gravitons to actual, bona fide changes in the cosmological constant.

The idea that the cosmological constant changes from one place to the next is not an appealing option because this tends to screw up the CMB spectrum too much. But currently, the most popular explanation for the data tension in the literature seems to be a time-varying cosmological constant.

[Image: The different ways dark energy could evolve into the future. It's assumed that it will remain constant, but if it increases in strength (ending in a Big Rip) or reverses sign (leading to a Big Crunch), other fates are possible.]

A group of researchers from Spain, for example, claims a stunning 4.1-σ preference for a time-dependent cosmological constant over a truly constant one. This claim seems to have been widely ignored, and indeed one should be cautious. They test for a very specific time-dependence, and their statistical analysis does not account for other parameterizations that might have been tried instead. (The theoretical physicist's variant of post-selection bias.) Moreover, they fit their model not only to the two above-mentioned datasets, but to a whole bunch of others at the same time. This makes it hard to tell why their model seems to work better. A couple of cosmologists whom I asked about this remarkable result and why it has been ignored complained that the Spanish group's method of data analysis is non-transparent.

[Image: Any configuration of background points of light (stars, galaxies or clusters) will be distorted due to the effects of foreground mass via weak gravitational lensing. Even with random shape noise, the signature is unmistakeable.]

Be that as it may, just when I put the Spaniards’ paper away, I saw another paper that supported their claim with an entirely independent study based on weak gravitational lensing. Weak gravitational lensing happens when a foreground galaxy distorts the image shapes of more distant, background galaxies. The qualifier ‘weak’ sets this effect apart from strong lensing, which is caused by massive nearby objects – such as black holes – and deforms point-like sources to arcs, rings, and multiple images. Weak gravitational lensing, on the other hand, is not as easily recognizable and must be inferred from the statistical distribution of the ellipticities of galaxies.
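
A toy numerical sketch of that statistical inference (entirely my own illustration with made-up numbers, not the actual lensing pipeline): intrinsic ellipticities are randomly oriented and average to zero, so a small coherent shear reveals itself as a non-zero ensemble mean once enough galaxies are averaged.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model: each observed ellipticity is a galaxy's random intrinsic
# ellipticity plus a small coherent shear from foreground mass.
n_galaxies = 100_000
shear = 0.01         # assumed lensing shear (illustrative)
shape_noise = 0.25   # typical per-galaxy intrinsic scatter

intrinsic = rng.normal(0.0, shape_noise, n_galaxies)  # averages to ~0
observed = intrinsic + shear

# Invisible in any single galaxy, the shear emerges as an ensemble mean.
estimate = observed.mean()
error = observed.std() / np.sqrt(n_galaxies)
print(f"recovered shear: {estimate:.4f} +/- {error:.4f} (true: {shear})")
```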

The Kilo Degree Survey (KiDS) has gathered and analyzed weak lensing data from about 15 million distant galaxies. While their measurements are not sensitive to the expansion of the universe, they are sensitive to the density of dark energy, which affects the way light travels from the galaxies towards us. This density is encoded in a cosmological parameter imaginatively named σ_8, which measures the amplitude of the matter power spectrum on scales of 8 Mpc/h, where h is related to the Hubble expansion rate. Their data, too, is in conflict with the CMB data from the Planck satellite.
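
For the curious, σ_8 has a compact standard definition (quoted here for reference): it is the root-mean-square matter fluctuation in spheres of radius R = 8 h⁻¹ Mpc, obtained by smoothing the matter power spectrum P(k) with a top-hat window,

```latex
% sigma_8: rms matter fluctuation in spheres of radius R = 8 Mpc/h,
% where P(k) is the matter power spectrum and W is a top-hat window.
\[
\sigma_8^2 = \frac{1}{2\pi^2}\int_0^\infty dk\, k^2\, P(k)\, W^2(kR),
\qquad
W(x) = \frac{3\left(\sin x - x\cos x\right)}{x^3},
\quad R = 8\,h^{-1}\,\mathrm{Mpc}.
\]
```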

[Image: The overlay in the lower left-hand corner represents the distortion of background images due to gravitational lensing expected from the dark matter 'haloes' of the foreground galaxies, indicated by red ellipses. The blue polarization 'sticks' indicate the distortion. This reconstruction accounts for both shear and weak lensing in the Hubble Deep Field.]

The members of the KiDS collaboration have tried out which changes to the cosmological standard model work best to ease the tension in the data. Intriguingly, it turns out that ahead of all explanations, the one that works best has the cosmological constant changing with time. The change is such that the effects of accelerated expansion are becoming more pronounced, not less.

In summary, it seems increasingly unlikely the tension in the cosmological data is due to chance. Cosmologists are justifiably cautious, and most of them bet on a systematic problem with either the Planck data or, alternatively, with the calibration of the cosmic distance ladder. However, if these measurements receive independent confirmation, the next best bet is on time-dependent dark energy. It won’t make our future any brighter, though. Even if dark energy changes with time, all indications point towards the universe continuing to expand, forever, into cold darkness.

Hawking Radiation: A Brief Introduction

Hawking radiation (a phenomenon first theorised by Professor Stephen Hawking in a 1975 paper entitled ‘Particle Creation by Black Holes’ [1]) is what happens when virtual particles form near the horizon of a black hole. Hawking radiation is not radiation emitted from within the black hole itself, but it effectively behaves in a similar way to black-body radiation.

A virtual particle is a temporary violation of the law of conservation of energy, in that a particle is created “from nothing.” This is caused by zero-point energy fluctuations in the quantum foam. However, because the particle exists for too short a time to be measured, it gets away with it as far as physics is concerned. The virtual particle is in reality an entangled system of a particle-antiparticle pair, which annihilate each other almost instantaneously.
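
The loophole being invoked here is the energy-time uncertainty relation (stated for reference): an energy fluctuation ΔE can persist unobserved only for a time Δt of order ℏ/ΔE,

```latex
% Energy-time uncertainty relation: 'borrowed' energy \Delta E may
% persist for at most a time of order \hbar / \Delta E.
\[
\Delta E \,\Delta t \gtrsim \frac{\hbar}{2}.
\]
```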


These ‘quantum jitters’, as they are sometimes called, snatch gravitational energy away from the black hole in order to occur. Despite there being only a tiny distance between the pair, the black hole’s gravitational field has a remarkably steep gradient, and the two particles may feel drastically different tidal forces acting on them. One of them may escape while the other may not. Upon escaping the black hole’s clutches, a free particle will become realised through ‘measurement’ as it interacts with the universe around it. Because of this, the virtual gravitational energy it stole also becomes real. The black hole’s gravitational energy loss converts to the loss of a small amount of its effective mass. Yes, it does absorb one of the particles (which also becomes ‘real’ through entanglement with its twin), but it paid for two.

A lonely black hole, starved of food, will eventually evaporate away in this manner. Thus the energy conservation law is not violated after all.
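
To put rough numbers on "eventually" (my own sketch using the standard textbook formulas, which this post does not itself quote): the Hawking temperature falls inversely with the black hole's mass, and the evaporation time grows as the cube of the mass.

```python
import math

# Physical constants (SI units)
G = 6.674e-11      # gravitational constant
hbar = 1.0546e-34  # reduced Planck constant
c = 2.998e8        # speed of light
k_B = 1.381e-23    # Boltzmann constant
M_sun = 1.989e30   # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature T_H = hbar c^3 / (8 pi G M k_B), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """Evaporation time t = 5120 pi G^2 M^3 / (hbar c^4), in seconds."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

T = hawking_temperature(M_sun)
t_years = evaporation_time(M_sun) / 3.156e7  # seconds -> years
print(f"T_H ~ {T:.1e} K")            # ~6e-8 K, far colder than the CMB
print(f"t_evap ~ {t_years:.1e} yr")  # ~2e67 years: 'eventually' indeed
```

A stellar-mass black hole is therefore colder than the cosmic microwave background, and today it absorbs more radiation than it emits; net evaporation only begins once the universe has cooled below the hole's Hawking temperature.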

The Incongruous Nature of Worldviews

I have noted a fundamental (inverse) relationship between the mindset of people who become religious fanatics, and that of people who become scientists.

We all want a worldview which is both correct and stable. But there is an unfortunate kind of uncertainty here: in order for your worldview to evolve, you must first renounce aspects of its current state. This makes it unstable. In order for your worldview to remain stable, it must also remain fixed. This makes it prone to inconsistencies.

The scientist strives to reduce their ignorance of the world, and is more concerned with correctness than stability. The illusion of stability comes about due to the rigour involved in building scientific knowledge.

The fanatic strives to be certain of the world, and is more concerned with stability than correctness. The illusion of correctness comes about due to the authority of the spiritual leader and the surety with which the sacred texts disclose their knowledge (gnosis).

The scientist has a living worldview, and it comes from the living; the world. The fanatic has a dead one, and it comes (apparently) from the non-living; outside the world.

Like a tree which appears to stand solid and unmoving in the short term, the scientist’s worldview is constantly growing in both height and girth over time; if it were to become petrified – rigid, as in the fanatic’s worldview – it would, like a rock, erode away instead.

Though the petrified tree may remain for a long time, it will ultimately be eroded into dust. Though the living tree may be less permanent, it will in the end triumph: for as any individual tree may eventually die, its progeny will live on.

In the same way, science has a true legacy, but cults, religions and fixed belief systems simply break and crumble into more and more pieces over time, becoming less and less coherent, any individual offspring less and less significant, and indeed even less stable.

This is my view.