Computational Cosmology

As cosmology enters its maturity, it is clear that numerical tools are an integral part of the next steps in our understanding of fundamental science.

Setting the stage

My research adviser at Kenyon College, Dr. Tom Giblin, began using numerical simulations to study preheating, an extremely inhomogeneous period that may have existed directly after the end of inflation. It would have occurred as the energy density of the Universe changed from a potential-dominated state to a radiation-dominated state. In most models, an efficient transition can be obtained through a stage of parametric resonance, during which copious particle production is associated with large gradient energy terms that source gravitational radiation. This radiation immediately decouples from the matter content of the Universe and freely propagates to the present day.
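Concretely, each scalar field $\phi$ in these simulations obeys the standard equation of motion on an expanding (FRW) background:

```latex
\ddot{\phi} + 3H\dot{\phi} - \frac{1}{a^2}\nabla^2\phi + \frac{\partial V}{\partial \phi} = 0
```

Here $a(t)$ is the scale factor and $H = \dot{a}/a$ is the Hubble rate. During parametric resonance the effective mass term oscillates, driving exponential growth of certain field modes; the resulting field gradients are what source the gravitational radiation.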

Along with Richard Easther (Yale University) and Eugene Lim (Cambridge University/King's College London), Dr. Giblin designed an algorithm for robustly simulating various early Universe models and predicting the associated gravitational power spectra.

From 2006-2008, Dr. Giblin wrote and maintained the numerical code and adapted LatticeEasy (a publicly available field evolution code) to communicate with our software. This software is one of a few algorithms for the explicit calculation of gravitational wave backgrounds from cosmological scenarios and has become a standard against which newer software is measured.

Identifying barriers

Advances in computer science have given us the ability to use scientific computing methods to simulate complex systems and achieve a better understanding of cosmology. In the last decade, Dr. Giblin and members of his lab group developed a new CPU-optimized C++ program known as GABE (Grid And Bubble Evolver). GABE evolves scalar fields (among other uses) on an expanding background.

The universe is MASSIVE, and that means traditional CPU computing methods are limited to low-resolution simulations. This becomes a barrier to research when we want to investigate very small structures present in certain cosmological situations. I joined Dr. Giblin's lab group to develop a GPU-accelerated version of GABE. This meant converting the sequential/OpenMP algorithm into highly parallel, robust CUDA C/C++ code. Over the summer of 2014, I developed, integrated, and delivered GPU-accelerated code that enables GABE to achieve speed-ups of over two orders of magnitude. This speed-up allows astrophysicists and cosmologists to investigate areas of computational cosmology that were previously thought impossible.



Investigating: Oscillons

Once I had delivered the code, I switched my focus towards using my accelerated code to investigate the phenomenon that prompted Dr. Giblin to hire me. Oscillons are localized, oscillatory, stable solutions to nonlinear equations of motion — put simply, a concentrated lump of energy that resonates and does not go away. In an expanding background oscillons do lose energy, but at a rate that is exponentially small when the expansion rate is slow.
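As an illustration (a generic textbook example, not necessarily the model studied in this work), oscillons arise in potentials that open up — become shallower than quadratic — away from the minimum, for instance

```latex
V(\phi) = \frac{1}{2} m^2 \phi^2 - \frac{\lambda}{4} \phi^4 + \frac{g}{6} \phi^6
```

The negative quartic term makes large-amplitude oscillations slightly "cheaper" than the harmonic approximation predicts, which is what allows a localized blob of field to oscillate coherently instead of dispersing.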

The two clips below display 2D cross-sections of isolated oscillons. The left slice in each video shows energy values of the radiation-dominated scalar field, and the right slice shows energy values of the matter-dominated scalar field. With expansion, the energy resonates in place rather than dissipating as we would expect. We expect dissipation because the cosmological principle states that the universe is homogeneous (the same everywhere) and isotropic (having no preferred direction).

no expansion

with expansion

Falling short

Numerically, a universe that starts with (almost) thermal initial conditions will cool to a final state where a significant fraction of the energy of the universe -- on the order of 50% -- is stored in oscillons. If this phenomenon persists in realistic models, oscillons may have cosmological consequences.

Previous versions of GABE fell short here: a single run could take three months to generate results. My GPU-accelerated version completes the same run overnight.



Leveraging the GPU

Using C++ on Linux, I taught myself to accelerate computationally dense processes by writing CUDA code. I developed, debugged, and integrated CUDA C/C++ code, applying parallel-processing techniques commonly used in video-processing algorithms. Seeing my impact on the research team cultivated my passion for leveraging state-of-the-art algorithms to accelerate computational processes and break through pre-existing barriers.