The intergalactic medium (IGM) contains most of the baryonic matter of the Universe and serves as a suitable environment for probing its thermal history. The crucial moment in IGM evolution is the Epoch of Reionization, corresponding to the transition from a neutral to an ionized IGM. However, due to observational limitations, this period is still not well understood. In this thesis, we focus on constraining the IGM thermal history using Lyman-α forest data. This method is applicable over a wide range of temperatures, densities, and ionization fractions of the cosmic gas at z ≈ 2–5. Observations show that the longitudinal flux power spectrum of the Lyman-α forest exhibits a cut-off at small scales. This cut-off is caused by thermal Doppler broadening, peculiar velocities along the line of sight (LOS), hydrogen pressure smoothing, and warm dark matter. The first two effects act only along the LOS, while the last two affect all spatial directions. To separate the one-dimensional and three-dimensional effects, we used the method of close quasar pairs, which studies the correlations between the Lyman-α forests of close quasar pairs. We used the Kolmogorov-Smirnov test to analyze the differences between distributions of the phase difference, which characterizes the correlation between the two forests. The calculations were performed for various thermal histories, IGM parameters, LOS separations, and wavenumbers, and account for different effects (Doppler broadening and peculiar velocities). Our results indicate that this method can distinguish various thermal histories regardless of the IGM thermal state and the one-dimensional effects.
Moreover, at separations of the order of the pressure-broadening scale, there is a prominent feature caused by the different influence of pressure smoothing at large and small scales. In addition, this simple and powerful approach has the potential to distinguish scenarios with warm dark matter.
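The two-sample Kolmogorov-Smirnov comparison described above can be sketched with SciPy. The phase-difference samples below are synthetic stand-ins (von Mises draws with made-up concentration parameters), not values derived from actual quasar-pair spectra; the point is only the mechanics of comparing two phase-difference distributions:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical phase-difference samples (radians) for two thermal histories.
# Real samples would come from Lyman-alpha forests of close quasar pairs.
theta_hot = rng.vonmises(mu=0.0, kappa=2.0, size=5000)   # tighter correlation
theta_cold = rng.vonmises(mu=0.0, kappa=1.0, size=5000)  # broader distribution

stat, p_value = ks_2samp(theta_hot, theta_cold)
# A small p-value means the two thermal histories produce distinguishable
# phase-difference distributions at this separation and wavenumber.
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```

A single KS statistic is computed per (separation, wavenumber) bin; repeating the test across bins is what maps out where the thermal histories are distinguishable.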
Acoustic neutrino detection is a promising method to observe ultra-high-energy neutrinos. These neutrinos, with energies above 10$^{18}$ eV, have a relatively low expected flux, so a large instrumented volume is required. Current estimates call for an instrumented volume of around O(100) km$^3$ using hydrophones as the detection modules. Measuring ultra-high-energy neutrinos would provide crucial information on extragalactic sources, the GZK cut-off, and the C$\nu$B. In this work, an event detection algorithm based on CLIQUE, a subspace clustering algorithm, was developed. Furthermore, a first look at event reconstruction was taken. Ultra-high-energy neutrino events with energies of roughly 5 $\cdot$ 10$^{19}$ eV were approximated using their characteristic pancake shape. An instrumented volume of 4 km$^3$ was simulated. The study has shown that a hydrophone density of 400 per km$^3$ would provide the desired detection efficiency of near 100\%. At these values a noise rate of 5 Hz can be suppressed using an amplitude criterion in addition to a causality one; a causality criterion alone suppresses a noise rate of 0.5-0.6 Hz. Furthermore, it was found that a configuration utilizing multiple detector blocks would maximize the effective volume of the detector, and that the hydrophones should be designed for sensitivity in the range of 0-15 kHz. The reconstruction algorithm tested did not provide the desired results; we recommend developing an algorithm specifically for acoustic neutrino detection. We found that the design of the detector is a balancing act between detection efficiency, detector size, and noise suppression.
Finally, this study demonstrates the feasibility of a clique-based approach for event detection in ultra-high-energy neutrino detection. However, we recommend the development of noise suppression algorithms at the single-waveform level, as suppression of a noise rate of 0.5-0.6 Hz or 5 Hz, depending on the match criteria, is insufficient based on previous research, in which noise rates can be as high as 26 Hz even in low-noise environments. Machine learning approaches show the most promise here.
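The matching idea behind the causality and amplitude criteria can be illustrated with a brute-force sketch: keep hits above an amplitude threshold, then require every pair in a candidate group to be causally compatible, i.e. their time difference must not exceed the sound travel time between the two hydrophones (a clique in the causality graph). This is not the CLIQUE subspace-clustering algorithm used in the thesis, and the hit times, positions, and thresholds below are invented for illustration:

```python
import numpy as np
from itertools import combinations

SOUND_SPEED = 1500.0  # m/s, nominal speed of sound in sea water

def causally_compatible(hit_a, hit_b):
    """Two hits can stem from one acoustic event only if their time
    difference is at most the sound travel time between the hydrophones."""
    (ta, pa), (tb, pb) = hit_a, hit_b
    dist = np.linalg.norm(np.array(pa, float) - np.array(pb, float))
    return abs(ta - tb) <= dist / SOUND_SPEED

def event_candidates(hits, amplitudes, amp_threshold, min_clique=3):
    """Toy stand-in for clique-based matching: apply the amplitude cut,
    then search (brute force) for the largest group of pairwise causally
    compatible hits of at least min_clique members."""
    kept = [h for h, a in zip(hits, amplitudes) if a >= amp_threshold]
    for size in range(len(kept), min_clique - 1, -1):
        for group in combinations(kept, size):
            if all(causally_compatible(x, y) for x, y in combinations(group, 2)):
                return list(group)
    return []

# Hits as (time [s], hydrophone position [m]); the last hit is noise,
# arriving far too late to be causally linked to the others.
hits = [(0.00, (0, 0, 0)),
        (0.50, (1000, 0, 0)),
        (0.40, (0, 800, 0)),
        (9.00, (0, 0, 100))]
amps = [1.2, 1.1, 0.9, 1.5]
event = event_candidates(hits, amps, amp_threshold=0.8)
```

A production implementation would avoid the exponential clique search; the sketch only shows why a causality criterion alone passes any loud noise hit that happens to be time-compatible, motivating the additional amplitude criterion.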
In our research project, we conducted an analysis of the impact of dark matter haloes on the motion of the Milky Way. Our study focused on dark matter haloes located within a radius of 200 megaparsecs (Mpc) of the Milky Way. The cosmological framework we implemented was the Lambda Cold Dark Matter model. Our primary objective was to determine the peculiar acceleration of the Milky Way and derive insights about its motion. To achieve this, we compared the peculiar acceleration to the Hubble rate, a significant cosmological parameter, as a reference point. By means of our study, we aimed to determine whether the Milky Way would eventually reach the Great Attractor or undergo a change in its direction of motion. Additionally, we generated an all-sky structure map for the haloes to explore the density distribution within each region. This analysis allowed us to examine the concentration of dark matter throughout the universe inside 200 Mpc of the Milky Way. In our research, we utilized data obtained from the SIBELIUS-DARK project, which provided a robust scientific basis for our study.
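The core quantity of such a study, the Newtonian peculiar acceleration at the Milky Way's position sourced by a halo catalogue, can be sketched as follows. The two haloes below are hypothetical, not SIBELIUS-DARK data; the unit choice makes the result directly comparable to the Hubble-rate reference scale H0² · d (H0 in km/s/Mpc, d in Mpc):

```python
import numpy as np

# Gravitational constant in Mpc (km/s)^2 / Msun, so accelerations come out
# in (km/s)^2 / Mpc, the same units as H0^2 * d.
G = 4.30091e-9

def peculiar_acceleration(positions_mpc, masses_msun):
    """Newtonian peculiar acceleration at the origin (taken as the
    Milky Way) sourced by a catalogue of dark matter haloes."""
    r = np.asarray(positions_mpc, dtype=float)
    m = np.asarray(masses_msun, dtype=float)
    d = np.linalg.norm(r, axis=1)
    # Each halo pulls the Milky Way toward itself: G m r / |r|^3.
    return G * np.sum(m[:, None] * r / d[:, None] ** 3, axis=0)

# Toy catalogue: two hypothetical haloes at 100 Mpc and 150 Mpc.
g = peculiar_acceleration([[100.0, 0.0, 0.0], [0.0, 150.0, 0.0]],
                          [1.0e15, 5.0e14])
```

Summing this vector over a full catalogue out to 200 Mpc, and comparing its magnitude and direction to the Hubble-flow reference, is the kind of analysis the abstract describes.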
Our research is related to testing dark energy/modified gravity theories. We determine the positivity bounds on effective field theories with spontaneously broken Lorentz invariance. We consider all the operators in a low-energy effective field theory (EFT) approach and derive the conditions on the EFT coefficients under which the theory is healthy (free of instabilities). These conditions are called positivity bounds. The positivity bounds can in turn constrain cosmological models. We mainly follow the paper "Positivity bounds on effective field theories with spontaneously broken Lorentz invariance" by Paolo Creminelli, Oliver Janssen, and Leonardo Senatore, in which the positivity bounds are calculated from the two-point correlation functions of conserved quantities such as the Noether current and the stress-energy tensor. We then show how this mechanism for finding positivity bounds can be applied to real cosmological models.
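For orientation, the textbook Lorentz-invariant example (Adams et al. 2006) shows what a positivity bound looks like; it is an illustration only, not the Lorentz-violating computation of the thesis. For a shift-symmetric scalar with
\[
\mathcal{L} = \frac{1}{2}(\partial\phi)^2 + \frac{c}{\Lambda^4}\left[(\partial\phi)^2\right]^2 ,
\]
the forward $2\to 2$ amplitude grows as $\mathcal{A}(s,t=0) \propto c\, s^2/\Lambda^4$. A twice-subtracted dispersion relation for the crossing-symmetric amplitude gives
\[
\left.\frac{d^2\mathcal{A}}{ds^2}\right|_{s=0} = \frac{4}{\pi}\int_{s_0}^{\infty} ds\, \frac{\mathrm{Im}\,\mathcal{A}(s,0)}{s^3} > 0 ,
\]
since $\mathrm{Im}\,\mathcal{A} > 0$ by the optical theorem; hence $c > 0$ in any healthy UV completion. The Lorentz-violating case replaces this scattering-amplitude argument with bounds derived from two-point functions of conserved currents, as described above.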
This thesis aims to alleviate the final parsec problem by investigating the hypothetical intermediate-mass black hole environment lying at the cores of galaxies, a model first proposed by Ebisuzaki et al. (2001) [1]. Although intermediate-mass black holes remain undetected, their nature could be the key to understanding supermassive black hole formation. If they are indeed present at the hearts of galaxies, their mutual interactions encourage supermassive black hole-intermediate-mass black hole merging events. Such mergers bypass the theoretical constraints placed by binary dynamics and the Eddington limit, allowing supermassive black holes to grow to their colossal sizes, and could help explain their existence in the early stages of the Universe's life. We investigate this model using both a Newtonian (Hermite) and a post-Newtonian (HermiteGRX) algorithm. The post-Newtonian algorithm incorporates terms up to order 2.5, allowing it to model gravitational wave emission, which acts as an energy sink and encourages merging events. In addition to comparing the results of the two algorithms, we forecast the corresponding gravitational wave events. More specifically, assuming a steady intermediate-mass black hole infall rate of one every 7 Myr, we predict a population of N_IMBH = 15-20 residing in the inner 0.4 pc of the Milky Way galaxy. In turn, the future gravitational wave interferometer LISA and the proposed µAres will be able to detect up to 926 supermassive black hole-intermediate-mass black hole merging events per year out to a redshift z ≤ 3.
This value is three orders of magnitude larger than those found in the literature ([2]; [3]; [4]; [5]), owing to the lack of intermediate-mass black hole observations, which leaves a large parameter space open in such analyses.
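The role of the 2.5PN (gravitational-wave) term can be illustrated with the classic Peters (1964) coalescence time for a circular binary. The masses and separation below are hypothetical, chosen only to show why gravitational-wave emission alone stalls at sub-parsec separations, i.e. the final parsec problem that the intermediate-mass black hole environment is invoked to solve:

```python
# Physical constants in SI units.
G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m/s
MSUN = 1.989e30     # kg
PC = 3.086e16       # m
YR = 3.156e7        # s

def merger_time_circular(m1, m2, a):
    """Peters (1964) gravitational-wave coalescence time for a circular
    binary: t = 5 c^5 a^4 / (256 G^3 m1 m2 (m1 + m2))."""
    return 5 * C**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2))

# Hypothetical SMBH-IMBH pair: 4e6 Msun + 1e4 Msun at 0.01 pc separation.
t = merger_time_circular(4.0e6 * MSUN, 1.0e4 * MSUN, 0.01 * PC)
print(f"coalescence time: {t / YR:.2e} yr")
```

The strong a⁴ scaling means gravitational radiation alone takes far longer than a Hubble time from such separations; additional hardening mechanisms, such as interactions with infalling intermediate-mass black holes, are needed to close the gap.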