In this research, the seasonal variation of the atmospheric muon rate at the KM3NeT/ARCA detector was studied in order to determine the temperature correlation coefficient αT. KM3NeT is a cubic-kilometre neutrino telescope consisting of two large-volume water-Cherenkov detectors, ARCA and ORCA, located in the Mediterranean Sea. The Cherenkov radiation emitted by high-energy muons travelling through the seawater is detected by an assembly of 31 photomultiplier tubes (PMTs) housed inside a spherical Digital Optical Module (DOM). Eighteen such DOMs are connected to form a Detection Unit (DU). For the ARCA detector, these DUs are anchored to the seafloor at around 3.5 km depth and extend vertically to about 2.7 km below sea level. Taking advantage of correlations between hits registered at different PMTs within a DOM, the atmospheric muon rate at each DOM can be measured. Furthermore, the difference in height of the DOMs within each DU makes it possible to exploit the depth dependence of the atmospheric muon flux and thereby precisely determine the muon rate at the ARCA detector, allowing rate variations of a few percent to be detected. Additionally, the effective temperature is determined through a weighted integral of the available atmospheric temperature data above the geographic location of the ARCA detector. Comparing the atmospheric muon rate and the effective temperature during the data-taking period from 26.09.2021 to 1.06.2022, a temperature correlation coefficient of αT = 1.166 ± 0.128 was established, slightly above the theoretically predicted value of 0.86. To verify the robustness of the proposed method for determining the rate-temperature correlation, cross-checks were performed with Monte Carlo files, background signals, and the depth dependence as a function of time; all returned the expected results. However, when the same method was applied to a smaller data set covering the data-taking period between 12.05.2021 and 2.09.2021, no significant correlation between the atmospheric muon rate and the effective temperature could be established, and the cross-checks on this data set did not confirm expectations. This is most likely because the small data set cannot accurately capture the long-term seasonal effect; nevertheless, these results should not be neglected. Therefore, while the employed method returned promising results for the larger data set, more investigation into the efficiency determination and the uncertainties on the fitted slope is needed to confidently verify the reliability of the final result. It is further suggested to revisit this study once a consistent data set spanning at least one year is available.
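For context, the rate-temperature correlation quoted above follows the standard definition used in seasonal-variation studies of atmospheric muons; the exact weighting adopted in this work is not reproduced here:

\[
\frac{\Delta R_\mu}{\langle R_\mu \rangle} = \alpha_T \, \frac{\Delta T_\mathrm{eff}}{\langle T_\mathrm{eff} \rangle},
\qquad
T_\mathrm{eff} = \frac{\int \mathrm{d}X\, T(X)\, W(X)}{\int \mathrm{d}X\, W(X)},
\]

where $R_\mu$ is the muon rate, $T(X)$ the atmospheric temperature at slant depth $X$, and $W(X)$ a weight encoding where in the atmosphere the detected muons are produced.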
Observations show that magnetic fields are present and dynamically important in all observed galaxies. It is now well established that these fields were amplified by the magnetic dynamo, although the details are still unclear. The only practical way to study complex setups like galaxy formation is through numerical simulations. However, including magnetic fields in simulations is a nontrivial task because of the solenoidal constraint $\nabla\cdot\mathbf{B}=0$. In this thesis, we use a newly implemented smoothed-particle magnetohydrodynamics (SPMHD) scheme with a divergence-cleaning module in SWIFT. We run a galaxy-formation simulation in the `cooling halo' setup and a galaxy-evolution simulation in the `isolated galaxy' setup, both without stellar feedback. Although the SPMHD implementation was thoroughly validated on standard tests, we find that it struggles with these setups. We conclude that the issues arise from the density contrast between the forming disk and the halo: the divergence cleaning struggles to maintain a low divergence error and sometimes even increases it. This results in a spurious dynamo and, in some cases, a ``numerical explosion'' in the internal and turbulent energies. We find that higher spatial and temporal resolution helps to resolve the numerical issues, but it makes the computational cost prohibitive for larger cosmological runs. We propose ideas that could fix the problem without a high computational cost, such as a more aggressive time-step limiter near the problematic high-density-contrast region. Finally, we find no dynamo in the runs without numerical issues.
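As background for the divergence-cleaning module mentioned above, here is a minimal sketch of a Dedner-type hyperbolic/parabolic cleaning scheme, the common approach in SPMHD (the SWIFT implementation may differ in detail):

\[
\frac{\mathrm{d}\mathbf{B}}{\mathrm{d}t} \supset -\nabla\psi,
\qquad
\frac{\mathrm{d}\psi}{\mathrm{d}t} = -c_h^2\, \nabla\cdot\mathbf{B} - \frac{\psi}{\tau},
\]

where the scalar field $\psi$ couples to the induction equation so that divergence errors are propagated away as waves at speed $c_h$ and damped on a timescale $\tau$.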
Primordial gravitational waves offer unique insights into the inflationary period and the subsequent thermal history of the Universe. The spectrum of primordial high-frequency gravitational waves is highly sensitive to processes in the early Universe and can be significantly suppressed during an epoch of early matter domination (EMD) induced by new long-lived massive particles. This damping effect is studied with numerical and analytic methods. The relative energy density of gravitational waves today is found to scale with the wavenumber $k$ as $k^{-2}$ for waves crossing the horizon during the EMD epoch. The overall damping between the start and the end of the EMD epoch is given by $m^{4/3}\,\Gamma^{-2/3}\,M^{-2/3}$, where $m$ and $\Gamma$ are the mass and decay width of the long-lived particles, respectively, and $M$ is the Planck mass. As concrete examples of EMD, models with inflaton decay and heavy neutral leptons are considered. Experimental observation of the stochastic gravitational-wave background could probe early cosmological events and constrain new-physics scenarios.
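Note that the quoted damping factor can be rewritten purely algebraically as

\[
m^{4/3}\,\Gamma^{-2/3}\,M^{-2/3} = \left(\frac{m^2}{\Gamma M}\right)^{2/3},
\]

which makes explicit that the suppression grows both with the particle mass and with its lifetime (i.e. with decreasing $\Gamma$).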
Neural networks are susceptible to minor distortions in their input, which can lead to errors they would not otherwise make. This susceptibility, termed the network's robustness, is a crucial aspect to evaluate. While several methods exist for measuring robustness, they usually suffer from interpretability issues and do not provide a statistical guarantee. In this work, we propose a novel robustness measure that addresses these shortcomings by modeling the robustness as a probability distribution and measuring its 0.05 quantile. Additionally, previous work suggests that robustness may be modeled by a log-normal distribution. To evaluate this hypothesis and its computational benefits, we introduce an estimator that assumes the distribution is log-normal. A comparison with the standard parameter-free estimator reveals significantly improved computational efficiency with the parametrized approach. However, the log-normal assumption requires further research: it is too strong and needs to be relaxed before the parametrized estimator can reliably be utilized.
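A minimal sketch of the two estimator types compared above, assuming robustness samples are available as a NumPy array; the function names and synthetic data are illustrative, not the thesis code:

    import numpy as np
    from scipy import stats

    def empirical_quantile(samples, q=0.05):
        # Parameter-free estimator: order statistics of the raw samples.
        return float(np.quantile(samples, q))

    def lognormal_quantile(samples, q=0.05):
        # Parametrized estimator: fit the log-normal parameters and
        # invert its CDF at the target quantile.
        log_s = np.log(samples)
        mu, sigma = log_s.mean(), log_s.std(ddof=1)
        return float(np.exp(mu + sigma * stats.norm.ppf(q)))

    rng = np.random.default_rng(0)
    robustness = rng.lognormal(mean=-1.0, sigma=0.5, size=200)  # synthetic samples
    print(empirical_quantile(robustness), lognormal_quantile(robustness))

The parametrized estimator needs far fewer samples to reach a given precision, which is the computational benefit referred to above, but it is only valid insofar as the log-normal assumption holds.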
Learning curves are important for decision making in supervised machine learning. They show how the performance of a machine learning model develops as a function of a given resource. In this work, we consider learning curves that model the performance of a machine learning model as a function of the number of data points used for training. For decision making, it is often useful to extrapolate learning curves, which can be done, for example, by fitting a parametric model to the observed values, or by using an extrapolation model trained on learning curves from similar datasets. We perform an analysis comparing these two techniques under different observation regimes and prediction objectives. When only a few initial segments of the learning curve have been observed, we find that it is better to rely on learning curves from similar datasets. Once more observations have been made, a parametric model, or simply the last observation, should be used. Moreover, we find that using a parametric model is mostly useful when the exact value of the learning curve itself is of interest. Lastly, we use this knowledge to improve machine learning on a particle physics dataset.
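As an illustration of the parametric approach discussed above, a minimal sketch fitting a three-parameter power law (one common choice of parametric learning-curve model; other families exist) and extrapolating it; the observed anchor points are invented for the example:

    import numpy as np
    from scipy.optimize import curve_fit

    def pow3(n, a, b, c):
        # Three-parameter power law: error decays towards asymptote c.
        return a * np.power(n, -b) + c

    n_obs = np.array([50.0, 100.0, 200.0, 400.0, 800.0])  # training-set sizes
    err_obs = np.array([0.40, 0.33, 0.28, 0.25, 0.23])    # illustrative errors

    params, _ = curve_fit(pow3, n_obs, err_obs, p0=(1.0, 0.5, 0.1), maxfev=10000)
    print("extrapolated error at n = 10000:", pow3(10000.0, *params))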
In the pursuit of designing complex materials with desired properties, understanding their design parameter space is crucial. However, the convoluted structure of this space often hinders comprehension of how complex materials respond as a function of their design parameters. Machine Learning has recently emerged as a promising tool for capturing patterns in complex design spaces, although this performance often comes at the cost of interpretability. This thesis explores the design parameter space of interacting hysterons using interpretable Machine Learning, specifically Decision-Tree-inspired methods. Despite the complexity of the design parameter space of even small systems of interacting hysterons, interpretable Machine Learning can classify coarse-grained properties of the system effectively. Introducing a Decision Tree inspired by the Support Vector Classifier (SVC), we achieve almost perfect isolation of these properties. This model preserves interpretability while effectively probing the statistical structure of the design parameter space of systems of interacting hysterons.
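As a sketch of the kind of interpretable pipeline described above (the features, labels, and data here are hypothetical placeholders, not the thesis setup):

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    # Hypothetical design parameters of a two-hysteron system:
    # up/down switching fields per hysteron plus one coupling strength.
    X = rng.uniform(-1.0, 1.0, size=(500, 5))
    # Hypothetical coarse-grained binary property of the response.
    y = (X[:, 4] * (X[:, 0] - X[:, 2]) > 0).astype(int)

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    # The fitted tree is directly human-readable, which is the appeal
    # of tree-based models for probing a design parameter space.
    print(export_text(clf, feature_names=["h1_up", "h1_dn", "h2_up", "h2_dn", "c12"]))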
One of the best constraints on the warmness of dark matter comes from Lyman-$\alpha$ forest observations. Warm dark matter (WDM) free streaming smooths out the smallest structures; in the Lyman-$\alpha$ flux power spectrum, this creates a lack of power at such scales. However, different astrophysical effects can create a similar picture. For example, thermodynamic pressure prevents baryons from collapsing into small dark matter structures, making these structures appear larger. Recent high-resolution measurements have shifted the dominant source of uncertainty from data statistics to modeling systematics, emphasizing the degeneracy between WDM free streaming and astrophysical effects. In this work, we present a semi-analytical approach to model the pressure effect and WDM free streaming in the Lyman-$\alpha$ flux power spectrum, in order to better study the degeneracy between them. We use the theory developed for the 3D matter power spectrum and show how to apply it to the flux power spectrum. We obtain excellent agreement between our modeling and SPH simulations for a wide range of thermal histories, dark matter masses, and redshifts. Finally, using our semi-analytical approach, we show how to constrain the warmness of dark matter while taking into account the degeneracy with the pressure effect. We obtain the constraint $m_\mathrm{WDM} \geq 3\,\mathrm{keV}$, which we confirm directly by running a simulation.
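For reference, WDM free streaming is commonly encoded through the fitting formula of Viel et al. (2005) for the linear matter power spectrum; the semi-analytical flux-power modeling above builds on this kind of suppression:

\[
P_\mathrm{WDM}(k) = T^2(k)\, P_\mathrm{CDM}(k),
\qquad
T(k) = \left[1 + (\alpha k)^{2\nu}\right]^{-5/\nu}, \quad \nu \simeq 1.12,
\]

where the breaking scale $\alpha$ shrinks as $m_\mathrm{WDM}$ increases, pushing the suppression to smaller scales.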
Surface acoustic wave (SAW) resonators can confine and enhance the displacement associated with SAW phonons. SAW resonators are useful in quantum technology, where they are used to enhance the coupling between a single phonon and a semiconductor quantum dot (QD). This thesis details the fabrication process of SAW resonators on GaAs with acoustic mirrors based on aluminum Bragg reflectors, and investigates the relation between the finesse of a resonator and the thickness of its aluminum mirrors. For this purpose, three resonators identical in design apart from the thickness of their aluminum mirrors (35 nm, 50 nm, and 100 nm) are fabricated. The finesse of these resonators is derived by examining their acoustic resonance spectra and displacement maps; both types of measurements are performed with a fiber-based scanning Michelson interferometer. It is found that losses associated with the resonator limit the finesse, with a maximal finesse of F ≈ 11 for the 100 nm resonator. Based on the measurement results, it is hypothesized that reducing the resonator length will decrease the propagation loss, thereby raising the upper limit of the finesse. This project is a step towards the optical detection of thermal phonons, with the final goal of detecting single phonons.
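Assuming the conventional cavity definition, the finesse extracted from the resonance spectra is

\[
\mathcal{F} = \frac{\Delta f_\mathrm{FSR}}{\delta f},
\]

the ratio of the free spectral range $\Delta f_\mathrm{FSR}$ to the linewidth (FWHM) $\delta f$ of a resonance, so that higher losses broaden the resonances and lower the finesse.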
Dissipation is introduced into the Extended Dicke Model (EDM) using the Lindblad equation. Equations of motion (EOMs) and their associated fixed points are derived using a semi-classical approximation. The total spin of the system is shown to be a non-conserved quantity. It is further shown that, in the Bound Luminosity State, the dissipative system can be described by an EOM depending solely on the y-component of the spin.
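For reference, the Lindblad (GKSL) equation referred to above has the standard form

\[
\dot{\rho} = -\frac{i}{\hbar}[H,\rho] + \sum_k \gamma_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2}\{ L_k^\dagger L_k, \rho \} \right),
\]

with jump operators $L_k$ and rates $\gamma_k$; for photon loss from a cavity a typical choice is $L = a$, the photon annihilation operator, though the specific dissipators used for the EDM are not reproduced here.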
In the large $N$ limit, quantum field theories organise themselves into string theories. The AdS/CFT correspondence is an important class of gauge/string dualities. In this paper, we provide a literature review of a precise $\mathrm{AdS}_3/\mathrm{CFT}_2$ duality. We calculate the spectrum of the symmetric product orbifold of $T^4$ and show that it matches that of superstring theory on $\mathrm{AdS}_3 \times S^3 \times T^4$ with one unit of NS-NS flux. Further support for the duality is obtained by matching the correlation functions at genus 0. Our analysis sheds light on why the two theories are so intimately related: it requires interpreting the worldsheet as a covering space of the boundary CFT. This is captured in a `delta function localization' property of the vertex operator correlation functions: when integrated over the worldsheet moduli space, they localize onto maps that holomorphically cover the boundary sphere, thus reproducing features of the dual CFT.
Metamaterials feature specific properties that are not commonly found in nature. An example of such a property is input-sequence sensitivity, or non-Abelian behavior. Here, we study the driving-sequence-dependent response of a non-Abelian metamaterial with four inputs. In previous research, these inputs were actuated with equal strength. In this thesis, however, we take a novel approach: we first pre-stress the metamaterial by actuating one beam with a certain strength, and then sequentially actuate and de-actuate another pair of beams with a different actuation strength. This allows us to "program" the non-Abelian response using pre-stress. We explore this two-dimensional actuation space experimentally and collect the resulting behavior in a "phase diagram". We find that pre-stressing allows more complex sequential responses than are possible without it. In particular, pre-stressing can change the response to sequential actuation from non-Abelian to Abelian and vice versa. Our work thus uncovers a viable strategy for externally tunable, or programmable, non-Abelian behavior.
Superchirality is a property of light whose potential applications in industry and research are not yet fully explored. In this research, an attempt is made to obtain a bright superchiral lattice by superposing four laser beams in a particular configuration. This superposition should theoretically also produce homogeneous electric fields without modulation, which is potentially useful in microscopy. Recording the field with a simple CMOS camera and observing its fast Fourier transform gives rise to aliasing effects due to undersampling, since the interference occurs at a subpixel level. This phenomenon is investigated with numerical and analytical simulations. By rotating the camera, pixel super-resolution was achieved, which makes it possible to investigate the interference patterns at a subpixel level and hence to measure the angle between pairs of beams with good accuracy. With newly developed beam alignment methods, we have achieved and confirmed a beam alignment that is sufficient for the production of bright superchiral lattices.
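A minimal numerical illustration of the subpixel aliasing effect described above (the pixel pitch and fringe period are invented for the example): a fringe finer than two pixels per period shows up in the FFT at a spurious low frequency below the Nyquist limit.

    import numpy as np

    pitch_um = 5.0    # hypothetical camera pixel pitch (micron)
    fringe_um = 3.0   # hypothetical fringe period, finer than 2 pixels
    x = np.arange(256) * pitch_um
    signal = 1 + np.cos(2 * np.pi * x / fringe_um)  # intensity per pixel

    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(x.size, d=pitch_um)     # cycles per micron
    print("true fringe frequency:", 1 / fringe_um)             # 0.333
    print("aliased FFT peak at  :", freqs[spectrum.argmax()])  # ~0.067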
Scanning SQUID-on-tip (SOT) microscopy offers topographic, magnetic, and thermal imaging at high sensitivities. This project focused on the development of a SOT from a self-sensing, self-actuating tuning-fork AFM probe. Patterning the superconducting contacts to the SQUID was identified as the main challenge: the non-planar geometry of the probe discourages continuous film growth and prohibits the use of lithography to pattern the film, while the superconducting element of the SOT must be electrically isolated from the adjacent tuning-fork actuation circuit. Off-axis sputtering of 60 nm NbTi was found to minimize short circuits and to result in continuous superconducting films. The steps necessary to pattern the NbTi film were identified: off-axis sputtering at a slight incline with respect to the deposition substrate, together with a better-fitting micromachined hard mask, will enable the fabrication of a SOT atop a tuning-fork AFM probe.
Three pressing problems in modern particle physics, neutrino mass, baryon asymmetry, and dark matter, have inspired various models, many of which introduce new particles, as well as experiments to either verify or exclude their existence. Among the most promising candidates is the right-handed neutrino, or heavy neutral lepton (HNL), which has the potential to address all of this beyond-the-Standard-Model (BSM) physics at once. We use fairly simple and robust pseudo-analytical methods to calculate the sensitivities of various proposed or already running BSM-focused experiments, including extracted-beamline experiments at CERN (SHiP, SHADOWS, and NA62 DUMP), collider experiments at the LHC (MATHUSLA, Codex-b, FASER2, and FACET), and the DUNE near detector (ND) at Fermilab. We find good agreement with sensitivities in the literature, provide a consistent way to compare different experiments, and present a fast and flexible way of calculating sensitivities that allows for quick adjustment in case of design changes or other developments in the field.
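The pseudo-analytical estimates above typically rest on the standard decay-in-volume probability (flux and acceptance modeling differ per experiment and are not reproduced here):

\[
P_\mathrm{decay} = e^{-L/\lambda}\left(1 - e^{-\Delta L/\lambda}\right),
\qquad
\lambda = \beta\gamma c\tau,
\]

where $L$ is the distance from the production point to the decay volume, $\Delta L$ the length of the decay volume along the HNL direction, and $\lambda$ the boosted decay length; the expected event count follows from folding this probability with the HNL flux and the detection efficiency.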