The twentieth century of physics was marked by the successful theories of quantum mechanics and general relativity. However, a unification of these two theories has not yet been achieved and remains one of the biggest challenges in modern physics. To test the quantum nature of gravity, Bose et al. proposed an experiment to entangle two massive particles through gravity [1]. This thesis is a step towards this experiment and builds on our previous work [2–4]. We demonstrate stable levitation of a NdFeB particle with a diameter of 12 µm in an on-chip planar magnetic Paul trap. Levitation was observed from atmospheric pressure down to 1×10⁻⁴ mbar. At atmospheric pressure we observed the x, y, γ and β modes, but with very low Q-factors (Q ≈ 5). At lower pressures the Q-factor increases (Q ≈ 3000) and becomes independent of the chamber pressure. The on-chip design opens the possibility of integrating the trap with NV centres to ground-state cool the particle. Our results suggest that this method allows the trapping of a 1 µm particle.
In this thesis, we investigate the co-deposition of cesium and antimony to develop efficient Cs3Sb photocathodes for Optical Near-field Electron Microscopy (ONEM), aiming to achieve ultra-smooth, ultra-thin photocathodes with high quantum efficiency (QE). The project involved designing, implementing and calibrating new equipment, such as a Dual Cluster Source evaporator, a Quartz Crystal Monitor, a QE measurement setup, and a custom transfer-arm extension with a sample heater for temperature-controlled growth. All the equipment is installed in the preparation chamber of the ESCHER LEEM, where we have successfully grown several photocathodes. Ultimately, a photocathode grown in the preparation chamber, with a moderate measured QE of around 0.3% at 450 nm, has been successfully used for the first time to obtain an ONEM image of a biological sample.
In this work, a novel approach is presented for tuning a single quantum dot photon source via the quantum-confined Stark effect using external contacts. This is achieved by building a quantum dot (QD) single photon source (SPS) from the bottom up. A QD is a semiconductor island grown by self-assembly on a different semiconductor, such that an electron in the QD is confined in three spatial dimensions and its energy band structure changes into discrete energy levels. In this work such a sample with many indium gallium arsenide QDs is investigated. First, its known properties are probed in order to confirm that it is a candidate for a "good" SPS. Then, a mount with external gates is fabricated in which the sample can be inserted, in order to tune its emission frequency via the quantum-confined Stark effect, which shifts the energy levels of the QD. This method is in principle much simpler than traditional methods, and we show first steps towards a single photon source using our approach.
This thesis explores the connection between the Schwinger effect and Hawking radiation through a heat kernel approach. Through scattering theory, it can be argued that the inverse square potential given by the field equation in the near-horizon limit is directly associated with particle production. Through the eigenvalues of this field equation, where solely the inverse square potential is present, the imaginary part of the Lagrangian can be calculated using the poles of the heat kernel. However, this approach involves discarding terms that are proportional to the eigenvalues. By including these terms, the new non-trivial eigenvalues can be expressed as a sum over the original eigenvalues plus some perturbation, which is assumed to contribute only to the greybody factor. The result is consistent with other approaches and directly shows that the presence of the event horizon is the mechanism driving particle production, analogous to the electric field in sQED.
In recent years, many advancements have been made in the field of Simulation-Based Inference (SBI), driven by the lack of tractable likelihoods in modern physics experiments. In the High-Energy Physics (HEP) literature, a popular approach to SBI uses binary classifiers, from which likelihood ratios can be obtained by means of the likelihood-ratio trick. In the astrophysics literature, on the other hand, more research is done on Normalizing Flows, which model the likelihoods directly. In this thesis, we compare the two methods, assessing their performance on a general HEP problem: inference of a signal ratio in the presence of a nuisance parameter. We perform this comparison on both a toy Gaussian example and a realistic Higgs decay example, and do not find a clear winner across the two cases. We do find interesting qualitative differences, especially for poorly performing models, suggesting that it may be beneficial to implement both methods rather than selecting just one.
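The likelihood-ratio trick mentioned above can be illustrated with a minimal sketch, assuming scikit-learn and a toy one-dimensional Gaussian setup (illustrative only, not the thesis configuration): a classifier trained to separate samples from two hypotheses approximates s(x) = p1(x)/(p0(x)+p1(x)), from which the likelihood ratio follows as s/(1−s).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 1D setup: samples from two hypotheses p0 = N(0, 1) and p1 = N(1, 1).
x0 = rng.normal(0.0, 1.0, size=(20_000, 1))  # class 0
x1 = rng.normal(1.0, 1.0, size=(20_000, 1))  # class 1
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(len(x0)), np.ones(len(x1))])

# A classifier trained to separate the two samples approximates
# s(x) = p1(x) / (p0(x) + p1(x)), so the likelihood ratio follows as
# r(x) = p1(x) / p0(x) = s(x) / (1 - s(x)).
clf = LogisticRegression().fit(X, y)
s = clf.predict_proba([[0.5]])[0, 1]
r_est = s / (1.0 - s)

# For these Gaussians the exact ratio is exp(x - 1/2), i.e. exactly 1 at x = 0.5.
print(r_est)
```

For equal-variance Gaussians the log-ratio is linear in x, so logistic regression is exactly the right model family here; for the realistic examples in the thesis a more flexible classifier would be needed.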
Measuring the masses of galaxy clusters can contribute to accurately testing cosmological models. A few methods are available to estimate the masses of clusters, which, however, return vastly different mass estimates. The exact reasons for these discrepancies are not well understood. Here we show that these differences also occur in the mass measurement of clusters in the SIBELIUS-FLAMINGO simulation. The masses were estimated using three methods based on the dynamics of galaxies within a cluster, the X-ray luminosity, and the Sunyaev-Zeldovich effect from gas in the cluster. Since the structures in the SIBELIUS simulation are reconstructed from the physical universe, the clusters' mass results can be compared directly with the true mass of the particles in the simulation and with observational mass estimates from other studies. While the dynamical method was the least accurate, the Sunyaev-Zeldovich method applied to the SIBELIUS-FLAMINGO data was the most precise in predicting the true mass in the simulation. Since Sunyaev-Zeldovich mass estimates tend to be lower than dynamical estimates, this implies there are fewer supermassive clusters in the local cosmic environment, which is consistent with predictions from ΛCDM.
In this research, we study the oxidation process of the Transition Metal Dichalcogenide (TMD) niobium diselenide, NbSe2. TMDs are a category of van der Waals materials, of which samples are obtained by exfoliation. The measurements are performed with the Low Energy Electron Microscope (LEEM), which measures reflectivity at different energies, resulting in so-called IV curves. To analyze oxidation, we have developed a new method to obtain in situ exfoliated flakes in the LEEM set-up. First, we measure in situ cleaved NbSe2 flakes (bulk) and follow their reflectivity upon adding (pure) oxygen. Additionally, the reflectivity of ex situ cleaved NbSe2 flakes is assessed. Here, an intensity boundary between the flake's edge and center is recognized. The reflectivity measurements show that the electronic structure differs across the boundary: a V-shaped reflectivity minimum appears, which sharpens over time upon oxidation. The boundary is further investigated with Atomic Force Microscopy and Energy Dispersive X-ray analysis. Additionally, we perform roughness analysis and Principal Component Analysis (PCA). The latter provides an alternative method to follow the change in electronic properties over time. We propose that already degraded NbSe2 flakes are more susceptible to oxidation damage than in situ exfoliated flakes. In further measurements, the in situ cleaved samples do not show any signs of degradation; we therefore attribute the intensity contrast and the associated IV features to oxidized NbSe2.
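The idea of using PCA to follow a change in spectra over time can be sketched as follows. This is a minimal illustration on synthetic curves (not the actual LEEM IV data): a spectral feature that grows with "oxidation time" shows up as the leading principal component, and the projection onto that component gives one score per time step.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Synthetic stand-in for time-resolved IV curves: each row is a reflectivity
# curve sampled at 100 energies; one spectral feature grows over "time".
energies = np.linspace(0.0, 10.0, 100)
n_times = 40
static = np.exp(-((energies - 3.0) ** 2))         # unchanging feature
growing = np.exp(-((energies - 6.0) ** 2) / 0.5)  # feature growing with time
curves = np.array([static + (t / n_times) * growing
                   + 0.01 * rng.normal(size=100)
                   for t in range(n_times)])

# The leading principal component captures the dominant direction of spectral
# change; projecting each curve onto it gives one score per time step.
pca = PCA(n_components=2)
scores = pca.fit_transform(curves)

# The first score tracks the growth of the time-dependent feature,
# so its magnitude correlates strongly with time.
trend = abs(np.corrcoef(scores[:, 0], np.arange(n_times))[0, 1])
print(trend)
```

The appeal of this approach, as in the thesis, is that it needs no model of the spectral line shape: any systematic drift in the curves is extracted automatically as the dominant component.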
This thesis explores the initial steps towards integrating high-temperature scanning SQUID-on-tip (SOT) with quartz tuning fork atomic force microscopy (QTF-AFM). By combining these imaging techniques into one sensor, the local magnetic field variations and surface topography of a sample can be mapped simultaneously. This allows the SQUID to scan with ultra-sensitive flux sensitivity and nanometer spatial resolution, while its position on the surface remains identifiable. Specifically, this thesis addresses the low operating temperature of conventional SOT by developing a fabrication method that uses BSCCO, a high-temperature superconductor, to create SQUIDs through gallium focused ion beam (FIB) milling. The electrical contacting procedure for BSCCO involves mechanical exfoliation and electron-beam lithography. The results yield contact resistances on the order of 100 Ω, which are sufficiently low to perform current-transport experiments. The flakes are then structured into 1 µm SQUIDs. The Josephson junctions are created by introducing ion-beam-induced damage to the crystal lattice of BSCCO to suppress superconductivity. The transport measurements reveal no conclusive evidence of SQUID features. However, it is shown that milling sub-200 nm wide structures does not alter the electronic properties of BSCCO, indicating that this nanostructuring method can potentially be applied in fundamental research into high-temperature superconductors. This thesis also covers depositing multiple SQUID electrodes along a QTF, while keeping the self-sensing and self-actuating capabilities of the force sensor intact. The QTF is insulated with a 100 nm thick SiOx layer.
It is then covered with a laser-micro-machined hard mask, through which 50 nm of titanium is evaporated in the shape of SQUID electrodes. Through alterations to the fabrication process, certain issues involving alignment and electrode interruption could be solved. However, the evaporation method inexplicably compromises the integrity of the insulating barrier, thereby forming electrical shorts. Overall, the findings indicate that while substantial progress has been made in developing fabrication methods for the different components, significant technical hurdles remain. These need to be addressed to realize the potential of BSCCO scanning SQUID-on-tip for atomic force microscopy.
In this research, we generalize a recently proposed renormalization group approach for networks to the case of random directed graphs: we present a scale-invariant description of directed networks containing reciprocated edges. This allows us to relax several strong assumptions that are currently necessary to renormalize directed networks such as financial transaction networks. As an application, a model of ING's transaction data has been derived across multiple coarse-grained partitions. We provide detailed information on how this particular model is structured and how its parameters are obtained. We show how this model can be used to determine the expected cumulative degree and weight distributions of ING's transaction network across multiple coarse-grained partitions, which we compare to the empirical degree and weight distributions, respectively.
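The coarse-graining step underlying such partitions can be sketched in a few lines. This is a generic illustration on a small random directed weighted network (nothing here reflects the actual ING data or the model of the thesis): nodes are merged into blocks via a partition matrix, block-level weights are sums of the fine-grained weights, and degree/strength distributions can then be compared across levels.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy directed, weighted network: W[i, j] is the transaction amount i -> j.
n = 8
W = rng.random((n, n)) * (rng.random((n, n)) < 0.4)
np.fill_diagonal(W, 0.0)

# A coarse-grained partition merges nodes into blocks (e.g. clients -> sectors).
blocks = np.array([0, 0, 1, 1, 2, 2, 3, 3])
n_blocks = int(blocks.max()) + 1

# Block-level weights are sums of fine-grained weights between blocks; edges
# inside a block become self-loops and are dropped at the coarse level.
P = np.zeros((n, n_blocks))
P[np.arange(n), blocks] = 1.0
W_coarse = P.T @ W @ P
np.fill_diagonal(W_coarse, 0.0)

# Out-degree and out-strength at the coarse level, for comparison with the
# corresponding fine-grained distributions.
out_degree = (W_coarse > 0).sum(axis=1)
out_strength = W_coarse.sum(axis=1)
print(out_degree, out_strength)
```

A scale-invariant model, as pursued in the thesis, is one whose functional form and parameters remain consistent when fitted at each such level of aggregation.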
This thesis focuses on the task of separating detector events caused by atmospheric neutrinos from those caused by atmospheric muons. Performance on this task is analysed using simulated data of these events as they are detected in the KM3NeT/ORCA10 detector setup. We present a new procedure for training the Machine Learning (ML) classifiers that handle this separation task. Most notably, this includes separating the data into track-like and shower-like events, and training separate classifiers on these subsets. We show a significant improvement in the resulting neutrino signal when compared to the current classification procedure.
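The split-then-train idea can be sketched with scikit-learn on synthetic data (illustrative only; the features, labels and classifiers here are placeholders, not the KM3NeT reconstruction or the models used in the thesis): one model is trained per topology subset, and each event is routed to the model matching its topology at inference time.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Synthetic stand-in for reconstructed events: four features, a binary label
# (0 = atmospheric muon, 1 = neutrino) and a topology flag (track vs shower).
n = 6000
X = rng.normal(size=(n, 4))
is_track = rng.random(n) < 0.5
# Illustrative labels only: the informative feature differs per topology.
y = np.where(is_track, X[:, 0] > 0, X[:, 1] > 0).astype(int)

# Instead of one global classifier, train one model per topology subset.
models = {}
for topo, mask in (("track", is_track), ("shower", ~is_track)):
    models[topo] = RandomForestClassifier(n_estimators=30, random_state=0)
    models[topo].fit(X[mask], y[mask])

# At inference time, each event is routed to the classifier for its topology.
scores = np.where(is_track,
                  models["track"].predict_proba(X)[:, 1],
                  models["shower"].predict_proba(X)[:, 1])
acc = float(((scores > 0.5) == y).mean())
print(acc)
```

The point of the split is that each model only has to learn the signal/background boundary for one event topology, which is simpler than learning a single boundary that covers both.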
This paper presents an initial approach to investigate the feasibility of fabricating FSF heterostructures for potential applications in spintronics. These FSF stacks could help characterize the long-range triplet Cooper pairs that are created at their interfaces. Simulations inform the final design of a 3D FSF stack, composed of one low and one high aspect ratio rectangular shape. Subsequent efforts were directed at the focused ion beam milling of apertures in 1 µm thick Si3N4 membranes, which are used for shadow evaporation of the heterostructures. Several stacks of cobalt and niobium were fabricated with these membranes through shadow evaporation. These heterostructures were contacted via electron-beam lithography and measured in a vector-magnet cryostat. The results indicate that no long-range triplets were generated inside the FSF stacks. It can be concluded that either the niobium layers did not become superconducting or the magnetizations of the two ferromagnets in the stacks were not perpendicular.
At the moment, quantum computer development is in the NISQ (Noisy Intermediate-Scale Quantum) stage. This means that quantum computers are relatively small and exhibit large amounts of noise. To run any meaningful computation, small noise-resistant circuits are necessary. This work proposes a new algorithm, QASNEAT, for finding small noise-resistant circuits. Its performance is evaluated by ground-state energy estimation of three small molecules with shot noise and physical depolarizing noise. QASNEAT is able to find small, accurate circuits in both noisy and noiseless cases.
One of the greatest remaining puzzles in physics is what particle dark matter consists of. For this project, the theory of dark pions is considered: a Hidden Valley model that extends the Standard Model with new, dark particles and a new force, dark QCD. A sensitivity study is performed to determine how many dark pions are expected to be in acceptance of the LHCb detector for Run 2 conditions; LHCb is well-suited to search for particles in the considered O(1) GeV mass and O(1)–O(100) ps lifetime range. Additionally, a framework has been developed to study the dependence of the sensitivity on a number of theoretical parameters of the dark QCD model, namely the probability to form a dark vector meson instead of a dark pion, the number of colours in dark QCD, the dark QCD scale, and the Higgs mass. It is found that O(100) dark pions are in LHCb acceptance for the different track categories, and that the considered theoretical parameters do not drastically change the number of expected particles (with some small caveats), staying within a difference of about 20%. This is acceptable given the expected experimental uncertainty, showing that theory-independent searches for dark pions are possible.