This thesis discusses the game of Hex and two of its variants, Cylindrical Hex and Torus Hex. We start by giving the rules of the games and showing that no tie can occur, meaning that there will always be a winner. After that we discuss some existing strategies for Cylindrical Hex and program a Pure Monte-Carlo player to play this game. From the strong play observed in the Pure Monte-Carlo player, a new strategy is derived for Cylindrical Hex. To test this new strategy, experiments are carried out and discussed.
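As an illustration of the Pure Monte-Carlo approach described in this abstract, here is a minimal sketch in Python. The board API (`legal_moves`, `play`, `winner`) is a hypothetical placeholder, not the thesis's actual implementation; the point is only that a Pure Monte-Carlo player rates each legal move by the win rate of uniformly random playouts.

```python
import random

def pure_monte_carlo_move(board, player, playouts=200):
    """Pick the legal move with the highest random-playout win rate.

    `board.legal_moves()`, `board.play(move, player)` and
    `board.winner()` are assumed helpers (hypothetical API).
    """
    best_move, best_rate = None, -1.0
    for move in board.legal_moves():
        wins = 0
        for _ in range(playouts):
            sim = board.play(move, player)   # copy of the board with `move` applied
            turn = -player                   # opponent moves next
            while sim.winner() is None:      # Hex never ends in a tie, so this terminates
                sim = sim.play(random.choice(sim.legal_moves()), turn)
                turn = -turn
            if sim.winner() == player:
                wins += 1
        rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = move, rate
    return best_move
```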
Compactifications in mathematics date back to the late 19th and early 20th centuries. Maurice Fréchet and Felix Hausdorff laid the foundational work on compact spaces, spaces in which every open cover has a finite subcover. The notion of compactification emerged as an extension of these compact spaces. The motivation behind compactifications was to find ways to extend a given space in a topological sense by adding limit points, or 'points at infinity'. The process of compactification in topology entails turning a given topological space into a compact space, and there exist various methods to achieve this goal. The idea underlying compactifications is the embedding of the original topological space into a compact one. Within the scope of this thesis, our focus lies specifically on metric compactifications, which involve the embedding of metric spaces into compact spaces. Notably, in the case of the real numbers, this process entails the addition of the points +∞ and −∞. In the first chapter, we introduce some fundamental concepts and results that are necessary for this thesis. Then, we define the compactification of metric spaces in the second chapter, where we give the example of the real numbers. We also demonstrate how to extend isometries to homeomorphisms on metric compactifications. In the final chapter, we consider the metric compactification of the Euclidean d-dimensional space equipped with the p-norm. This compactification is particularly interesting because we can explicitly compute the 'points at infinity'. To determine these points, we utilize the fact that the space is metrizable.
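As a worked version of the real-line example mentioned above, under the standard definition of the metric (horofunction) compactification with a basepoint $b$ (the thesis's exact conventions may differ), one embeds $(X, d)$ into the continuous functions on $X$ and takes the closure in a suitable topology:

```latex
\[
  \iota \colon X \to C(X), \qquad
  \iota(x)(y) \;=\; d(x, y) - d(x, b).
\]
\[
  \text{For } (X, d) = (\mathbb{R}, |\cdot|),\ b = 0:\quad
  \iota(x)(y) = |x - y| - |x|
  \;\xrightarrow[\;x \to \pm\infty\;]{}\; \mp\, y,
\]
```

so the closure of $\iota(\mathbb{R})$ adds exactly two limit functions, which are the 'points at infinity' $+\infty$ and $-\infty$.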
This research was performed to determine which type of model gives a better prediction of drug transport into and inside a tumor cell. Two types of models were developed using compartment modeling, based on the partition coefficient and the concentration gradient of a compound. We compared simulations of these models while varying the drug's partition coefficient. The model with intermediate steps for crossing the membrane takes more time than the model without these steps to reach the equilibrium partition of the drug over the several compartments. This suggests that the model including these steps gives a better prediction of real-life drug transport. Beyond a certain value of the partition coefficient, the drug does not enter the cell any faster when this value is increased. These results suggest that the model with the intermediate steps is the most effective for modeling data for different compounds, to test whether they would be suitable drugs to treat cancer.
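A minimal sketch of the kind of compartment model described above, in Python. The two-compartment structure, the rate constant `k`, the volumes, and the way the partition coefficient `P` enters the flux are illustrative assumptions, not the thesis's actual models: net transport across the membrane is taken proportional to the deviation from the equilibrium partition C_in = P · C_out.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_compartment(t, y, k, P, v_out, v_in):
    """Drug exchange between an extracellular and an intracellular
    compartment; the flux vanishes when C_in = P * C_out, i.e. at the
    equilibrium partition. All parameters are illustrative."""
    c_out, c_in = y
    flux = k * (P * c_out - c_in)          # net flow into the cell
    return [-flux / v_out, flux / v_in]

# Example run: all drug initially outside the cell.
sol = solve_ivp(two_compartment, (0.0, 50.0), [1.0, 0.0],
                args=(0.1, 2.0, 1.0, 1.0), dense_output=True)
print(sol.y[:, -1])                        # approaches C_in ≈ P * C_out
```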
This thesis discusses a lesser-known formulation of quantum mechanics called deformation quantisation. This theory provides a more intuitive way to study quantum systems, in which confusing topics such as the relation between the operator commutator and the classical Poisson bracket, and the concept of the classical limit, become transparent. After a short introduction, Chapter 2 introduces the mathematical foundation of Hamiltonian classical mechanics, symplectic geometry, and states the Darboux Theorem. Chapter 3 develops the theory of Hochschild cohomology and gives a classification of the cohomology spaces of the algebra of smooth functions on a manifold. In Chapter 4, deformations of algebras are defined and results from Hochschild cohomology are used to prove lemmas about star products, which are smooth deformations of the algebra of smooth functions on a manifold. Chapter 5 introduces the special case of the Moyal star product and shows how it can be used to obtain deformation quantisation. In this chapter, it is also shown that deformation quantisation is completely equivalent to the Hilbert space formalism traditionally taught in undergraduate studies, and the simple harmonic oscillator is treated as an example. Finally, Chapter 6 summarises the possible benefits and drawbacks of teaching deformation quantisation instead of the Hilbert space formalism and lists some avenues of further study.
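For reference, the Moyal star product of Chapter 5, in its standard form on phase space $\mathbb{R}^2$ with coordinates $x, p$ (sign and normalisation conventions may differ from the thesis), makes the relation between the commutator and the Poisson bracket explicit:

```latex
\[
  (f \star g)(x, p)
  \;=\; f \exp\!\Big( \tfrac{i\hbar}{2}\big(
        \overleftarrow{\partial}_x \overrightarrow{\partial}_p
      - \overleftarrow{\partial}_p \overrightarrow{\partial}_x \big)\Big)\, g
  \;=\; fg + \tfrac{i\hbar}{2}\{f, g\} + O(\hbar^2),
\]
\[
  [f, g]_\star \;:=\; f \star g - g \star f
  \;=\; i\hbar\,\{f, g\} + O(\hbar^3),
\]
```

so the star commutator reduces to $i\hbar$ times the Poisson bracket in the classical limit $\hbar \to 0$.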
In tonal languages such as Mandarin Chinese, the meaning of a word depends on the pitch variation of its tone. Since tones are often not pronounced in isolation, but rather concatenated, neighboring tones affect each other. This gives rise to tonal coarticulation. In this thesis, we explore whether, given two concatenated tones of the Mandarin word "ma", it is possible to predict the following tone on the basis of the coarticulation effect present in the first tone, and vice versa. The phonetic data used for this exploration possess an intrinsic smoothness that points naturally towards functional data analysis as the tool to study them. Therefore, we use multiple functional data analysis techniques. We start with k-means clustering on the raw data with the Euclidean distance and the Manhattan distance. Afterwards, we study the effect on tone duration, for which we use duration analysis. Furthermore, previous research indicates that the coarticulation effect lies at the level of covariances. Hence, we also cluster functional covariances. In the last section, the implications of the results are discussed and suggestions are made for further research. Lastly, plots obtained from the analyses are shown in the appendix.
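A minimal sketch of the clustering step described above, assuming the pitch curves have been sampled on a common time grid (function and parameter names are illustrative, not the thesis's code). Standard k-means pairs the Euclidean distance with mean centroids; for the Manhattan distance the natural centroid is the coordinate-wise median, so the sketch swaps both pieces together.

```python
import numpy as np

def k_means_curves(curves, k, metric="euclidean", iters=50, seed=0):
    """Lloyd-style clustering of discretised curves (rows of `curves`).

    metric="euclidean" pairs squared-L2 distances with mean centroids;
    metric="manhattan" pairs L1 distances with median centroids.
    """
    rng = np.random.default_rng(seed)
    centers = curves[rng.choice(len(curves), k, replace=False)]
    for _ in range(iters):
        if metric == "euclidean":
            dists = ((curves[:, None, :] - centers[None]) ** 2).sum(-1)
        else:
            dists = np.abs(curves[:, None, :] - centers[None]).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = curves[labels == j]
            if len(members) == 0:
                continue                 # keep an empty cluster's old center
            centers[j] = (members.mean(axis=0) if metric == "euclidean"
                          else np.median(members, axis=0))
    return labels, centers
```

The mean minimises the summed squared Euclidean distance within a cluster, while the median minimises the summed Manhattan distance, which is why the centroid update changes along with the metric.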
We study the mathematics and physics involved in the generation of gravitational waves by stellar-mass binary black holes and their subsequent detection by LISA, a space-based interferometric detector. We show that LISA will be capable of detecting nearby binary black holes with a maximal relative distance error of 0.2 and a sky-location error of 1 square degree if the total mass of the binary is at least eighty solar masses.
Directed topology is a fairly new field of mathematics with applications in concurrency. It extends the concept of a topological space by adding a notion of directedness, in which directed paths play a very important role. There are direction-preserving maps between directed spaces, called directed maps. A special case of these is a directed path homotopy, which transforms one directed path into another. Using these deformations, directed paths are partitioned into equivalence classes, and a special category, the fundamental category, can be associated with a directed space. In this thesis we explain these definitions and present a special theorem: a directed version of the Van Kampen Theorem. This theorem allows the calculation of fundamental categories by combining local knowledge about paths. Our main contribution is the formalization of this material using the Lean proof assistant, and we show how we have implemented it.
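To give a flavour of such a formalization, here is a minimal Lean 4 sketch of a directed space in the style of Grandis's d-spaces, assuming Mathlib's `Path` API. The name `DirectedSpace` and the chosen axioms are illustrative and need not match the thesis's actual development (a full d-space also requires closure under monotone reparametrisation).

```lean
import Mathlib

/-- Sketch of a directed space: a topological space together with a
distinguished class of "directed" paths that contains the constant
paths and is closed under concatenation (illustrative only). -/
structure DirectedSpace (X : Type*) [TopologicalSpace X] where
  IsDirected : ∀ {x y : X}, Path x y → Prop
  refl_mem : ∀ x : X, IsDirected (Path.refl x)
  trans_mem : ∀ {x y z : X} {p : Path x y} {q : Path y z},
    IsDirected p → IsDirected q → IsDirected (p.trans q)
```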
Quantum computing has the potential to revolutionise the field of cryptography. Quantum money is a cryptographic scheme that attempts to create unforgeable currency. This thesis investigates the knot-based quantum money scheme proposed by Farhi et al. [FGH+12], which assumes that finding transformations between equivalent knots is computationally demanding. We start by providing a comprehensive account of the relevant concepts of knot theory, particularly the Alexander polynomial. Next, we discuss the proposed quantum money scheme. Finally, we discuss the challenges of implementing it on a quantum simulator.
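As a concrete instance of the Alexander polynomial mentioned above, computed here via a Seifert matrix, one of the standard definitions (the thesis may use a different but equivalent one):

```latex
\[
  \Delta_K(t) \;\doteq\; \det\!\big(V - tV^{\mathsf T}\big),
  \qquad
  V_{\text{trefoil}} = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix},
\]
\[
  \Delta_{\text{trefoil}}(t)
  = \det\begin{pmatrix} t - 1 & 1 \\ -t & t - 1 \end{pmatrix}
  = t^2 - t + 1,
\]
```

where $V$ is a Seifert matrix of the knot $K$ and $\doteq$ denotes equality up to a unit $\pm t^k$. Since equivalent knots share the same Alexander polynomial, the invariant can be checked efficiently even though the transformation between equivalent knots is assumed hard to find.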
There are many different classes of variable astrophysical sources (sources whose luminosity varies over time), each with a physical phenomenon causing its variability. This results in different characteristic light curves, often containing periodicities within certain ranges of frequencies. The Gaia satellite telescope has gathered photometric data of variable sources at semi-random, nonuniform observation times influenced by the Gaia scanning law. This research aims to use the nonuniform fast Fourier transform (NUFFT) to retrieve the main frequency of the brightness variations of a variable source from the photometric Gaia Data Release 3 data, under the assumption that the underlying signal has one main frequency. The main goals are to investigate whether the frequency with maximal power in the NUFFT periodogram is the main frequency of the underlying signal, and whether it is possible to distinguish between frequencies that are correctly and incorrectly determined in this way. To this end a simulation of photometric data is used, where the time series are taken from actual Gaia DR3 data and the signal is simulated as a sine wave with a known frequency and a signal-to-noise ratio equal to 5. Taking the frequency with maximal power from the corresponding periodograms results in a correct retrieval in about 90% of the simulated cases. A positive correlation between the number of data points or visibility periods and the fraction of correctly determined frequencies is found. The incorrectly determined frequencies are most likely caused by spurious periods or aliasing. Furthermore, a method to compute a false alarm probability (FAP) for the determined frequency was investigated, but it turned out to give no useful results, as almost all FAPs were equal to zero. Therefore, further research on other methods is necessary to find out how to correctly identify the main frequency.
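A minimal sketch of the frequency-retrieval step described above. For clarity it evaluates the nonuniform discrete Fourier transform directly, which is slow; a NUFFT computes the same quantity fast. The frequency grid and simulated data below are illustrative assumptions, not the thesis's setup.

```python
import numpy as np

def nudft_periodogram(t, y, freqs):
    """Power of the nonuniform DFT of samples y(t) at trial frequencies.

    Direct O(N * len(freqs)) evaluation; `t` are the (nonuniform)
    observation times.
    """
    y = y - y.mean()                          # remove the DC term
    phase = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])
    return np.abs(phase @ y) ** 2 / len(t)

# Example: retrieve the main frequency as the periodogram's argmax.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 200))     # stand-in for Gaia epochs
y = np.sin(2 * np.pi * 0.37 * t) + 0.2 * rng.standard_normal(200)
freqs = np.linspace(0.01, 1.0, 20000)
print(freqs[np.argmax(nudft_periodogram(t, y, freqs))])   # ≈ 0.37
```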
Analog computers can be a fast and energy-efficient way of simulating ordinary differential equations. Simulating partial differential equations requires more work. One method is to approximate all but one of the dimensions in the PDE with a grid. This yields a large system of ODEs of the form x′ = f(t, x). Analog computers can solve systems of this form up to a given size, depending on the number of components available in the analog computer. To be able to solve larger systems of ODEs, the system x′ = f(t, x) can be split up into smaller groups of equations, which may depend on each other. To avoid having to solve all the smaller systems at once, an iterative method is used: when a value from a different group is needed, the value from the previous iteration is used. In this thesis, a PDE-to-ODE compiler (PTOC) is introduced, which automatically converts systems of PDEs into iterative systems of ODEs. Furthermore, it is proven that the iterative method described above converges locally to the solution of the original system of ODEs. When f has the additional property that it is Lipschitz continuous, the iterative method converges globally to a solution. Lastly, a heuristic for dividing the system of ODEs into groups is introduced, which aims to reduce the amount of data that needs to be stored by the analog computer. These techniques are implemented in the PTOC tool.
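A minimal sketch of the iterative scheme described above, in Python: a Jacobi-style waveform relaxation in which each group's ODEs are integrated with the other groups frozen at their previous-iteration trajectories. The splitting, solver, and interpolation choices are illustrative assumptions and need not match PTOC's implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

def iterate_groups(f, groups, x0, t_grid, sweeps=10):
    """Jacobi-style waveform relaxation for x' = f(t, x).

    `groups` is a list of index arrays partitioning the state vector;
    each sweep integrates every group separately, reading the other
    components off (by interpolation) from the previous sweep.
    """
    traj = np.tile(x0[:, None], (1, len(t_grid)))   # initial guess: constant
    for _ in range(sweeps):
        new = traj.copy()
        for g in groups:
            def rhs(t, xg, g=g):
                x = np.array([np.interp(t, t_grid, traj[i])
                              for i in range(len(x0))])
                x[g] = xg                           # own group: live values
                return f(t, x)[g]
            sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), x0[g],
                            t_eval=t_grid, rtol=1e-8)
            new[g] = sol.y
        traj = new                                  # Jacobi update
    return traj

# Example: harmonic oscillator x'' = -x split into two 1-state groups.
f = lambda t, x: np.array([x[1], -x[0]])
t = np.linspace(0.0, 1.0, 100)
traj = iterate_groups(f, [np.array([0]), np.array([1])],
                      np.array([1.0, 0.0]), t)
print(traj[0, -1])                                  # ≈ cos(1) ≈ 0.540
```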