Parafermionic zero-modes are zero-energy excitations with peculiar mutual statistics, which can be realized at the edge of a Fractional Quantum Hall Effect sample. We propose several protocols for adiabatic quantum pumping with parafermions, which make it possible to test the statistics of Fractional Quantum Hall quasiparticles and to observe universal noise in the pumping current. That is, the noise takes a specific value which is essentially given by universal constants, and is robust with respect to changes in many system parameters.
Gold nanorods (GNRs) have unique optical properties: they can be excited in the near-infrared range and their photoluminescence is bright and stable. Because of this, GNRs have a wide range of possible applications, including use as labels or as biosensors. For these kinds of applications, it is important to be able to determine a GNR's properties with high accuracy. Here we characterize single gold nanorods by five parameters: their 3D position (x, y and z), surface plasmon resonance wavelength and orientation. The position of GNRs is determined with a sub-nanometer error in x and y and a 3 nm error in z. The surface plasmon resonance wavelength and the orientation of GNRs are determined with errors of <0.1 nm and 0.1 deg, respectively. This is achieved by applying a four-dimensional fit to a stack of two-photon photoluminescence images. The methods presented in this thesis can be used to improve accuracy in the aforementioned applications of GNRs.
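The fitting idea can be sketched in reduced form: a Gaussian spot model fitted to a noisy synthetic image, with parameter uncertainties read off the fit covariance matrix (all names and values below are invented for illustration; the thesis fit is four-dimensional and runs on real image stacks):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, sigma, amp, offset):
    """Symmetric 2D Gaussian spot, returned flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset).ravel()

# Synthetic luminescence image (all parameters invented for illustration).
x, y = np.meshgrid(np.arange(21), np.arange(21))
truth = (10.3, 9.7, 2.0, 100.0, 5.0)  # x0, y0, sigma, amplitude, background
rng = np.random.default_rng(1)
img = gauss2d((x, y), *truth) + rng.normal(0.0, 1.0, x.size)

# Fit and extract 1-sigma uncertainties from the covariance diagonal.
popt, pcov = curve_fit(gauss2d, (x, y), img, p0=(10, 10, 3, 80, 0))
errors = np.sqrt(np.diag(pcov))
```

At this signal-to-noise ratio the fitted centre `popt[:2]` recovers the true position to well below a pixel, which is the mechanism behind the sub-nanometer localization quoted above.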
Next to its well-known helix structure, double-stranded DNA can form alternative structures that might have biological importance. For example, in guanine-rich DNA sites of the c-MYC promoter a secondary structure called a G-Quadruplex has been found. In the G-Quadruplex, one strand of the DNA forms a stack of four interacting guanines. In this thesis we study the formation of G-Quadruplexes in double-stranded DNA using a combination of Förster Resonance Energy Transfer (FRET) and multiplex Magnetic Tweezers (MT). Moreover, a two-state model was developed which describes the probability to form a G-Quadruplex in double-stranded DNA. Using this model we calculated how the extension and the FRET efficiency depend on force, twist and the sequence of the DNA. Because the synthesis of double-stranded DNA containing a G-Quadruplex site proved challenging, the experimental data could not be compared to the outcomes of the two-state model. Based on simulations we conclude that adding a 3-bp mismatch to the DNA tether next to the G4 site is required for the formation of a G-Quadruplex in dsDNA. Our findings may be relevant for understanding a link with transcription and/or replication.
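The core of such a two-state model can be sketched as a Boltzmann-weighted occupancy of the folded G4 state under force (all parameter values below are invented placeholders, not the fitted values from the thesis):

```python
import math

def p_folded(force_pN, dG0_kT=8.0, dx_nm=8.0, kT_pNnm=4.1):
    """Two-state (folded G4 vs. unfolded strand) occupancy.
    dG0_kT: zero-force unfolding free energy in units of kT (assumed value);
    dx_nm: extension released upon unfolding (assumed value).
    Applied force tilts the landscape toward the unfolded, extended state."""
    dG = dG0_kT * kT_pNnm - force_pN * dx_nm  # unfolding free energy, pN*nm
    return 1.0 / (1.0 + math.exp(-dG / kT_pNnm))

# Low force: almost always folded; near F = dG0/dx: half occupancy.
probs = [round(p_folded(F), 3) for F in (0.0, 4.1, 8.0)]
```

With these placeholder numbers the half-unfolding force is dG0/dx ≈ 4.1 pN; the tether extension and FRET efficiency would then be computed as occupancy-weighted averages of the two states.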
Many practical optimisation problems can be formulated as a traffic assignment problem, i.e. optimally routing a multi-commodity flow through a network. To do so, a network is defined that can capture congestion and a notion of optimal flow. The shortest path problem is derived as a sub-problem of the traffic assignment problem, and several algorithms that can solve it are discussed. In addition, several speed-up techniques for the shortest path problem are described that can be applied to static networks. Finally, an algorithm is discussed that solves the traffic assignment problem by iteratively solving a shortest path problem.
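The shortest-path sub-problem can be illustrated with a minimal sketch of Dijkstra's algorithm (the toy network and edge weights are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a graph given as
    {node: [(neighbor, weight), ...]} with non-negative weights."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already relaxed via a shorter path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy road network with travel times as edge weights.
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 2.0), ("D", 5.0)],
    "C": [("D", 1.0)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}
```

In a traffic assignment loop, the edge weights would be congestion-dependent travel times that are updated between successive shortest-path computations.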
Master thesis | Statistical Science for the Life and Behavioural Sciences (MSc)
The area under the receiver operating characteristic (ROC) curve (AUC) is a commonly used measure of the discriminative ability of a model. For the time-to-event outcome in survival analysis the case and control sets vary over time, so a dynamic definition of AUC is required. We choose the dynamic AUC defined by the incident true positive rate and the dynamic false positive rate (I/D AUC) proposed by Heagerty and Zheng [6]. However, the difficulty of empirically obtaining the incident true positive rate hampers the estimation of the dynamic AUC, and several semi-parametric and non-parametric estimators have therefore been proposed. Heagerty and Zheng [6] proposed a semi-parametric estimation method based on the Cox model. A non-parametric estimate using an intermediate concordance measure with LOWESS smoothing was introduced by van Houwelingen and Putter [14]. Based on the same intermediate concordance measure, Saha-Chaudhuri and Heagerty suggested locally weighted mean rank smoothing [10]. Recently, Shen et al. proposed a semi-parametric method that adopts fractional polynomials to fit the dynamic AUC [12]. In this thesis, we compare the performance of these methods under different configurations in a series of simulations. The plain Cox method is not recommended when the proportional hazards assumption is not satisfied. The Cox model with time-varying coefficients is relatively stable when the marker has a mediocre effect. For the non-parametric methods, a too-wide span/bandwidth may lead to large bias, and a too-narrow span/bandwidth may lead to unstable estimates, so a trade-off between bias and standard deviation has to be made. For the fractional polynomial method, adding extra fractional polynomial terms does not benefit the performance.
In addition, many researchers have observed a decreasing trend of the I/D AUC over time in their empirical studies [10][12][6], yet Pepe et al. held the opinion that the I/D AUC may be an increasing function of time [7]. We investigate the trend of the I/D AUC under a Cox model with a binary marker. We observe that under certain Cox models the I/D AUC curve first increases and then decreases; thus the I/D AUC is not necessarily a decreasing function of time.
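As a toy illustration of the intermediate concordance idea behind the non-parametric estimators, a naive pointwise I/D AUC can be computed at each event time (the function name and data are invented for illustration; the methods compared in the thesis additionally smooth these pointwise values, e.g. with LOWESS or weighted mean ranks):

```python
import numpy as np

def incident_dynamic_auc(times, events, marker):
    """Naive nonparametric I/D AUC: at each observed event time t,
    the incident case is the subject failing at t and the dynamic
    controls are the subjects still at risk beyond t; AUC(t) is the
    fraction of controls with a smaller marker value (ties count 1/2)."""
    times, events, marker = map(np.asarray, (times, events, marker))
    out = []
    for i in np.where(events == 1)[0]:
        controls = marker[times > times[i]]
        if len(controls) == 0:
            continue  # no one left at risk: AUC undefined at this time
        auc = ((marker[i] > controls).sum()
               + 0.5 * (marker[i] == controls).sum()) / len(controls)
        out.append((times[i], auc))
    return out
```

For a perfectly ranking marker (higher value, earlier event) every pointwise AUC is 1; noisy markers produce a scatter of values whose smoothed trend is the estimated I/D AUC curve.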
Fusarium crown rot is a damaging disease frequently found in wheat, mostly caused by Fusarium pseudograminearum. Crown rot resistance is a quantitatively inherited complex trait controlled by many small-effect loci. Lines with crown rot resistance have been identified that are interesting potential sources of favorable resistance alleles. In order to make use of this resistance in agriculture, lines need to be developed which combine high crown rot resistance with other agro-economically important traits, such as high yield. In this study, these lines have been bred by crossing multiple crown rot resistant donor lines with elite lines. The lines are mostly selected by phenotype, but scoring crown rot severity is difficult. In this study genomic prediction of crown rot resistance was explored to partially or fully replace phenotyping in the selection process. Two different generations in a complex wheat population (early-generation CRI0 lines and later-generation CRI2 lines) were used to build genomic prediction models and to validate them. A genomic best linear unbiased prediction (G-BLUP) model, a linear model based on a set of selected markers and a Gaussian kernel model were trained on the early generation (CRI0 lines) and validated on the later generation (CRI2 lines). Prediction accuracy based on early-generation information was disappointingly low, suggesting that phenotyping cannot be fully avoided at the later generation. The alternative of partial phenotyping at the late generation yielded more encouraging results: the Gaussian kernel model and the G-BLUP model both yielded a predictive ability of about 0.41.
While further research is needed, the results so far imply that genomic prediction within a population could be useful to select highly crown rot resistant lines.
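A minimal numpy sketch of the G-BLUP idea, with simulated genotypes, a VanRaden-style genomic relationship matrix and an assumed variance ratio (all data and parameter values are invented; this is not the thesis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n lines genotyped at p SNP markers (0/1/2 coding).
n, p = 50, 200
M = rng.integers(0, 3, size=(n, p)).astype(float)

# Genomic relationship matrix from centered genotypes (VanRaden scaling,
# with allele frequencies estimated from the sample itself).
P = M.mean(axis=0)
Z = M - P
G = Z @ Z.T / (2.0 * np.sum((P / 2) * (1 - P / 2)))

y = rng.normal(size=n)  # centred phenotypes (placeholder values)

# G-BLUP of breeding values: g_hat = G (G + lambda I)^(-1) y,
# with lambda = sigma_e^2 / sigma_g^2 (assumed variance ratio).
lam = 1.0
g_hat = G @ np.linalg.solve(G + lam * np.eye(n), y)
```

Predictive ability is then typically reported as the correlation between `g_hat` for held-out lines and their observed phenotypes, which is the quantity of about 0.41 quoted above.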
We will discuss non-local solutions to some of the problems of the standard model of cosmology, Λ cold dark matter (ΛCDM), focusing on two models of gravity and their applications to cosmology. The first comes from modifying the Einstein-Hilbert action by including an $m^2 R\,\Box^{-2} R$ term and the second by including an $m^2\,\Box^{-1} R$ term. Both models possess self-accelerating solutions. I will demonstrate that their background cosmology is consistent with data, and testable primarily through the equation of state of our universe's effective stress-energy tensor. At the perturbative level, these models predict enhanced galaxy clustering and weak lensing, so they are highly testable using upcoming cosmological surveys. My contribution to this work is the perturbation theory of the $m^2\,\Box^{-1} R$ model and the recovery of these results for the $m^2 R\,\Box^{-2} R$ model.
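In the notation of the nonlocal-gravity literature, the two terms correspond schematically to the following action and field-equation modifications (the normalization factors m²/6 and m²/3 are the conventional ones and an assumption here, not taken from this abstract):

```latex
S_{RR} \;=\; \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\,
  \Big[\,R \;-\; \frac{m^2}{6}\,R\,\Box^{-2}R\,\Big],
\qquad
G_{\mu\nu} \;-\; \frac{m^2}{3}\,\big(g_{\mu\nu}\,\Box^{-1}R\big)^{\mathrm T}
  \;=\; 8\pi G\,T_{\mu\nu},
```

where the first defines the $m^2 R\,\Box^{-2}R$ model at the level of the action, the second defines the $m^2\,\Box^{-1}R$ model at the level of the equations of motion, and the superscript T denotes the transverse part.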
The Multi-Armed Bandit (MAB) problem is named after slot machine games. When playing slot machines, a player has to decide which machine to play, in which order to play them and how many times to play each machine. After each choice, that machine offers a random reward from a probability distribution, and the player's goal is to maximize the sum of rewards earned through a sequence of lever pulls. In order to learn the distribution of each machine as quickly as possible while earning as much profit as possible, we consider the popular Thompson Sampling (TS) method, which is based on Bayesian ideas. TS is a heuristic for choosing actions that maximize the expected reward with respect to a randomly drawn belief [9]. In the first half of this thesis, we test the performance of TS and compare a variation called Top-two Thompson Sampling (TTTS) to standard TS, with uniform sampling as a baseline. Computationally, TTTS is a slow algorithm, so we also try to improve its performance and create another algorithm, Top-two Gibbs Thompson Sampling, which combines TTTS with Gibbs sampling and improves the computation speed of TTTS. In the second half of the thesis, we take a step forward in the application of TS by combining it with the Maximin Action Identification (MAI) problem. Maximin is a concept from two-player zero-sum games in game theory. The main idea behind maximin action selection is a constant alternation of minimizing and maximizing the value of moves to account for an adversarial opponent in the game. Existing maximin methods are related to games such as Checkers, Chess and Go. We try to broaden its application area and apply it to the new algorithms created in the first half of the thesis.
First, the time budget is limited and we use different divisions of it to test performance; afterwards, we create a new algorithm that picks out the right arm in a single step across two layers. The results show no significant difference between the time division method, which assigns even time budgets to the child layer and no budget to the parent layer, and the Maximin Thompson Sampling method.
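A minimal sketch of Bernoulli Thompson Sampling with Beta posteriors (arm means, horizon and seed are invented for illustration; TTTS differs by resampling until a second, distinct arm tops the posterior draw):

```python
import numpy as np

def thompson_sampling(true_means, horizon, seed=0):
    """Bernoulli Thompson Sampling with Beta(1,1) priors: each round,
    sample a mean from every arm's posterior, pull the arm with the
    highest sample, then update that arm's posterior with the reward."""
    rng = np.random.default_rng(seed)
    k = len(true_means)
    alpha = np.ones(k)  # 1 + observed successes per arm
    beta = np.ones(k)   # 1 + observed failures per arm
    pulls = np.zeros(k, dtype=int)
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)      # one posterior draw per arm
        a = int(np.argmax(theta))          # randomly drawn belief -> action
        r = rng.random() < true_means[a]   # Bernoulli reward
        alpha[a] += r
        beta[a] += 1 - r
        pulls[a] += 1
    return pulls

pulls = thompson_sampling([0.2, 0.5, 0.8], horizon=2000)
```

Over a long horizon the pull counts concentrate on the best arm, which is the exploration/exploitation balance the thesis evaluates against TTTS and uniform sampling.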
Master thesis | Statistical Science for the Life and Behavioural Sciences (MSc)
Game trees have been utilized as a formal representation of adversarial planning scenarios such as two-player zero-sum games like chess [1, 2]. When using stochastic leaf values based on Bernoulli trials to model noisy game trees, a challenging task is to solve the Monte Carlo Tree Search (MCTS) problem of identifying a best move under uncertainty. Confidence bound algorithms are investigated as one solution, with a focus on the FindTopWinner algorithm by Teraoka, Hatano, and Takimoto [3], which uses (a) the minimax rule to evaluate the game tree by alternately minimizing and maximizing over the values associated with each move, (b) Hoeffding's Inequality to estimate sample size requirements by fixing precision and error probability, and (c) an epoch-wise pruning regime to reduce investment in suboptimal nodes. We experimented on this algorithm by equipping it with methods based on (i) Bernstein's Inequality to create a tighter confidence bound [4], (ii) the Law of the Iterated Logarithm (LIL) to sample in single-sample steps, allowing for exact pruning and stopping [5, 6], and (iii) a combination of both. An empirically derived Hoeffding-based Iterated-Logarithm confidence bound is proposed in a fully refurbished FindTopWinner algorithm, which achieved much better performance in terms of samples required to find a best move, whereas the Bernstein-based approaches did not fare better than the original by Teraoka et al. [3]. Possible reasons, such as the limited, more asymptotic advantages of Bernstein-based algorithms, are discussed, and a recommended parameter space for the empirically derived Hoeffding-based confidence bound is provided.
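Two of the ingredients named above, the minimax backup and a Hoeffding confidence radius, can be sketched as follows (the toy tree and delta are invented; this is not the full FindTopWinner algorithm):

```python
import math

def hoeffding_radius(n, delta):
    """Hoeffding half-width: with probability >= 1 - delta, the empirical
    mean of n independent [0, 1]-valued samples lies within this radius
    of the true mean (two-sided bound)."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def minimax(node, maximizing=True):
    """Evaluate a game tree given as nested lists with leaf win
    probabilities, alternating max and min levels from the root."""
    if not isinstance(node, list):
        return node  # leaf: (estimated) win probability
    vals = [minimax(child, not maximizing) for child in node]
    return max(vals) if maximizing else min(vals)

tree = [[0.7, 0.2], [0.6, 0.5]]  # root maximizes over the opponent's minima
print(minimax(tree))  # 0.5
```

In the confidence-bound algorithms, each leaf value is an empirical mean of Bernoulli samples, and a move is pruned once its upper confidence value (mean plus radius) falls below another move's lower confidence value.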
A hinge specifically designed for continuous friction measurements during ice skating was tested and used. The hinge can handle large vertical normal forces to simulate the weight of a real person on a skate, and is very flexible in the horizontal direction, so it deforms under a friction force. Two sensors on the hinge measure the deformation. Friction measurements were done with part of a real skate, with varying temperatures, skating speeds and normal forces on the skate. A clear dependence of friction on temperature was found. Friction coefficients for an ice temperature of −20 °C and an air temperature of −10 °C varied between 0.04 and 0.1, and coefficients for an ice temperature of −10 °C and an air temperature of −6 °C varied from 0.006 to 0.016. The temperature of the skate was held at −10 °C in both cases. The results also suggest a dependence of friction on skating speed and normal force, but this has to be verified. During calibration of the setup it was found that the vertical force, controlled by air pressure, could only be determined up to a factor of 2. Furthermore, there was a large variation (up to a factor of 2) in friction coefficients from measurements under the same circumstances, on the same ice layer. This could have been caused by changing humidity in the setup, as this was not monitored during the measurements. The setup works, but needs to be improved for more precise friction measurements. A humidity sensor in the setup is recommended.
In this thesis we study the mechanical properties of a chromatin fiber. Chromatin is the second compaction stage of DNA, after the wrapping of DNA around histone proteins to form nucleosomes. Specifically, we analyze how its behaviour under external stresses changes with the length of the linker DNA, the DNA segment that links two adjacent nucleosomes. We are able to do this at the single-molecule level thanks to magnetic tweezers, an apparatus that can exert forces and torques directly on individual molecules.
This work examines the network structure of illicit marketplaces that operate on the darknet. These online marketplaces are crawled to obtain data on inter-user communications; this data is parsed into a network structure and its structural properties are analysed. The Configuration Model is used as a null model to investigate the patterns in these networks and reveal information about their topology. This information is applied to interpret the behaviour of users within these illegal marketplaces.
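A minimal stub-matching sketch of the Configuration Model null model (the degree sequence is invented for illustration; real analyses typically use a library implementation and decide explicitly how to treat the self-loops and multi-edges the model can produce):

```python
import random

def configuration_model(degrees, seed=0):
    """Null-model multigraph with the given degree sequence: create
    deg(v) stubs (half-edges) per node, shuffle them uniformly, and
    pair consecutive stubs into edges. Self-loops and multi-edges may
    occur, as in the standard configuration model."""
    assert sum(degrees) % 2 == 0, "degree sum must be even"
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    random.Random(seed).shuffle(stubs)
    return list(zip(stubs[::2], stubs[1::2]))

# Randomized edge list preserving each node's observed degree.
edges = configuration_model([3, 2, 2, 1])
```

Comparing a statistic measured on the real marketplace network with its distribution over many such degree-preserving randomizations isolates structure beyond what the degree sequence alone explains.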
By temporally and spatially overlapping a fundamental femtosecond pulse (800 nm wavelength) and its second harmonic (400 nm wavelength) at a focal point in air, a plasma is generated which is a good source of intense and ultrabroadband terahertz waves. We study the correlation between the spectral properties of the two-color laser-induced air plasma and the amplitude of the emitted terahertz electric field while varying the relative phase between the 800 nm and 400 nm beams. We find that the amplitude of the terahertz electric field oscillates when the relative phase is changed. In particular, for a 0.67 fs time delay between the two beams, which corresponds to a phase shift of π, terahertz waves with opposite polarities are obtained. However, the spectrum of the ultraviolet light emitted from the laser-induced air plasma does not show any noteworthy changes when the relative phase is varied. Therefore, we conclude that there is no correlation between the amplitude of the emitted terahertz electric field and the spectrum of the two-color laser-induced air plasma.