Experimental diffusion-weighted MRI measurements of a fiber phantom were compared to signals generated using a Monte-Carlo diffusion simulation. The diffusion simulation was combined with a generally applicable MRI simulation. We performed simulations for square-packed and randomly packed cylinders that model the fibers. Good agreement was found between the simulated signal and the measured signal for a specific random packing type (the relative error was 0.09 ± 0.06). Follow-up simulations that use larger system sizes are needed to improve the accuracy. The simulation method presented here can be used to study changes in microstructural properties and to compare the efficiency of different MRI protocols in detecting these changes.
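As a minimal illustration of the kind of Monte-Carlo signal computation described above, the sketch below simulates free (unrestricted) random walkers and evaluates the narrow-pulse PGSE signal as the ensemble-averaged phase factor; the phantom simulation would additionally reflect walkers at the cylinder walls, and all parameter values here are assumed, not taken from the thesis.

```python
import numpy as np

# Sketch only: free diffusion, narrow-pulse PGSE; assumed parameter values.
rng = np.random.default_rng(0)

D = 2.0e-9        # diffusivity [m^2/s]
Delta = 30e-3     # diffusion time [s]
n_steps = 200
dt = Delta / n_steps
n_spins = 20_000
q = 2.0e5         # q-value [1/m], gradient applied along x

# Brownian random walk: each step is Gaussian with variance 2*D*dt per axis.
steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_spins, n_steps, 3))
displacement = steps.sum(axis=1)

# Narrow-pulse signal: magnitude of the ensemble-averaged phase factor.
signal = np.abs(np.mean(np.exp(1j * q * displacement[:, 0])))
print(signal, np.exp(-q**2 * D * Delta))   # analytic free-diffusion reference
```

For restricted geometries such as packed cylinders, the same phase-factor average is taken, but each step must additionally be checked against the cylinder boundaries.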
TNO is developing a carbon-contamination-insensitive EUV power sensor that uses the photo-electric effect to distinguish in- from out-of-band EUV photons. The central question of this MSc research project was to verify whether the EUV power sensor signal can be purified by suppressing the out-of-band secondary electron signal using an electrostatic barrier, and, if so, which potential difference suffices. To this end, we designed a Secondary Electron Energy Distribution (SEED) analyzer to characterize the e-beam-induced secondary electron emission of gold and carbon targets. It was shown that the SEED analyzer allows filtering of electrons based on their kinetic energy and could perform SE yield measurements as well as SE energy distribution measurements. However, systematic errors occurred in the form of secondary electron emission from the grid inside the SEED analyzer, leakage currents, loss of emitted electrons through the SEED analyzer’s opening, and deflection of electrons due to the lack of a field-free region. After estimating the effect of these systematic errors, the SE yield measurements were in good agreement with the literature. The SE energy distributions of both target materials were obtained and show similarities to experimental data reported in the literature. However, the absence of a field-free region during the measurements caused a small mismatch between our acquired SE energy distributions and the reference data.
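A grid held at a retarding potential acts as a high-pass filter in kinetic energy. For context, the standard retarding-field relations that SE yield and SE energy distribution measurements of this kind rely on are, in our notation (not necessarily the thesis'):

```latex
I_c(V_r) = \int_{eV_r}^{\infty} N(E)\,\mathrm{d}E
\quad\Longrightarrow\quad
N(E)\Big|_{E=eV_r} = -\frac{1}{e}\,\frac{\mathrm{d}I_c}{\mathrm{d}V_r},
\qquad
\delta = \frac{I_{\mathrm{SE}}}{I_{\mathrm{p}}}
```

where I_c is the current carried by electrons that pass the barrier at retarding voltage V_r, N(E) is the SE energy distribution, and the yield δ is the ratio of the total emitted SE current to the primary beam current.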
This thesis investigates acoustic phenomena associated with the presence of a synthetic gauge field in a mechanical metamaterial. Such fields minimally couple to the momentum of phonons in a low-energy limit, which leads to acoustic analogs of some of the effects of a gauge field in an electronic system, such as Landau quantization. We develop two strategies for realizing a pseudo-magnetic field in a metamaterial based on the honeycomb lattice. In the first strategy, we consider deformations of the lattice that result from applied boundary stress. In the second strategy, we use nonuniform patterning of the local material stiffness. We then explore physical phenomena associated with a constant pseudo-magnetic field. We provide evidence for the existence of a mechanical Landau-level spectrum in this metamaterial. We then focus on the zeroth Landau level and show that the corresponding modes are localized in the bulk of the system and exist mostly on one sublattice. Following recent insights into similar physical systems, we investigate topologically robust sound modes along domain walls in the bulk of the metamaterial. Further, by introducing dissipation, we test selective enhancement of the domain-wall-bound topological sound mode, a feature that could potentially be exploited for the design of acoustic couplers, rectifiers, and devices for sound amplification by stimulated emission of radiation (SASERs), the mechanical analogs of lasers.
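For reference, a honeycomb (Dirac-like) band structure in a constant (pseudo-)magnetic field produces Landau levels with a square-root spacing; a mechanical Landau-level spectrum of this type would be expected to follow the same generic scaling (textbook form, not the thesis' derivation):

```latex
\omega_n \;\propto\; \mathrm{sgn}(n)\,\sqrt{2\,|n|\,B_{\mathrm{ps}}}\,,
\qquad n = 0,\ \pm 1,\ \pm 2,\ \ldots
```

with the n = 0 level corresponding to the sublattice-polarized zeroth Landau level discussed above.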
Mini-max is a concept often used for solving games. The idea behind it is a constant alternation between minimizing and maximizing the value of moves to account for an adversarial opponent in the game. A number of established methods have been developed to allow computers to play games like Chess [2] and Go [12]. Many of these methods involve evaluating sequences of moves to determine the best move to play. Because games like Chess and Go have very large game trees, it is infeasible to evaluate all possible sequences, so we resort to algorithms that sample sequences selectively, collected under the name Monte Carlo Tree Search [5]. These methods, while searching for the best move, already try to play as optimally as possible. We think that by letting go of the desire to sample only good sequences, and instead caring only about reaching a good conclusion on the best move, we can improve on current algorithms. We do this by adapting the objective of Best-Arm Identification to fit a mini-max structure: Mini-max Action Identification. We believe that this has not been done before. In Section 2 we establish the framework and details of Mini-max Action Identification. We define the problem of finding an optimal algorithm in two ways: in Section 3 the optimal algorithm is the one that provides the best guaranteed performance on the hardest set of parameters, while in Section 4 the problem is based on parameters following a fixed distribution. Further algorithms are provided in Section 5, where their performance is also compared. Lastly, in Section 6 we present some findings on the worst-case set of parameters and prove several of them.
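As a toy illustration of the mini-max action identification objective (not the algorithms developed in the thesis), the sketch below spends a uniform sampling budget on a depth-two game tree with Bernoulli leaves and then recommends the move that maximizes the minimum estimated reply value; the leaf parameters are hypothetical.

```python
import numpy as np

# Sketch only: uniform-budget baseline for mini-max action identification.
rng = np.random.default_rng(1)

# true_means[i][j]: mean reward if we play move i and the opponent replies j.
true_means = np.array([[0.45, 0.50, 0.55],
                       [0.35, 0.60, 0.90],
                       [0.30, 0.40, 0.80]])

budget_per_leaf = 200
estimates = np.array([[rng.binomial(budget_per_leaf, p) / budget_per_leaf
                       for p in row] for row in true_means])

# Recommend the move whose worst-case (adversarial) reply looks best.
recommended = int(np.argmax(estimates.min(axis=1)))
optimal = int(np.argmax(true_means.min(axis=1)))
print("recommended:", recommended, "optimal:", optimal)
```

A smarter algorithm would allocate the sampling budget adaptively, concentrating on the leaves that actually determine the mini-max decision.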
Mean-Variance optimization is widely used to find portfolios that make an optimal trade-off between expected return and volatility. The method, however, struggles with a robustness problem, since the portfolio weights are very sensitive to changes in the input parameters. There is a vast literature on methods that try to solve this problem, and we discuss two of these methods: resampling and shrinkage. In addition to the methods from the literature, we develop a new method which we call maximum distance optimization. The resampling method attempts to obtain more robust portfolios by changing the optimization procedure. The shrinkage method attempts to obtain more robust portfolios by making the estimation of the input parameters more robust. The maximum distance optimization method explores a region closely beneath the efficient frontier and determines what kind of portfolios are nearly optimal but have very different portfolio weights. First, we show that any convex combination of these near-optimal portfolios is also near optimal. Second, we show that the set of near-optimal portfolios is robust. Apart from the robustness, the advantage of this method is that we obtain a whole range of solutions instead of the single portfolio that Mean-Variance optimization provides. Since the region is robust, the investor or consultant can use their own qualitative arguments to select a preferred portfolio from this region.
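A minimal sketch of the near-optimality idea, using a simple quadratic-utility formulation of the mean-variance trade-off (toy inputs; not necessarily the formulation or data used in the thesis):

```python
import numpy as np

# Sketch only: sample long-only portfolios, keep the near-optimal ones, and
# check that a convex combination of near-optimal portfolios stays near optimal.
rng = np.random.default_rng(2)
mu = np.array([0.06, 0.05, 0.07])                 # assumed expected returns
Sigma = np.array([[0.04, 0.01, 0.00],             # assumed covariance matrix
                  [0.01, 0.03, 0.01],
                  [0.00, 0.01, 0.05]])
gamma = 3.0                                       # risk-aversion parameter

def utility(w):
    return w @ mu - 0.5 * gamma * w @ Sigma @ w

w_samples = rng.dirichlet(np.ones(3), size=20_000)
u = np.array([utility(w) for w in w_samples])
eps = 1e-4
near_optimal = w_samples[u >= u.max() - eps]

# Utility is concave, so the mixture's utility is at least the average of the
# two endpoint utilities, and hence also within eps of the sampled optimum.
w_mix = 0.5 * near_optimal[0] + 0.5 * near_optimal[-1]
print(utility(w_mix) >= u.max() - eps)
```

The same concavity argument underlies the statement above that any convex combination of near-optimal portfolios remains near optimal.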
Several methods are available to estimate the standard errors of a coefficient. Two-level Mokken scale analysis is an ordinal scaling technique that accounts for a multilevel test structure, where the subjects to be scaled are scored by various raters. Key in this analysis are the two-level scalability coefficients. Recently, standard errors of these coefficients have been estimated using the delta method. It is uncertain whether this method results in biased standard error estimates. The individual-level bootstrap method is often regarded as an unbiased estimation method for standard errors. An extension of this method is the cluster-level bootstrap, which maintains the dependency structure in the data. This simulation study compares these three methods on their bias, efficiency, coverage, and computation time. Results indicate that bias, efficiency, and coverage favoured the individual-level bootstrap, although in most conditions the difference was very close to zero. Since computation time was much higher for the bootstrap methods, the delta method is preferred in practice.
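A minimal sketch of the cluster-level bootstrap for a standard error (generic statistic and simulated data; not the two-level scalability coefficients themselves):

```python
import numpy as np

# Sketch only: resample whole clusters with replacement, so the dependency
# structure within each cluster is preserved.
rng = np.random.default_rng(3)

def cluster_bootstrap_se(data_by_cluster, statistic, n_boot=1000):
    n = len(data_by_cluster)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        resample = np.concatenate([data_by_cluster[i] for i in idx])
        stats.append(statistic(resample))
    return np.std(stats, ddof=1)

# Toy data: 30 clusters (e.g. subjects) of 5 ratings each; statistic = mean.
clusters = [rng.normal(loc=c, scale=1.0, size=5) for c in rng.normal(0, 0.5, 30)]
print(cluster_bootstrap_se(clusters, np.mean))
```

The individual-level bootstrap would instead resample single rows, ignoring the clustering; the delta method avoids resampling altogether, which is why it is so much faster.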
The nucleosome consists of a short stretch of DNA wrapped around a protein cylinder, and is the fundamental unit of chromatin, which compacts the DNA into the cell nucleus. The nucleosome is known to transiently and partially unwrap, or 'breathe', in vitro, exposing DNA which would otherwise be sterically inaccessible to enzymes. Breathing is investigated for its potential importance in vivo both in essential DNA processes and in higher-order chromatin organisation. In this thesis we present a two-parameter physical statistical model of the breathing process based on steric enzyme accessibility, the energetics of the bent DNA molecule, and the adsorption of the DNA onto the proteins. We estimate the elastic energy using Monte Carlo simulations of a coarse-grained model of the nucleosomal DNA, and we fit the model to the available experimental results. In agreement with experimental studies, we find that site accessibility decays exponentially toward the central sites, and that highly asymmetric breathing behaviour is possible due to the very sensitive dependence of breathing on the energy distribution and, in turn, on the sequence.
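A generic form that such a two-parameter unwrapping model can take (our notation, not necessarily the thesis' parametrization): if opening one additional binding site costs a net free energy Δ set by the competition between adsorption and bending,

```latex
\Delta = \varepsilon_{\mathrm{ads}} - \varepsilon_{\mathrm{bend}},
\qquad
P(n) \;\propto\; e^{-n\Delta/k_{\mathrm{B}}T},
```

with P(n) the equilibrium probability that the outermost n binding sites are open, which reproduces the exponential decay of site accessibility toward the central sites reported above and makes the accessibility extremely sensitive to sequence-dependent variations in Δ.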
Nowadays, data from randomised experiments are used to assess relationships beyond the primary goal of the study. From the perspective of secondary analysis, the exposure of interest is no longer allocated at random to the experimental units, and the existence of confounders is almost certain. Randomised clinical trials usually have a longitudinal nature. In this case the confounding may be time-dependent, i.e. the value of the confounder (as well as the exposure) may vary over time. Furthermore, the exposure often influences future values of the confounders. This two-way relationship is referred to as exposure-confounder feedback. In this thesis we use data from a randomised controlled trial for chemotherapy in osteosarcoma to illustrate the methodology for causal inference in the presence of time-dependent confounding and exposure-confounder feedback. We build Marginal Structural Models (MSMs) for binary and time-to-event outcomes, and use the inverse-probability-of-treatment weighted (IPTW) estimation method. The novelty of the thesis is twofold. First, we illustrate how to build MSMs using the chemotherapy data. Second, we discuss how to simultaneously assess the causal effects of time-varying and point exposures. The study findings indicate that closer collaboration between oncologists and surgeons is required when treating osteosarcoma patients, since surgery delay has a strong negative effect on patients’ histological response (HRe), an intermediate outcome that indicates chemotherapy effectiveness. Although the data provide some evidence for a weak protective effect of surgery delay on the hazard of death, there is an indication that surgery delay has a much stronger negative effect on the hazard of disease progression and/or cancer recurrence. Based on the results of the analyses, a revision of the chemotherapy drug dosages might be discussed in the clinical community. We found that smaller doses are associated with a good HRe and with a decreased chance of long-term adverse events.
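A minimal sketch of the stabilized IPTW weights for a binary point exposure (simulated data, not the osteosarcoma trial; time-varying exposures multiply such weights over visits):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch only: stabilized inverse-probability-of-treatment weights.
rng = np.random.default_rng(4)
n = 2000
confounder = rng.normal(size=n)                      # a baseline covariate
p_treat = 1.0 / (1.0 + np.exp(-0.5 * confounder))    # treatment depends on it
treated = rng.binomial(1, p_treat)

# Denominator model: P(A = 1 | confounder); numerator: marginal P(A = 1).
denom_model = LogisticRegression().fit(confounder.reshape(-1, 1), treated)
p_denom = denom_model.predict_proba(confounder.reshape(-1, 1))[:, 1]
p_num = treated.mean()

weights = np.where(treated == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))
print(weights.mean())   # stabilized weights should average close to 1
```

Fitting the marginal structural model then amounts to a weighted regression of the outcome on the exposure, with the weights removing the measured confounding.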
In this thesis, we probe the bending stiffness of origami metamaterials to investigate under which conditions origami can be described as a continuum medium. The Miura Ori pattern was bent using two mechanical tests: a three-point bending test and a cantilever bending test. Our origami metamaterials at rest can be characterised by the opening angle between adjacent plates, which specifies how much the structure is folded. We varied two parameters: the width and the opening angle. The bending stiffness of the Miura Ori sheet at different widths showed significant deviations from classical continuum elastic theory. The behaviour of these deviations depended on the opening angle of the sheet. When the Miura Ori sheets were almost flat-folded, continuum mechanical behaviour was seen at small widths, and deviations appeared as the width increased. When the sheets were more open, possible finite-size effects consistent with Cosserat elasticity were observed. Tests showed that the bending stiffness increased with the opening angle, which contradicts previous theoretical predictions.
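For reference, the classical Euler-Bernoulli values against which such deviations are usually measured (generic beam formulas with load F, span or length L, and deflection δ; not the thesis' analysis):

```latex
EI_{\text{3-point}} = \frac{F L^{3}}{48\,\delta},
\qquad
EI_{\text{cantilever}} = \frac{F L^{3}}{3\,\delta}.
```

Continuum plate behaviour would predict an effective EI that grows linearly with the sheet width; systematic deviations from that scaling signal the finite-size or Cosserat-type effects described above.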
In this thesis we present two methods for genotyping tetraploid species using bivariate modelling approaches. Using SNP markers we obtain a data set of fluorescent intensity signals, which are modelled to obtain an estimate of the genotypes. Previously, fitTetra was the R package of choice for analysing this type of data, and we aim to create new methods that improve on the genotype assignments of this package. The first method analyses genotype assay data by modelling the signal intensities directly, using a linear regression approach to determine assignments. The second method is similar to fitTetra, modelling the ratio of the signal intensities and the summed signal intensities. A regression approach is used to obtain a set of conditional means, which are then used to determine the genotype assignments.
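A naive baseline for the ratio/sum transformation mentioned above (illustration only; neither of the thesis' methods nor fitTetra's mixture model, and the intensities are simulated):

```python
import numpy as np

# Sketch only: ratio/sum transform plus nearest-centre dosage assignment.
rng = np.random.default_rng(5)
x = rng.gamma(5.0, 100.0, size=500)   # hypothetical allele-A intensities
y = rng.gamma(5.0, 100.0, size=500)   # hypothetical allele-B intensities

ratio = y / (x + y)                   # signal ratio in [0, 1]
total = x + y                         # summed signal intensity
expected = np.linspace(0.0, 1.0, 5)   # idealized ratios for dosages 0..4
dosage = np.argmin(np.abs(ratio[:, None] - expected[None, :]), axis=1)
print(np.bincount(dosage, minlength=5))
```

The methods in the thesis replace the nearest-centre step with regression-based models of the intensities (method one) or of the ratio and summed intensity (method two).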
If G is a locally compact group, then L^1(G) acts by convolution on L^p(G). This action yields a homomorphism of the Banach algebra L^1(G) into the bounded operators on L^p(G). For finite p it is known, with the help of rather complicated tools, that this action gives a lattice homomorphism of L^1(G) into the regular operators on L^p(G). In this master’s thesis we generalize this result by letting L^1(G) act on so-called translation-invariant Banach function spaces on G, or, more precisely, on the largest subspaces of such spaces where this action can be meaningfully defined. We show, using methods that are considerably simpler than those used in the previous proof, that under mild assumptions this action is also a lattice homomorphism.
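The convolution action in question is the standard one: with μ a left Haar measure on G, for f in L^1(G) and g in L^p(G),

```latex
(f * g)(x) = \int_G f(y)\, g(y^{-1}x)\, \mathrm{d}\mu(y),
\qquad
\|f * g\|_p \le \|f\|_1\, \|g\|_p,
```

so convolution by a fixed f in L^1(G) defines a bounded operator on L^p(G), and the map sending f to this operator is the algebra homomorphism referred to above.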
We propose and discuss, by means of examples, different methods for finding an Efficient Set and a Pareto Frontier in multi-objective optimization problems. Among those methods are both probabilistic and deterministic searches, e.g. the gradient method, Karlin's theorem, and the Karush-Kuhn-Tucker conditions. We obtain the Efficient Set in explicit form and then find the Pareto Frontier in parametric or explicit form. The key part of the thesis is a method for finding the Efficient Set and the Pareto Frontier for implicitly defined objective functions. We first consider the general case and later focus on the special case of two equations of the form f_1(x_1, ..., x_n) − φ_1(y_1, y_2) = 0 and f_2(x_1, ..., x_n) − φ_2(y_1, y_2) = 0, which define two implicitly given objective functions y_1 = y_1(x_1, ..., x_n) and y_2 = y_2(x_1, ..., x_n). Next we study the Pareto optimization problem of minimizing y_1 and y_2. The solution of this problem is divided into two parts. In the first part we solve the problem of finding the Pareto Efficient Set for functions that are given explicitly. In the second part we discuss and propose methods for finding the Pareto Frontier and the Efficient Set for implicitly given objective functions. The results of this work were presented at the 28th European Conference on Operational Research (EURO 2016) in Poznań, Poland, July 3-6, 2016: Emmerich M., Sklyar M., "Computing Pareto Fronts of Implicitly Defined Functions", Conference Handbook, p. 20.
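As a brute-force illustration of the Efficient Set and Pareto Frontier concepts (not the analytic methods of the thesis), the sketch below filters sampled decision vectors down to the non-dominated ones for two objectives to be minimized; the objective functions are hypothetical.

```python
import numpy as np

# Sketch only: sample the decision space and keep the non-dominated points.
rng = np.random.default_rng(6)
x = rng.uniform(-2.0, 2.0, size=(5000, 2))        # sampled decision vectors

f1 = (x[:, 0] - 1.0) ** 2 + x[:, 1] ** 2          # hypothetical objectives
f2 = (x[:, 0] + 1.0) ** 2 + x[:, 1] ** 2
F = np.column_stack([f1, f2])

# A point is Pareto efficient if no other point is <= in both objectives
# and strictly < in at least one.
efficient = np.ones(len(F), dtype=bool)
for i, fi in enumerate(F):
    dominates_fi = np.all(F <= fi, axis=1) & np.any(F < fi, axis=1)
    efficient[i] = not dominates_fi.any()

print(efficient.sum(), "non-dominated points out of", len(F))
# The efficient decision vectors approximate the Efficient Set; their images
# F[efficient] approximate the Pareto Frontier.
```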
The aim of this study was to computationally resolve nucleosome dynamics and chromatin structure. To achieve this, we ran Monte Carlo simulations of a base-pair-level model of a mononucleosome. Additionally, we developed a graphical user interface for generating a chromatin structure with realistic linker DNA, which enabled us to calculate the linking number and writhe for different chromatin structures. The force-extension curve of our simulated mononucleosome shows behaviour similar to that observed in force spectroscopy experiments.
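The linking-number and writhe computations rest on the standard White-Calugareanu relation and the Gauss double integral (generic formulas, not the thesis' implementation):

```latex
Lk = Tw + Wr,
\qquad
Wr = \frac{1}{4\pi}\oint\!\!\oint
\frac{\left(\mathrm{d}\mathbf{r}_1 \times \mathrm{d}\mathbf{r}_2\right)\cdot\left(\mathbf{r}_1-\mathbf{r}_2\right)}
{\left|\mathbf{r}_1-\mathbf{r}_2\right|^{3}},
```

where the integral runs over the closed DNA centre line; in practice it is evaluated as a double sum over the discretized base-pair frames.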
The supply chain network of companies that produce goods and deliver them to customers can be captured in a mathematical model. ORTEC Supply Chain Design (OSCD) uses a mixed integer mathematical program to design a supply chain network while optimizing a certain objective (e.g. minimizing cost). The input parameters of such a program, such as transport costs and customer demand, not only have to be estimated, but they also carry ranges of uncertainty. Consequently, the output (the network design) is uncertain as well. In this work, the impact of that input uncertainty on the output of the OSCD mathematical model is investigated. Generic effects that hold across several OSCD case studies are presented. In addition, a tool for precise estimation in individual cases is proposed. This work could be used to further investigate the relationship between input and output in more complex OSCD case studies.
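A toy example of the kind of mixed integer program involved (hypothetical instance, with the PuLP modelling library as an assumed choice; OSCD's actual model is far richer): open a subset of warehouses and route flow to customers at minimum fixed-plus-transport cost.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

# Sketch only: facility-location style supply chain design MIP.
warehouses = ["W1", "W2"]
customers = ["C1", "C2", "C3"]
fixed_cost = {"W1": 100.0, "W2": 120.0}                 # assumed inputs
transport = {("W1", "C1"): 2.0, ("W1", "C2"): 4.0, ("W1", "C3"): 5.0,
             ("W2", "C1"): 3.0, ("W2", "C2"): 1.0, ("W2", "C3"): 3.0}
demand = {"C1": 10.0, "C2": 15.0, "C3": 20.0}
capacity = {"W1": 40.0, "W2": 30.0}

prob = LpProblem("toy_supply_chain_design", LpMinimize)
open_w = LpVariable.dicts("open", warehouses, cat="Binary")
ship = LpVariable.dicts("ship", list(transport.keys()), lowBound=0)

prob += (lpSum(fixed_cost[w] * open_w[w] for w in warehouses)
         + lpSum(transport[w, c] * ship[w, c] for (w, c) in transport))
for c in customers:                                     # meet every demand
    prob += lpSum(ship[w, c] for w in warehouses) == demand[c]
for w in warehouses:                                    # ship only from open sites
    prob += lpSum(ship[w, c] for c in customers) <= capacity[w] * open_w[w]

prob.solve()
print({w: open_w[w].value() for w in warehouses})
```

Perturbing inputs such as transport costs or demands and re-solving is the basic loop behind the uncertainty analysis described above.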