If 𝑚 denotes the number of digits in the regular continued fraction expansion that can be determined from 𝑛 digits in the decimal expansion, then Lochs' Theorem states that the fraction 𝑚/𝑛 converges Lebesgue almost surely to a ratio of two entropies as 𝑛 → ∞. These are the entropies of the interval maps that generate these expansions. Lochs' Theorem has been generalized to pairs of interval maps that both belong to a class of piecewise monotonic transformations that generate expansions and that admit an invariant density with suitable ergodic properties. The first aim of this thesis is to review sufficient conditions for interval maps to belong to this class. To this end, we first recover the famous existence result of Lasota and Yorke for invariant densities of expanding piecewise monotonic interval maps. As an example of a nonexpanding piecewise monotonic interval map, we also consider the Liverani-Saussol-Vaienti (LSV) map and provide a new proof of the already known result that such a map admits an invariant probability density if and only if the corresponding parameter lies in (0, 1). Motivated by the practical use of beta encoders, one of the main goals of this thesis is to extend Lochs' Theorem to expansions generated by a class of random piecewise monotonic interval maps. We review sufficient conditions for random interval maps to belong to this class. For two random interval maps 𝑇 and 𝑆 in this class, we show that, if 𝑚 denotes the number of digits in the 𝑆-expansion that can be determined from 𝑛 digits in the 𝑇-expansion, then, roughly speaking, the fraction 𝑚/𝑛 converges Lebesgue almost surely to a ratio of two fiber entropies as 𝑛 → ∞.
As a second important goal, we prove that the skew product of an LSV map with parameter in (0, 1) and another LSV map with parameter in [1, ∞), with underlying Bernoulli shift, admits an invariant probability density.
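The classical (non-random) statement can be checked numerically. A number known to 𝑛 decimal digits lies in an interval of length 10^(−𝑛), and the continued fraction digits determined by that knowledge are the common leading partial quotients of the interval's endpoints. The following sketch is a toy experiment, not taken from the thesis; the helper names and parameters are invented for illustration.

```python
# Toy numerical check of Lochs' Theorem using exact rational arithmetic.
from fractions import Fraction
import math
import random

def cf_digits(x):
    """Exact regular continued fraction digits of a non-negative rational."""
    out = []
    while True:
        a = int(x)      # floor, since x >= 0
        out.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return out

def determined_ratio(n, rng):
    d = rng.randrange(10 ** (n - 1), 10 ** n)   # the n known decimal digits of x
    lo, hi = Fraction(d, 10 ** n), Fraction(d + 1, 10 ** n)
    # Continued fraction digits determined by n decimal digits = common
    # leading partial quotients of the two interval endpoints.
    matches = 0
    for u, v in zip(cf_digits(lo), cf_digits(hi)):
        if u != v:
            break
        matches += 1
    return (matches - 1) / n    # drop the shared integer part a_0 = 0

ratio = determined_ratio(400, random.Random(1))
# Lochs' constant: the entropy ln(10) of the decimal map divided by the
# entropy pi^2 / (6 ln 2) of the Gauss map.
lochs_constant = 6 * math.log(2) * math.log(10) / math.pi ** 2
print(ratio, lochs_constant)    # both close to 0.97
```

A single sample at 𝑛 = 400 already lands near the almost-sure limit, illustrating the ratio-of-entropies form of the theorem.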
Many investors use optimization as a tool to make investment decisions. An investor decides which proportion of his wealth to invest in which asset class, thus composing his investment portfolio. Given all asset classes available to the investor, the investor uses optimization to try to find the portfolio with the most favourable risk and return trade-off. Historical data and models are used to estimate the asset classes' risks and returns. A problem with this approach is that optimal portfolios are often sensitive to variation in the uncertain risk and return estimates, as shown in [18] and [23]. An alternative is to compute not only the optimal portfolio, but also the portfolios that are near-optimal, see [16]. This results in a near-optimal region of portfolios that can be shown to be more robust. However, the method for computing near-optimal regions is computationally intensive. In this thesis, we reduce the computation time needed to determine a near-optimal region, without losing significant accuracy. When a near-optimal region is known, an investor still needs to decide which (near-optimal) portfolio to invest in. Near-optimal region estimates can be objects of high affine dimension, which are difficult to grasp and navigate for practitioners. In this thesis, we show how polytope theory can be used to study the near-optimal region in more detail.
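The notion of a near-optimal region can be illustrated with a minimal two-asset sketch. All numbers below are invented for illustration, and the simple grid search stands in for the thesis's actual methods: with a mean-variance objective f(w) = μ·w − γ·wᵀΣw over portfolios (w, 1−w), the near-optimal region is every portfolio whose objective lies within δ of the optimum.

```python
# Toy two-asset near-optimal region (all parameter values are assumptions).
mu = [0.08, 0.05]                    # assumed expected returns
cov = [[0.04, 0.01], [0.01, 0.02]]   # assumed covariance matrix
gamma = 3.0                          # risk-aversion parameter
delta = 0.002                        # near-optimality tolerance

def objective(w):
    x = [w, 1 - w]
    ret = sum(m * xi for m, xi in zip(mu, x))
    var = sum(x[i] * cov[i][j] * x[j] for i in range(2) for j in range(2))
    return ret - gamma * var

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=objective)
near = [w for w in grid if objective(w) >= objective(best) - delta]
# Because the objective is concave, the near-optimal region is an interval
# of portfolios around the single optimal one.
print(best, min(near), max(near))
```

Even in this toy case the near-optimal region is visibly wide, which is the robustness point: many distinct portfolios are statistically indistinguishable in quality given uncertain estimates. In higher dimensions this region becomes a polytope-like object, motivating the polytope-theoretic analysis.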
The lattice packing problem in dimension 9 is a long-standing open problem. We consider a classical algorithm by Voronoi to solve the lattice packing problem in any dimension, by enumerating all perfect quadratic forms. We show an improved upper bound on the number of non-similar perfect forms based on a volumetric argument and lattice reduction theory. Furthermore, we consider the challenges that arise when enumerating perfect forms, with a special look at dimension 9.
In Computed Tomography, it is often standard to scan an object with the projection angles spread equidistantly over the full rotation of the object. If the number of projection angles is relatively small (e.g. because of radiation damage), the choice of the distribution of these angles is important. Especially if the object has certain main directions along which the most important features are aligned, the reconstruction can be improved by selecting angles in these directions. In this thesis, possibilities for improving the choice of projection angles are investigated. Three angle selection algorithms are discussed: a greedy algorithm, a coordinate descent algorithm and an adaptation of the ensemble Kalman filter algorithm. The algorithms try to find the minimum of a cost function, which is based on the L²-norm. The performance of these algorithms is shown on three computer-generated phantoms and one real-world image, generated by scanning a wooden tree stem in the FleX-Ray scanner at CWI. The results of choosing angles with the algorithms are compared with choosing equidistant projection angles. The results show that especially the coordinate descent method is capable of finding projection angles that lower the cost function. In real-life situations the true image is not available. Therefore, the possibility of using training data to estimate the cost function in that case is investigated. We assume that these training samples come from the same probability distribution as the true image. Averaging the cost function over these training samples then improves the choice of projection angles relative to equidistantly chosen angles.
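The coordinate descent idea, re-optimizing one projection angle at a time while the others are held fixed, can be sketched on a toy cost. The cost below is invented (it is not the thesis's L²-based reconstruction cost): the object is summarized by a few "main directions", and a set of angles is penalized by the squared angular distance from each main direction to its nearest chosen angle.

```python
# Toy coordinate descent over projection angles (invented cost function).
import math

MAIN_DIRECTIONS = [0.3, 1.1, 2.4]   # radians; assumed for illustration

def ang_dist(a, b):
    d = abs(a - b) % math.pi        # projection angles live modulo pi
    return min(d, math.pi - d)

def cost(angles):
    return sum(min(ang_dist(f, a) for a in angles) ** 2
               for f in MAIN_DIRECTIONS)

def coordinate_descent(k, grid_size=360, sweeps=5):
    grid = [i * math.pi / grid_size for i in range(grid_size)]
    angles = [i * math.pi / k for i in range(k)]   # start equidistant
    for _ in range(sweeps):
        for j in range(k):   # re-optimize one angle, keeping the others fixed
            angles[j] = min(grid,
                            key=lambda g: cost(angles[:j] + [g] + angles[j + 1:]))
    return angles

equidistant = [i * math.pi / 3 for i in range(3)]
optimized = coordinate_descent(3)
print(cost(equidistant), cost(optimized))  # descent never increases the cost
```

Starting from the standard equidistant scheme, the chosen angles migrate toward the main directions, which mirrors the qualitative finding that direction-aware angle selection outperforms equidistant acquisition.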
In this thesis we aim to do two things. In the first three sections we develop some arithmetic intersection theory in the style of Gillet–Soulé. When doing intersection theory, one uses Chow's moving lemma to move divisors to rationally equivalent ones so that they intersect properly. Over fields, the intersection numbers obtained this way by taking degrees depend only on the rational equivalence class of a divisor; in the case of Spec Z, however, the degree of a non-zero rational function is non-zero. This is remedied by considering, in addition to the intersection theory over Spec Z, an analogous theory on the complex points. Here we consider smooth hermitian line bundles and Green currents associated to divisors. For (Green) currents there is a ∗-product which satisfies properties analogous to the product in ordinary intersection theory. We have tried to present the results in a way that showcases the similarities: the results we use in arithmetic intersection theory boil down to similar statements holding for both the intersection product and the ∗-product. The other thing we are interested in is heights. In diophantine geometry, heights are used to control the number of rational points, for example in finiteness statements or in describing distributions of infinitely many points. First we use the arithmetic intersection theory from Section 3 to define a global height for arithmetic varieties. Next, in Section 4, we work with limits of models in the style of Zhang to accomplish a number of things. First, by considering p-adic norms, the treatment of the finite primes and the infinite prime becomes more similar.
Second, by considering limits of models we enlarge the norms and intersection numbers available to us; for example, metrics at infinity no longer have to be smooth. We define local heights for each prime p and show that these converge under some assumptions on the line bundles. We can decompose the global height as a sum of local heights, and the global height also converges under some assumptions. We also consider metrics associated to an algebraic dynamical system, i.e. a surjective morphism f : X → X of a smooth integral projective variety over Q such that f∗L ≅ L⊗d for some line bundle L and some d > 0. By a limit argument we obtain a metric on L that is invariant under f∗. In Section 5 we apply this when X is an abelian variety, f is multiplication by n > 1 and L is a symmetric line bundle. The height obtained from the invariant metric in this case is the Néron–Tate height, and we prove some of its elementary properties.
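The invariant metric and the associated canonical height arise from a Tate limit. In the standard formulation (stated here with the notation above and assuming d > 1 so that the limit converges; this is a textbook sketch, not a quotation from the thesis):

```latex
% Tate's limit construction of the canonical height attached to (X, f, L),
% where f^*L \cong L^{\otimes d} with d > 1:
\hat{h}_L(x) \;=\; \lim_{n \to \infty} \frac{h_L\bigl(f^n(x)\bigr)}{d^n},
\qquad x \in X(\overline{\mathbf{Q}}).
% \hat{h}_L is the unique function satisfying
%   \hat{h}_L = h_L + O(1)   and   \hat{h}_L \circ f = d \cdot \hat{h}_L.
% For an abelian variety with f = [n] and L symmetric one has d = n^2,
% and \hat{h}_L is the Néron–Tate height.
```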
Let G be a locally compact Hausdorff group and let ν be a fixed Haar measure on G. If µ is a bounded positive Radon measure on G and f ∈ L^p(G, ν), their convolution product µ ∗ f exists. Thus every bounded Radon measure acts on L^p(G, ν) as a convolution operator. It is easy to see that the action of a bounded Radon measure on L^p(G, ν) gives rise to an injective Banach algebra homomorphism from the Banach algebra of bounded Radon measures on G to the Banach algebra of regular operators from L^p(G, ν) to L^p(G, ν). We will investigate when this algebra homomorphism is also an (isometric) lattice isomorphism. For each a ∈ G we have a right translation operator ρ_a on L^p(G, ν). The regular commutant of the right translation operators is the set of regular operators T from L^p(G, ν) to L^p(G, ν) such that ρ_a T = T ρ_a for all a ∈ G. We will show that the regular commutant of the right translation operators is also a Banach lattice. The bounded Radon measures on G also form a Banach lattice. It is then investigated when the injective Banach algebra homomorphism is actually a lattice isomorphism from the bounded Radon measures on G onto the regular commutant of the right translation operators. This is the main result of this thesis. We start by showing that each positive regular operator in the regular commutant of the right translation operators is in fact a convolution operator associated to a positive Radon measure. For p = 1, we improve this result by showing that the Radon measure must be bounded, and we show that our algebra isomorphism is then an isometric lattice isomorphism. For p ∈ (1, ∞) the same results follow under the extra assumption that G is also amenable.
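For concreteness, one standard convention for the convolution action and the right translation operators is the following (with ν a left Haar measure; the thesis may use a different but equivalent convention):

```latex
% Convolution of a bounded positive Radon measure \mu with f \in L^p(G,\nu),
% and the right translation operator for a \in G:
(\mu * f)(x) = \int_G f(y^{-1}x)\, d\mu(y),
\qquad
(\rho_a f)(x) = f(xa).
% Since y^{-1}(xa) = (y^{-1}x)a, convolution commutes with every \rho_a:
%   \mu * (\rho_a f) = \rho_a(\mu * f),
% which is why every convolution operator lies in the regular commutant
% of the right translation operators.
```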
In this thesis, we consider two versions of a discounted single-server system with exponentially distributed arrivals and departures on a discrete time scale. In the first model, discussed in Chapter 1, we add a reward for every customer who has been served, and a fee for each time step a customer is in service. We also add arrival control. The model is described in more detail in Section 1.1. The goal is to find an optimal strategy that minimizes the expected total discounted cost (Section 1.2). To find this strategy, we use a direct, explicit algorithm called value iteration. The optimal strategy turns out to be a threshold strategy: a given number of customers is allowed to enter the system, and once this number of customers is present, any newly arriving customer is refused. Furthermore, we prove monotonicity properties of the model in Section 1.3. Lastly, the n-horizon cost function of value iteration is decomposed into a subsequent application of different operators (Section 1.4). In the second chapter, the model of Chapter 1 is extended with departure control. We add an extra, faster server, so we have a slower Server 1 and a faster Server 2, and we can choose at any transition moment which server to use. Using Server 2 involves extra costs per time unit. A detailed description of the model can be found in Section 2.1. In this model, we again want to find an optimal policy (Section 2.3), but now we first rewrite the explicit algorithm as a consecutive application of operators (Section 2.2). It turns out that we can split these operators into operators deciding whether or not an incoming customer should be accepted, and operators deciding which server to use.
This was a surprising result, as we did not know in advance that the two decisions can be made independently of each other. Using these operators, we find that the optimal policy is a two-dimensional threshold strategy. In Section 2.4, we give a theorem on the relationship between the two thresholds. Furthermore, we prove the convergence of the thresholds, and thus of the strategy, and of the n-horizon relative cost function, by using two initial functions such that one approaches the optimal value function from below and the other from above (Section 2.5). This shows that the choice of the starting functions affects the results, which was not known before. Finally, we prove that such functions exist and give a numerical example to demonstrate the results graphically (Section 2.6).
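The first model's value iteration and the resulting threshold strategy can be sketched on a small illustrative instance. This is a hedged stand-in, not the thesis model: an arbitrarily parameterized, uniformized single-server queue with arrival control, written as reward maximization (equivalent to cost minimization up to a sign); all parameter values are assumptions.

```python
# Illustrative value iteration for a discounted queue with arrival control.
lam, mu = 0.3, 0.5      # arrival / service probabilities per step (lam + mu <= 1)
beta = 0.95             # discount factor
R, c = 10.0, 1.0        # reward per served customer, holding fee per customer-step
N = 50                  # truncation of the state space (number in system)

def value_iteration(iters=2000):
    V = [0.0] * (N + 1)
    for _ in range(iters):
        W = [0.0] * (N + 1)
        for x in range(N + 1):
            accept = V[min(x + 1, N)]       # admit the arriving customer
            reject = V[x]                   # refuse the arriving customer
            arrival = max(accept, reject)   # arrival control: best of the two
            departure = V[x - 1] + R if x > 0 else V[0]
            W[x] = -c * x + beta * (lam * arrival + mu * departure
                                    + (1 - lam - mu) * V[x])
        V = W
    return V

V = value_iteration()
policy = [V[min(x + 1, N)] >= V[x] for x in range(N)]  # True = accept in state x
threshold = sum(policy)
# The accept region is an initial segment of states: a threshold strategy.
print("accept while fewer than", threshold, "customers are present")
```

The computed accept/reject decisions form a monotone sequence of states, recovering the threshold structure that the thesis proves via monotonicity properties of the value iteration operators.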
Determining an optimal strategy for a multi-class server with impatient customers can be difficult. There are heuristics, such as the Whittle index, that perform well under restrictive conditions. We discuss a few of these heuristics and their merits and drawbacks. To overcome these drawbacks, we experiment with a data-based approach: generating data using value iteration and predicting a strategy with a neural network. Furthermore, we explore the effect of different parameters on the performance of the neural network for a particular case. Moreover, we look into more extensive methods, such as combining the predictions of multiple neural networks and using neural networks to choose the training data.
Two areas of functional analysis which have been the subject of extensive study are operator theory and ordered vector spaces. While these have developed into flourishing independent areas, each with their own experts, it is the author's impression that the theories have a bit more in common than the literature might suggest. The goal of this thesis is therefore to exhibit connections between the general theories of C*-algebras and ordered vector spaces. Coming into this project, I was already familiar with the theory of C*-algebras, but I had little experience with ordered vector spaces. Therefore most of the theory is motivated from the C*-algebra point of view. The thesis is built up in two parts. In the first part, we take ideas and concepts from the theory of C*-algebras to ordered vector spaces, leading to concepts such as order ideals, order semisimplicity, and order unitisations. Conversely, in the second part we ask questions about the order structure of C*-algebras, with an emphasis on the lattice-like structure of a C*-algebra. Detailed outlines of each of these parts are given below. This thesis is written with a reader in mind having roughly the same background as I had coming into this project. As such, we assume familiarity with graduate-level functional analysis and operator theory, as for instance provided by the Dutch MasterMath courses Functional Analysis and Operator Algebras. Concretely, the first part relies heavily on notions such as topological vector spaces, locally convex spaces, the Hahn–Banach theorems, weak and weak-∗ topologies, and the Krein–Milman theorem. Most of the second part only requires familiarity with the basic theory of bounded linear operators on a Hilbert space, and the notion of a C*-algebra.
Only in Chapter 8 do we use slightly more advanced concepts from the theory of C*-algebras, such as irreducible representations, the strong operator topology, and Kaplansky's density theorem. Note that prior knowledge of partially ordered vector spaces is not assumed. The reason for this is purely circumstantial: no courses on this subject are taught at the author's university (or elsewhere in the Netherlands) at the time of writing. We cannot aim to give a full overview of the theory; the interested reader is encouraged to consult [AT07].