In this thesis we consider possible chaotic behavior of the stationary solutions of a coupled system of two partial differential equations. One of these PDE’s is closely related to the complex Ginzburg-Landau equation; the other is a diffusion equation. First, some background and applications of this system are given. After rescaling and some simplifications, we uncouple the system and look at the solution structure of the separate parts. The part related to the Ginzburg-Landau equation contains, for a certain choice of coefficients, a homoclinic orbit. Then we consider the coupled system and analyze what happens to this homoclinic orbit. To do so, we recall Melnikov theory, which is used to calculate the break-up of the homoclinic orbit. If the Melnikov function has a zero at which its derivative is nonzero, there is a transverse homoclinic orbit. The existence of a transverse homoclinic orbit gives rise to chaotic behavior of the dynamical system; the theoretical background of this is described in detail. Finally, by applying Melnikov theory to our system, we establish the possibility of a transverse homoclinic orbit and hence the possibility of chaos.
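For orientation, in the standard planar setting (a sketch only; the thesis treats a specific coupled system, whose perturbation need not take exactly this time-periodic form) the Melnikov function measures the leading-order splitting of the stable and unstable manifolds:

```latex
% Unperturbed system \dot{x} = f(x) with homoclinic orbit q_0(t),
% perturbed to \dot{x} = f(x) + \varepsilon g(x,t), g time-periodic.
M(t_0) \;=\; \int_{-\infty}^{\infty}
  f\bigl(q_0(t)\bigr) \wedge g\bigl(q_0(t),\, t + t_0\bigr)\, dt,
\qquad u \wedge v := u_1 v_2 - u_2 v_1.
% A simple zero of M (M(t_0) = 0, M'(t_0) \neq 0) yields a transverse
% homoclinic point, hence a Smale horseshoe and chaotic dynamics.
```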
In this thesis I’ll discuss Conditional Independencies of Joint Probability Distributions (hereafter called CI’s and JPD’s, respectively) over a finite set of discrete random variables. Remember that for any such JPD we can write down a list of all CI’s between two subsets of variables given a third. Such a list is called a CI-trace. An arbitrary list of CI’s is called a CI-pattern, without knowing a priori whether there exists a corresponding JPD with this CI-pattern. For simplicity and without loss of generality we take all JPD’s over n + 1 variables and label them by the integers 0, 1, . . . , n. A CI-trace now becomes a set of triples consisting of subsets of [n], the random variables (with [n] I denote the set {0, 1, . . . , n}). For example, (A, B, C) with A, B, C ⊂ [n] is such a triple; it can also be denoted as A⊥B|C, which means that the random variables of A are independent of the random variables of B given any outcome on the random variables of C. It was believed that CI-traces could be characterised by some finite set of rules, called Conditional Independence rules, or CI-rules. Such a CI-rule would state that if a CI-trace contains a certain pattern of triples, it should also contain a certain other triple. Furthermore, such a pattern of a CI-rule should itself be finite: it should consist of k CI’s, called the antecedents, that validate a (k + 1)-th CI, called the consequent. The order of a CI-rule is the number k of its antecedents. This idea would imply that the set of all CI-traces is equal to the set of all CI-patterns closed under the CI-rules. In 1992 Milan Studený wrote an article on this subject called Conditional Independence Relations have no finite complete characterisation. He proved that such a characterisation is not possible.
Now the main goal of my thesis was to understand this article and to work out a readable version of the theorem and its proof. The proof is based on two major parts: first, the existence of a particular JPD and its CI-pattern on n + 1 variables, and second, a proposition about CI-patterns based on entropies. The remainder of my thesis contains sections on these two parts, Studený’s theorem, and a short summary of the changes I made.
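As an illustration of rules of this kind (a standard aside, not taken from the thesis itself): the well-known semi-graphoid axioms are CI-rules of order one or two that hold in every CI-trace; Studený’s theorem says that no finite list of rules of this shape can be complete:

```latex
% Semi-graphoid rules, valid in every CI-trace
% (A, B, C, D pairwise disjoint subsets of [n]):
\text{symmetry:}      &\quad A \perp B \mid C \;\Rightarrow\; B \perp A \mid C \\
\text{decomposition:} &\quad A \perp B \cup D \mid C \;\Rightarrow\; A \perp B \mid C \\
\text{weak union:}    &\quad A \perp B \cup D \mid C \;\Rightarrow\; A \perp B \mid C \cup D \\
\text{contraction:}   &\quad A \perp B \mid C \cup D \;\wedge\; A \perp D \mid C
                       \;\Rightarrow\; A \perp B \cup D \mid C
```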
This Bachelor’s thesis is about the complexity of the multichain classification problem. The problem is to detect whether a given Markov decision process is unichain or multichain. A Markov decision process is unichain if the corresponding Markov chain contains only one recurrent class (and a possibly empty set of transient states) for every strategy; otherwise it is multichain. We first show that the general case is NP-complete. A polynomial algorithm is given for Markov decision processes that contain either a state which is recurrent for all strategies or a state which is absorbing under some strategy. The deterministic case is considered to be polynomial, but we only give an outline of the algorithm; we do provide a complete polynomial algorithm to reduce the problem for deterministic Markov decision processes. Finally, we discuss some other polynomial algorithms, including an algorithm that reduces the multichain classification problem for a general Markov decision process in polynomial time to a multichain classification problem for a communicating Markov decision process (one in which, for every pair of states i, j, there is a strategy such that i is reachable from j in the corresponding Markov chain).
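For a single fixed strategy the recurrent classes are easy to compute: they are the closed strongly connected components of the transition graph. The hardness of the classification problem comes from quantifying over all strategies. A minimal sketch, with a hypothetical `recurrent_classes` helper over a successor-list representation of one fixed chain:

```python
def recurrent_classes(succ):
    """Recurrent classes of a finite Markov chain with states 0..n-1.
    succ[i] lists the states reachable from i in one step with positive
    probability.  A recurrent class is a strongly connected component
    that no edge leaves (found here with Kosaraju's algorithm)."""
    n = len(succ)
    visited, order = [False] * n, []

    def dfs(root):                       # iterative DFS, records finish order
        stack = [(root, iter(succ[root]))]
        visited[root] = True
        while stack:
            v, it = stack[-1]
            for w in it:
                if not visited[w]:
                    visited[w] = True
                    stack.append((w, iter(succ[w])))
                    break
            else:                        # all successors done: v is finished
                order.append(v)
                stack.pop()

    for u in range(n):
        if not visited[u]:
            dfs(u)

    rev = [[] for _ in range(n)]         # reversed graph
    for u in range(n):
        for v in succ[u]:
            rev[v].append(u)

    comp, c = [-1] * n, 0                # label SCCs in reverse finish order
    for u in reversed(order):
        if comp[u] == -1:
            stack, comp[u] = [u], c
            while stack:
                v = stack.pop()
                for w in rev[v]:
                    if comp[w] == -1:
                        comp[w] = c
                        stack.append(w)
            c += 1

    closed = [True] * c                  # a class is recurrent iff it is closed
    for u in range(n):
        for v in succ[u]:
            if comp[u] != comp[v]:
                closed[comp[u]] = False
    return [[u for u in range(n) if comp[u] == k] for k in range(c) if closed[k]]
```

For instance, the chain with succ = [[0, 1], [1, 2], [2]] has the single recurrent class {2} (unichain behavior for this strategy), while succ = [[1], [0], [2]] has two recurrent classes.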
In the 1940s, Mac Lane and Eilenberg introduced categories. Although referred to by some as abstract nonsense, the idea of categories allows one to talk about mathematical objects and their relations in a general setting. Its origins lie in the field of algebraic topology, one of the topics that will be explored in this thesis. First, a concise introduction to categories will be given. Then, a few examples of categories will be presented. After this, two specific categories will be singled out and treated in more detail, namely the category of π-sets and the category of covering spaces of a space X (satisfying certain conditions), with π the fundamental group of X. The main theorem that will be proved is that these two categories are “equivalent”. This means that we can translate problems from one category, in this case the category of covering spaces, to problems in the category of π-sets. In certain instances this proves fruitful, as certain problems are more easily solved algebraically than topologically. As an application, a slightly weaker form of the famous Seifert-van Kampen theorem will be proved using the equivalence of categories.
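The notion of “equivalent” meant here is the standard one (a reminder, not specific to this thesis): functors in both directions whose composites are naturally isomorphic to the identity functors:

```latex
% Categories \mathcal{C} and \mathcal{D} are equivalent if there exist
% functors F : \mathcal{C} \to \mathcal{D} and G : \mathcal{D} \to \mathcal{C}
% with natural isomorphisms
G \circ F \;\cong\; \mathrm{id}_{\mathcal{C}},
\qquad
F \circ G \;\cong\; \mathrm{id}_{\mathcal{D}}.
% Equivalently: F is full, faithful and essentially surjective.
```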
We show how the theory of multiplier ideals can be developed and discuss several applications of this theory; in the second section the same theory is developed in the analytic setting and several applications are given. Let X be a smooth algebraic variety and D an effective Q-divisor. We associate to D (or to the pair (X, D)) an ideal sheaf I(D) which controls the behavior of the fractional part of D and determines how close it is to having simple normal crossing support. Other applications are treated, such as singularities of projective hypersurfaces and the characterization of divisors. In the former case a result of Esnault-Viehweg, concerning the least degree of hypersurfaces with multiplicity greater than or equal to a given positive integer at each point of a finite set, is explained and proved in two different ways; a slight generalization is also given. Several vanishing and non-vanishing results, including a global generation theorem, are treated and used to prove the results about singularities. In the second section the analytic analogues of the material in section one are given, and the characterization of analytic nef and good divisors is explained.
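For orientation, the usual definition of the ideal sheaf I(D) goes through a log resolution (this is the standard algebraic construction; the thesis may set it up slightly differently):

```latex
% Let \mu : Y \to X be a log resolution of the pair (X, D),
% with relative canonical divisor K_{Y/X}.  Then
\mathcal{I}(D) \;=\; \mu_*\,\mathcal{O}_Y\!\bigl(K_{Y/X} - \lfloor \mu^* D \rfloor\bigr).
% The round-down \lfloor\,\cdot\,\rfloor is exactly why the fractional
% part of D matters: \mathcal{I}(D) = \mathcal{O}_X precisely when the
% singularities of (X, D) are mild (Kawamata log terminal).
```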
In this Bachelor Thesis, we will explain a calculus named Schubert Calculus. Schubert Calculus was invented by Hermann Cäsar Hannibal Schubert around the end of the nineteenth century. This calculus allowed Schubert and his successors to solve many enumerative problems in geometry, although they didn’t have rigorous proofs of the rules in this calculus. This is the reason why Hilbert’s 15th problem concerns this calculus, and nowadays most of the rules in this calculus have finally been formalized (through topology and intersection theory). The main purpose of this Bachelor Thesis is to explain the rules of Schubert Calculus and to solve some enumerative problems. The first chapter introduces the Grassmann variety (mainly from [KL]), and the second chapter gives some basic facts about the cohomology ring of this Grassmann variety (mainly based on [KL], [FU] and [ST]). In the third and fifth chapters we develop the calculus in this cohomology ring (mainly from [KL] and [ST]). The fourth chapter shows the power of Schubert Calculus by solving several enumerative problems (many of which are new). I have decided not to include complete proofs of the formulae from the second chapter, since the complete proofs I know are very technical (although we will give a sketch). Proofs can be found, for example, in [GH] (although it contains some errors), [FU] (as exercises) and [HP] (but this is hard to read). For more details and proofs for Chapter Five, I suggest reading [FU]. I have also decided not to include (part of) the theory of Schubert polynomials and varieties, which is a current area of research, since a detailed introduction can be found in [FU].
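A classic example of the kind of enumerative computation meant here (a standard textbook calculation, not necessarily one of the thesis’s own problems): how many lines in P^3 meet four general lines? In the cohomology ring of the Grassmannian of lines one computes, using the multiplication rules for Schubert classes:

```latex
% In H^*(G(2,4)): \sigma_1^2 = \sigma_2 + \sigma_{1,1}, and
% \sigma_2^2 = \sigma_{1,1}^2 = \sigma_{2,2}, \quad \sigma_2 \cdot \sigma_{1,1} = 0.
\sigma_1^4 \;=\; (\sigma_2 + \sigma_{1,1})^2
          \;=\; \sigma_{2,2} + 0 + \sigma_{2,2}
          \;=\; 2\,\sigma_{2,2},
% so exactly 2 lines in \mathbb{P}^3 meet four general lines.
```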
This treatise is on simple random walk, and on the way it gives rise to Brownian motion. It was written as my bachelor project, and it was written in such a way that it should serve as a good introduction to the subject for students with as much knowledge as I had when I began working on it, that is: a basic probability course and a little bit of measure theory. To that end, the following track is followed. In section 1, the simple random walk is defined. In section 2, the first major limit property is studied: whether the walk is recurrent or not; some calculus and the discrete Fourier transform are required to prove the result. In section 3, a second limit property is studied: its range, i.e. the number of visited sites. In the full proof of the results, the notions of strong and weak convergence present themselves, as does the notion of tail events. To understand these problems more precisely, and as a necessary preparation for Brownian motion, some measure-theoretic foundations are treated in section 4; emphasis is put not on the formal derivation of the results, but on the right notion of them in our context. In section 5, Brownian motion is studied: first, in what manner simple random walk gives rise to it, and secondly its formal definition. Special care is devoted to explaining the exact steps that are needed for its construction, for that is something which I found rather difficult to understand from the texts I read on it.
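The passage from walk to Brownian motion can be sketched numerically (a minimal illustration with hypothetical helper names; the treatise does this rigorously via a Donsker-type scaling limit): run the walk for n steps, then rescale by n in time and by √n in space:

```python
import random

def random_walk(n, seed=0):
    """Simple symmetric random walk: S_0 = 0 and steps of +1 or -1,
    each with probability 1/2.  Returns the path [S_0, ..., S_n]."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n):
        s += rng.choice((-1, 1))
        path.append(s)
    return path

def rescaled(path, t):
    """Donsker rescaling W_n(t) = S_{floor(nt)} / sqrt(n) for 0 <= t <= 1.
    As n grows, these rescaled paths converge in distribution to
    Brownian motion on [0, 1]."""
    n = len(path) - 1
    return path[int(n * t)] / n ** 0.5
```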
Economic data collected by Statistics Netherlands usually contain missing items. Various imputation methods are available to fill in these gaps, so that completed datasets can be analyzed using standard statistical tools. One of the methods often used, the ratio imputation method, appears not to perform very well if we want the completed data to satisfy certain restrictions. This is our motivation to investigate other imputation methods. We look at several methods that we subdivide into two groups. The first group consists of methods based on models that assume a joint distribution for all variables of an individual, with all these variables independent. Here we discuss methods that assume the data are truncated normally distributed or exponentially distributed. We propose the proportional variance method and investigate various possible underlying models. The second group is made up of methods that only specify certain conditional distributions. Here we investigate the commonly used ratio imputation method and both the classical and the Bayesian variants of sequential regression imputation methods. After we have discussed these methods, we repeatedly apply them to a dataset provided by Statistics Netherlands in which we create a missingness pattern ourselves. We use the results of these simulations to assess the performance of the methods on several criteria.
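For concreteness, classical ratio imputation fills a missing target value with R times an auxiliary variable, where R is the ratio of totals over the complete records (a generic sketch with hypothetical names, not Statistics Netherlands’ actual implementation):

```python
def ratio_impute(x, y):
    """Ratio imputation: each missing y[i] (marked None) is replaced by
    R * x[i], where R = (sum of observed y) / (sum of the corresponding x).
    x is a fully observed auxiliary variable for the same records."""
    observed = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    ratio = sum(yi for _, yi in observed) / sum(xi for xi, _ in observed)
    return [yi if yi is not None else ratio * xi for xi, yi in zip(x, y)]
```

Note that every imputed value lies exactly on the line y = R·x, which is one reason the completed data can fail restrictions that the true values satisfy.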