Topic: Mathematics (Page 8)

You are looking at all articles with the topic "Mathematics". We found 223 matches.


🔗 Circle Packing

🔗 Mathematics

In geometry, circle packing is the study of the arrangement of circles (of equal or varying sizes) on a given surface such that no overlapping occurs and so that no circle can be enlarged without creating an overlap. The associated packing density, η, of an arrangement is the proportion of the surface covered by the circles. Generalisations can be made to higher dimensions – this is called sphere packing, which usually deals only with identical spheres.

While the circle has a relatively low maximum packing density of 0.9069 on the Euclidean plane, it does not have the lowest possible, even among centrally symmetric convex shapes. The "worst" such shape to pack onto a plane is not known, but the smoothed octagon has a packing density of about 0.902414, the lowest maximum packing density known for any centrally symmetric convex shape. Packing densities of concave shapes such as star polygons can be arbitrarily small.
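For reference, the 0.9069 figure is the density of the hexagonal arrangement, in which each circle of radius r occupies a regular hexagonal cell of area 2√3·r²; a one-line derivation:

```latex
\eta_{\mathrm{hex}} = \frac{\pi r^{2}}{2\sqrt{3}\, r^{2}}
                    = \frac{\pi}{\sqrt{12}} \approx 0.9069
```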

The branch of mathematics generally known as "circle packing" is concerned with the geometry and combinatorics of packings of arbitrarily-sized circles: these give rise to discrete analogs of conformal mapping, Riemann surfaces and the like.


🔗 Brouwer–Hilbert controversy

🔗 Mathematics

In a foundational controversy in twentieth-century mathematics, L. E. J. Brouwer, a supporter of intuitionism, opposed David Hilbert, the founder of formalism.


🔗 Gabriel's Horn

🔗 Mathematics

Gabriel's horn (also called Torricelli's trumpet) is a geometric figure which has infinite surface area but finite volume. The name refers to the Abrahamic tradition identifying the archangel Gabriel as the angel who blows the horn to announce Judgment Day, associating the divine, or infinite, with the finite. The properties of this figure were first studied by Italian physicist and mathematician Evangelista Torricelli in the 17th century.
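The standard construction rotates the curve y = 1/x, for x ≥ 1, about the x-axis. Both properties then follow from two elementary integrals:

```latex
V = \pi \int_{1}^{\infty} \frac{dx}{x^{2}} = \pi ,
\qquad
A = 2\pi \int_{1}^{\infty} \frac{1}{x}\sqrt{1 + \frac{1}{x^{4}}}\, dx
  \;\ge\; 2\pi \int_{1}^{\infty} \frac{dx}{x} = \infty .
```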


🔗 Gödel's Completeness Theorem

🔗 Mathematics

Gödel's completeness theorem is a fundamental theorem in mathematical logic that establishes a correspondence between semantic truth and syntactic provability in first-order logic. It makes a close link between model theory, which deals with what is true in different models, and proof theory, which studies what can be formally proven in particular formal systems.
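In symbols, for a first-order theory T and sentence φ, semantic consequence and syntactic derivability coincide; the left-to-right direction is completeness proper, and the converse is soundness:

```latex
T \models \varphi \quad\Longleftrightarrow\quad T \vdash \varphi
```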

It was first proved by Kurt Gödel in 1929. The proof was simplified in 1947, when Leon Henkin observed in his Ph.D. thesis that the hard part of the proof can be presented as the Model Existence Theorem (published in 1949). Henkin's proof was simplified by Gisbert Hasenjaeger in 1953.


🔗 Curse of dimensionality

🔗 Computing 🔗 Mathematics 🔗 Statistics 🔗 Cognitive science

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming.

Cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. This sparsity is problematic for any method that requires statistical significance. In order to obtain a statistically sound and reliable result, the amount of data needed to support the result often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.
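A quick way to see the "sparse and dissimilar" effect is the classic distance-concentration experiment. The following minimal NumPy sketch (an illustration, not from the article) measures how the relative contrast between the nearest and farthest neighbor of a random query point collapses as the dimension grows:

```python
# Sketch: distance concentration in high dimensions. For points drawn
# uniformly in the unit hypercube, the gap between the nearest and farthest
# neighbor shrinks relative to the distances themselves as dimension grows.
import numpy as np

rng = np.random.default_rng(0)

for d in (2, 10, 100, 1000):
    points = rng.random((500, d))          # 500 random points in [0, 1]^d
    query = rng.random(d)                  # one random query point
    dists = np.linalg.norm(points - query, axis=1)
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"d={d:5d}  relative contrast={contrast:.3f}")
```

As d increases the printed contrast drops toward zero, which is exactly why nearest-neighbor-style methods lose discriminating power in high dimensions.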


🔗 History of the Monte Carlo method

🔗 Computing 🔗 Computer science 🔗 Mathematics 🔗 Physics 🔗 Statistics

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
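A canonical toy illustration of the idea, not taken from the article, estimates π by repeated random sampling:

```python
# Minimal Monte Carlo sketch: estimate pi by sampling points uniformly in
# the unit square and counting how many land inside the quarter disc.
import random

def estimate_pi(n_samples: int = 1_000_000) -> float:
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter disc has area pi/4, so the hit fraction estimates pi/4.
    return 4.0 * inside / n_samples

print(estimate_pi())  # approaches 3.14159... as n_samples grows
```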

In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).

Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.

In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
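As a concrete instance of the MCMC idea sketched above, here is a minimal random-walk Metropolis sampler; the target density (a standard normal, known only up to a constant) and the tuning parameters are illustrative assumptions, not from the article:

```python
# Sketch of a random-walk Metropolis sampler, one classic MCMC scheme.
# The chain's stationary distribution is the target density, here an
# unnormalized standard normal.
import math
import random

def log_target(x: float) -> float:
    return -0.5 * x * x            # log of exp(-x^2/2), up to a constant

def metropolis(n_steps: int = 100_000, step: float = 1.0) -> list[float]:
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step, step)
        # Accept with probability min(1, target(proposal) / target(x)).
        ratio = math.exp(log_target(proposal) - log_target(x))
        if random.random() < ratio:
            x = proposal
        samples.append(x)
    return samples

chain = metropolis()
print(sum(chain) / len(chain))     # sample mean, approximately 0
```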

In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances we are given a flow of probability distributions with an increasing level of sampling complexity (path-space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain.

A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.

Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
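Because the samples are independent, the work parallelizes with essentially no coordination. A minimal sketch using Python's standard library, with batch sizes and worker counts chosen purely for illustration:

```python
# Sketch: exploiting the embarrassingly parallel structure. Independent
# batches of samples are drawn in separate processes and their estimates
# averaged.
import random
from concurrent.futures import ProcessPoolExecutor

def batch_estimate(args):
    seed, n = args
    rng = random.Random(seed)                  # independent stream per batch
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(n))
    return 4.0 * inside / n                    # per-batch estimate of pi

if __name__ == "__main__":
    jobs = [(seed, 250_000) for seed in range(8)]   # 8 independent batches
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(batch_estimate, jobs))
    print(sum(estimates) / len(estimates))          # pooled estimate of pi
```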


🔗 Chaitin's Constant

🔗 Mathematics

In the computer science subfield of algorithmic information theory, a Chaitin constant (Chaitin omega number) or halting probability is a real number that, informally speaking, represents the probability that a randomly constructed program will halt. These numbers are formed from a construction due to Gregory Chaitin.

Although there are infinitely many halting probabilities, one for each method of encoding programs, it is common to use the letter Ω to refer to them as if there were only one. Because Ω depends on the program encoding used, it is sometimes called Chaitin's construction instead of Chaitin's constant when not referring to any specific encoding.
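For a fixed prefix-free universal machine F, the halting probability is the sum, over every program p on which F halts, of 2 raised to minus the program's length |p| in bits; prefix-freeness ensures the sum converges, by Kraft's inequality:

```latex
\Omega_{F} = \sum_{p \,\in\, \operatorname{dom}(F)} 2^{-|p|}
```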

Each halting probability is a normal and transcendental real number that is not computable, which means that there is no algorithm to compute its digits. Indeed, each halting probability is Martin-Löf random, meaning there is not even any algorithm which can reliably guess its digits.


🔗 Erdős–Bacon number

🔗 Mathematics

A person's Erdős–Bacon number is the sum of one's Erdős number—which measures the "collaborative distance" in authoring academic papers between that person and Hungarian mathematician Paul Erdős—and one's Bacon number—which represents the number of links, through roles in films, by which the person is separated from American actor Kevin Bacon. The lower the number, the closer a person is to Erdős and Bacon, which reflects a small world phenomenon in academia and entertainment.

To have a defined Erdős–Bacon number, it is necessary to have both appeared in a film and co-authored an academic paper. This alone is not sufficient, however: the co-authorship chain must actually reach Erdős and the film chain must actually reach Bacon, so both component numbers must be finite.


🔗 Finite Element Method

🔗 Mathematics 🔗 Physics

The finite element method (FEM) is the most widely used method for solving problems in engineering and mathematical modelling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The FEM is a particular numerical method for solving partial differential equations in two or three space variables (i.e., some boundary value problems).

To solve a problem, the FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular space discretisation, implemented by the construction of a mesh of the object: the numerical domain for the solution, which has a finite number of points. The finite element formulation of a boundary value problem finally results in a system of algebraic equations: the method approximates the unknown function over the domain, and the simple equations that model the individual finite elements are assembled into a larger system of equations that models the entire problem. The FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function.

Studying or analyzing a phenomenon with FEM is often referred to as finite element analysis (FEA).
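To make the assembly idea concrete, here is a minimal 1-D sketch, not taken from the article: piecewise-linear elements on a uniform mesh for −u″ = f on (0, 1) with homogeneous Dirichlet boundary conditions, assembled into a tridiagonal system and solved directly. The mesh size and right-hand side are illustrative choices.

```python
# Minimal 1-D finite element sketch for -u'' = f on (0, 1), u(0) = u(1) = 0,
# using piecewise-linear ("hat") elements on a uniform mesh.
import numpy as np

n = 50                       # number of elements
h = 1.0 / n                  # element size
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)   # manufactured right-hand side
nodes = np.linspace(0.0, 1.0, n + 1)

# Assemble stiffness matrix and load vector over interior nodes only;
# each element contributes the local matrix (1/h) * [[1, -1], [-1, 1]].
K = np.zeros((n - 1, n - 1))
F = np.zeros(n - 1)
for e in range(n):
    for a in range(2):                 # local-to-global assembly
        i = e + a - 1                  # interior index (boundary rows dropped)
        if not 0 <= i < n - 1:
            continue
        F[i] += f(nodes[e] + h / 2) * h / 2      # midpoint-rule load
        for b in range(2):
            j = e + b - 1
            if 0 <= j < n - 1:
                K[i, j] += (1.0 if a == b else -1.0) / h

u = np.linalg.solve(K, F)              # nodal values at interior nodes
exact = np.sin(np.pi * nodes[1:-1])    # exact solution is sin(pi x)
print(np.max(np.abs(u - exact)))       # small, shrinking as n grows
```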


🔗 Von Neumann Universal Constructor

🔗 Mathematics

John von Neumann's universal constructor is a self-replicating machine in a cellular automaton (CA) environment. It was designed in the 1940s, without the use of a computer. The fundamental details of the machine were published in von Neumann's book Theory of Self-Reproducing Automata, completed in 1966 by Arthur W. Burks after von Neumann's death.

Von Neumann's goal was to specify an abstract machine which, when run, would replicate itself. In his design, the machine consists of three parts: a 'blueprint' for itself, a mechanism that can read any blueprint and construct the machine (sans blueprint) specified by that blueprint, and a 'copy machine' that can make copies of any blueprint. After the mechanism has been used to construct the machine specified by the blueprint, the copy machine is used to create a copy of that blueprint, and this copy is placed into the new machine, resulting in a working replication of the original machine. The two steps can also be performed in the opposite order: the blueprint is copied first, and the new machine is constructed afterwards.

To define his machine in more detail, von Neumann invented the concept of a cellular automaton. The one he used consists of a two-dimensional grid of cells, each of which can be in one of 29 states at any point in time. At each timestep, each cell updates its state depending on the states of the surrounding cells at the prior timestep. The rules governing these updates are identical for all cells.
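The synchronous update structure is easy to express in code. The sketch below is an illustration with a made-up two-state rule, not von Neumann's 29-state transition table; what it shows is the shape of the computation: every cell is rewritten simultaneously as a function of its von Neumann neighborhood at the previous timestep.

```python
# Sketch of a synchronous 2-D cellular automaton step. As in von Neumann's
# design, each cell's next state depends only on its own state and its four
# orthogonal neighbors at the previous step. The rule here is a toy
# two-state rule, NOT the 29-state original.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One synchronous update; all cells change together (periodic edges)."""
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    neighbors = up + down + left + right
    # Toy rule: a cell turns on iff exactly one neighbor is on.
    return (neighbors == 1).astype(grid.dtype)

grid = np.zeros((9, 9), dtype=np.uint8)
grid[4, 4] = 1                      # single seed cell
for _ in range(3):
    grid = step(grid)
print(grid)
```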

The universal constructor is a certain pattern of cell states in this cellular automaton. It contains one line of cells that serves as a 'tape', encoding a sequence of instructions that serve as a 'blueprint' for the machine. The machine reads these instructions one by one and performs the corresponding actions. The instructions direct the machine to use its 'construction arm' to build a copy of the machine, without tape, at some other location in the cell grid. The tape cannot contain the instructions to build an equally long tape, just as a container cannot contain a container of the same size; therefore, the machine includes a separate 'copy machine' that reads the tape and places a copy into the newly constructed machine. The resulting new machine and tape are identical to the old ones, and the new machine proceeds to replicate again.
