Topic: Mathematics (Page 15)

You are looking at all articles with the topic "Mathematics". We found 223 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 The Birthday Paradox

🔗 Mathematics 🔗 Statistics

In probability theory, the birthday problem or birthday paradox concerns the probability that, in a set of n randomly chosen people, some pair of them will have the same birthday. By the pigeonhole principle, the probability reaches 100% when the number of people reaches 367 (since there are only 366 possible birthdays, including February 29). However, 99.9% probability is reached with just 70 people, and 50% probability with 23 people. These conclusions are based on the assumption that each day of the year (excluding February 29) is equally probable for a birthday.
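Under that equal-likelihood assumption, the quoted figures follow from the product form of the no-collision probability; a minimal sketch in Python (the function name is chosen just for illustration):

```python
import math

def p_shared_birthday(n: int, days: int = 365) -> float:
    # P(at least two of n people share a birthday), assuming `days`
    # equally likely birthdays: 1 minus the probability all are distinct.
    p_all_distinct = math.prod((days - i) / days for i in range(n))
    return 1 - p_all_distinct

print(round(p_shared_birthday(23), 4))  # 0.5073 -> just over 50%
print(round(p_shared_birthday(70), 4))  # 0.9992 -> about 99.9%
```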

Actual birth records show that different numbers of people are born on different days. In this case, it can be shown that the number of people required to reach the 50% threshold is 23 or fewer. For example, if half the people were born on one day and the other half on another day, then any two people would have a 50% chance of sharing a birthday.

It may well seem surprising that a group of just 23 individuals is required to reach a probability of 50% that at least two individuals in the group have the same birthday: this result is perhaps made more plausible by considering that the comparisons of birthdays will actually be made between every possible pair of individuals, that is, 23 × 22/2 = 253 comparisons, which is well over half the number of days in a year (183 at most), as opposed to fixing on one individual and comparing his or her birthday to everyone else's. The birthday problem is not a "paradox" in the literal logical sense of being self-contradictory, but is merely unintuitive at first glance.

Real-world applications for the birthday problem include a cryptographic attack called the birthday attack, which uses this probabilistic model to reduce the complexity of finding a collision for a hash function, as well as calculating the approximate risk of a hash collision existing within the hashes of a population of a given size.

The history of the problem is obscure. W. W. Rouse Ball indicated (without citation) that it was first discussed by Harold Davenport. However, Richard von Mises proposed an earlier version of what is considered today to be the birthday problem.

Discussed on

🔗 Fractional Fourier transform

🔗 Mathematics

In mathematics, in the area of harmonic analysis, the fractional Fourier transform (FRFT) is a family of linear transformations generalizing the Fourier transform. It can be thought of as the Fourier transform to the n-th power, where n need not be an integer — thus, it can transform a function to any intermediate domain between time and frequency. Its applications range from filter design and signal analysis to phase retrieval and pattern recognition.
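To make the "Fourier transform to the n-th power" idea concrete, one common discrete analogue takes a fractional matrix power of the unitary DFT matrix; the sketch below (assuming NumPy and SciPy) is only one of several inequivalent discretizations of the FRFT discussed in the literature:

```python
import numpy as np
from scipy.linalg import dft, fractional_matrix_power

def discrete_frft(x: np.ndarray, a: float) -> np.ndarray:
    # Illustrative discrete FRFT of order a: raise the unitary DFT matrix
    # to the a-th power, so a = 1 is the ordinary DFT and a = 0 the identity.
    F = dft(len(x), scale="sqrtn")
    return fractional_matrix_power(F, a) @ x

x = np.random.rand(8)
assert np.allclose(discrete_frft(x, 1.0), np.fft.fft(x, norm="ortho"))
assert np.allclose(discrete_frft(x, 0.0), x)
```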

The FRFT can be used to define fractional convolution, correlation, and other operations, and can also be further generalized into the linear canonical transformation (LCT). An early definition of the FRFT was introduced by Condon, by solving for the Green's function for phase-space rotations, and also by Namias, generalizing work of Wiener on Hermite polynomials.

However, it was not widely recognized in signal processing until it was independently reintroduced around 1993 by several groups. Since then, there has been a surge of interest in extending Shannon's sampling theorem for signals which are band-limited in the fractional Fourier domain.

A completely different meaning for "fractional Fourier transform" was introduced by Bailey and Swartztrauber as essentially another name for a z-transform, and in particular for the case that corresponds to a discrete Fourier transform shifted by a fractional amount in frequency space (multiplying the input by a linear chirp) and evaluating at a fractional set of frequency points (e.g. considering only a small portion of the spectrum). (Such transforms can be evaluated efficiently by Bluestein's FFT algorithm.) This terminology has fallen out of use in most of the technical literature, however, in preference to the FRFT. The remainder of this article describes the FRFT.

Discussed on

🔗 Is 0 Odd or Even?

🔗 Mathematics

Zero is an even number. In other words, its parity (the quality of an integer being even or odd) is even. This can be easily verified from the definition of "even": 0 is an integer multiple of 2, specifically 0 × 2. As a result, zero shares all the properties that characterize even numbers: for example, 0 is neighbored on both sides by odd numbers; any decimal integer has the same parity as its last digit, so since 10 is even, 0 is even; and if y is even then y + x has the same parity as x, and indeed x and 0 + x always have the same parity.

Zero also fits into the patterns formed by other even numbers. The parity rules of arithmetic, such as even − even = even, require 0 to be even. Zero is the additive identity element of the group of even integers, and it is the starting case from which other even natural numbers are recursively defined. Applications of this recursion, from graph theory to computational geometry, rely on zero being even. Not only is 0 divisible by 2, it is divisible by every power of 2, which is relevant to the binary numeral system used by computers. In this sense, 0 is the "most even" number of all.
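The recursive characterization above can be written out directly; a minimal sketch in Python (the helper name is just illustrative):

```python
def is_even(n: int) -> bool:
    # Zero is the base case from which the even natural numbers are
    # recursively defined: n is even exactly when n - 2 is even.
    if n == 0:
        return True
    if n == 1:
        return False
    return is_even(n - 2)

print([n for n in range(10) if is_even(n)])  # [0, 2, 4, 6, 8]
```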

Among the general public, the parity of zero can be a source of confusion. In reaction time experiments, most people are slower to identify 0 as even than 2, 4, 6, or 8. Some students of mathematics—and some teachers—think that zero is odd, or both even and odd, or neither. Researchers in mathematics education propose that these misconceptions can become learning opportunities. Studying equalities like 0 × 2 = 0 can address students' doubts about calling 0 a number and using it in arithmetic. Class discussions can lead students to appreciate the basic principles of mathematical reasoning, such as the importance of definitions. Evaluating the parity of this exceptional number is an early example of a pervasive theme in mathematics: the abstraction of a familiar concept to an unfamiliar setting.

Discussed on

🔗 1, 2, 4, 8, 16, 31

🔗 Mathematics

In geometry, the problem of dividing a circle into areas by means of an inscribed polygon with n sides in such a way as to maximise the number of areas created by the edges and diagonals, sometimes called Moser's circle problem, has a solution by an inductive method. The greatest possible number of regions is r_G = \binom{n}{4} + \binom{n}{2} + 1, giving the sequence 1, 2, 4, 8, 16, 31, 57, 99, 163, 256, ... (OEIS: A000127). Though the first five terms match the geometric progression 2^{n-1}, it diverges at n = 6, showing the risk of generalising from only a few observations.
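The formula is easy to check numerically with the standard-library binomial coefficient; a short Python sketch:

```python
from math import comb

def regions(n: int) -> int:
    # Maximum number of regions cut out by the edges and diagonals of an
    # inscribed n-gon (Moser's circle problem, OEIS A000127).
    return comb(n, 4) + comb(n, 2) + 1

print([regions(n) for n in range(1, 11)])
# [1, 2, 4, 8, 16, 31, 57, 99, 163, 256]
```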

Discussed on

🔗 Gini coefficient

🔗 Mathematics 🔗 Economics 🔗 Statistics 🔗 Sociology 🔗 Globalization

In economics, the Gini coefficient (JEE-nee), sometimes called the Gini index or Gini ratio, is a measure of statistical dispersion intended to represent the income or wealth distribution of a nation's residents, and is the most commonly used measurement of inequality. It was developed by the Italian statistician and sociologist Corrado Gini and published in his 1912 paper Variability and Mutability (Italian: Variabilità e mutabilità).

The Gini coefficient measures the inequality among values of a frequency distribution (for example, levels of income). A Gini coefficient of zero expresses perfect equality, where all values are the same (for example, where everyone has the same income). A Gini coefficient of one (or 100%) expresses maximal inequality among values (e.g., for a large number of people, where only one person has all the income or consumption and all others have none, the Gini coefficient will be very nearly one). For larger groups, values close to one are very unlikely in practice. Because both the cumulative population and the cumulative share of income are normalized in the calculation, the measure is not overly sensitive to the specifics of the income distribution, but rather depends only on how incomes vary relative to the other members of a population. An exception is a redistribution of income that results in a minimum income for all people. When the population is sorted, if its income distribution were to approximate a well-known function, then some representative values could be calculated.
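One standard way to compute the coefficient from raw data is as the mean absolute difference between all pairs of incomes, normalised by twice the mean income; a minimal sketch (assuming NumPy, with made-up incomes):

```python
import numpy as np

def gini(incomes) -> float:
    # Mean absolute difference over all pairs, divided by twice the mean.
    x = np.asarray(incomes, dtype=float)
    mean_abs_diff = np.abs(x[:, None] - x[None, :]).mean()
    return mean_abs_diff / (2 * x.mean())

print(gini([1, 1, 1, 1]))    # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))  # 0.75 -> one person holds everything (n = 4)
```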

The Gini coefficient was proposed by Gini as a measure of inequality of income or wealth. For OECD countries, in the late 20th century, considering the effect of taxes and transfer payments, the income Gini coefficient ranged between 0.24 and 0.49, with Slovenia being the lowest and Mexico the highest. African countries had the highest pre-tax Gini coefficients in 2008–2009, with South Africa the world's highest, variously estimated to be 0.63 to 0.7, although this figure drops to 0.52 after social assistance is taken into account, and drops again to 0.47 after taxation. The global income Gini coefficient in 2005 has been estimated to be between 0.61 and 0.68 by various sources.

There are some issues in interpreting a Gini coefficient. The same value may result from many different distribution curves. The demographic structure should be taken into account. Countries with an aging population, or with a baby boom, experience an increasing pre-tax Gini coefficient even if real income distribution for working adults remains constant. Scholars have devised over a dozen variants of the Gini coefficient.

Discussed on

🔗 Hilbert's 24th problem

🔗 Mathematics

Hilbert's twenty-fourth problem is a mathematical problem that was not published as part of the list of 23 problems known as Hilbert's problems but was included in David Hilbert's original notes. The problem asks for a criterion of simplicity in mathematical proofs and the development of a proof theory with the power to prove that a given proof is the simplest possible.

The 24th problem was rediscovered by the German historian Rüdiger Thiele in 2000, who noted that Hilbert had not included it in the lecture presenting his problems or in any published texts. Hilbert's friends and fellow mathematicians Adolf Hurwitz and Hermann Minkowski were closely involved in the project but did not have any knowledge of this problem.

This is the full text from Hilbert's notes given in Rüdiger Thiele's paper. The section was translated by Rüdiger Thiele.

The 24th problem in my Paris lecture was to be: Criteria of simplicity, or proof of the greatest simplicity of certain proofs. Develop a theory of the method of proof in mathematics in general. Under a given set of conditions there can be but one simplest proof. Quite generally, if there are two proofs for a theorem, you must keep going until you have derived each from the other, or until it becomes quite evident what variant conditions (and aids) have been used in the two proofs. Given two routes, it is not right to take either of these two or to look for a third; it is necessary to investigate the area lying between the two routes. Attempts at judging the simplicity of a proof are in my examination of syzygies and syzygies [Hilbert misspelled the word syzygies] between syzygies (see Hilbert 42, lectures XXXII–XXXIX). The use or the knowledge of a syzygy simplifies in an essential way a proof that a certain identity is true. Because any process of addition [is] an application of the commutative law of addition etc. [and because] this always corresponds to geometric theorems or logical conclusions, one can count these [processes], and, for instance, in proving certain theorems of elementary geometry (the Pythagoras theorem, [theorems] on remarkable points of triangles), one can very well decide which of the proofs is the simplest. [Author's note: Part of the last sentence is not only barely legible in Hilbert's notebook but also grammatically incorrect. Corrections and insertions that Hilbert made in this entry show that he wrote down the problem in haste.]

Discussed on

🔗 Green–Tao theorem

🔗 Mathematics

In number theory, the Green–Tao theorem, proved by Ben Green and Terence Tao in 2004, states that the sequence of prime numbers contains arbitrarily long arithmetic progressions. In other words, for every natural number k, there exist arithmetic progressions of primes with k terms. The proof is an extension of Szemerédi's theorem. The problem can be traced back to investigations of Lagrange and Waring from around 1770.
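The objects the theorem concerns are easy to exhibit for small k by brute force; a sketch assuming SymPy's primality test, with arbitrary search bounds (the theorem itself gives no such simple construction for large k):

```python
from sympy import isprime

def prime_ap(k: int, max_start: int = 1000, max_step: int = 1000):
    # Search for a k-term arithmetic progression of primes by brute force.
    for start in range(2, max_start):
        if not isprime(start):
            continue
        for step in range(2, max_step, 2):
            if all(isprime(start + i * step) for i in range(k)):
                return [start + i * step for i in range(k)]
    return None

print(prime_ap(5))  # [5, 11, 17, 23, 29]
```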

Discussed on

🔗 The Art Gallery Problem

🔗 Mathematics

The art gallery problem or museum problem is a well-studied visibility problem in computational geometry. It originates from a real-world problem of guarding an art gallery with the minimum number of guards who together can observe the whole gallery. In the geometric version of the problem, the layout of the art gallery is represented by a simple polygon and each guard is represented by a point in the polygon. A set S of points is said to guard a polygon if, for every point p in the polygon, there is some q ∈ S such that the line segment between p and q does not leave the polygon.
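The guarding condition can be checked directly for candidate guard placements; a minimal sketch assuming the Shapely geometry library, with an illustrative L-shaped gallery and hand-picked test points:

```python
from shapely.geometry import Polygon, LineString

def guards(polygon: Polygon, guard_points, test_points) -> bool:
    # The guard set covers the test points if every test point p is visible
    # from some guard q, i.e. the segment pq never leaves the polygon.
    return all(
        any(polygon.covers(LineString([q, p])) for q in guard_points)
        for p in test_points
    )

# L-shaped gallery: a single guard deep in one arm cannot see into the other.
gallery = Polygon([(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)])
print(guards(gallery, [(1, 1)], [(3, 1), (1, 3.5)]))  # True
print(guards(gallery, [(3, 1)], [(1, 3.5)]))          # False
```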

Discussed on

🔗 Ask HN: using only static magnetism - impossible to stably levitate against gravity?

🔗 Mathematics 🔗 Physics

Earnshaw's theorem states that a collection of point charges cannot be maintained in a stable stationary equilibrium configuration solely by the electrostatic interaction of the charges. This was first proven by the British mathematician Samuel Earnshaw in 1842. It is usually cited in reference to magnetic fields, but it was first applied to electrostatic fields.

Earnshaw's theorem applies to classical inverse-square law forces (electric and gravitational) and also to the magnetic forces of permanent magnets, if the magnets are hard (the magnets do not vary in strength with external fields). Earnshaw's theorem forbids magnetic levitation in many common situations.
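The reason inverse-square forces are covered is that their potential is harmonic away from the sources, so its Hessian has zero trace and the potential has no strict local minimum in free space; a small symbolic check of the point-charge case (assuming SymPy):

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True)
phi = 1 / sp.sqrt(x**2 + y**2 + z**2)  # potential of a point charge

# The Laplacian vanishes away from the origin, so the second derivatives
# cannot all be positive there and no stable equilibrium point exists.
laplacian = sum(sp.diff(phi, v, 2) for v in (x, y, z))
print(sp.simplify(laplacian))  # 0
```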

If the materials are not hard, Braunbeck's extension shows that materials with relative magnetic permeability greater than one (paramagnetism) are further destabilising, but materials with a permeability less than one (diamagnetic materials) permit stable configurations.

Discussed on

🔗 Fractal Interpolation

🔗 Mathematics 🔗 Systems 🔗 Systems/Chaos theory

Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes" which are used to recreate the encoded image.

Discussed on