Topic: Mathematics (Page 2)

You are looking at all articles with the topic "Mathematics". We found 219 matches.

🔗 Benford's Law

🔗 Mathematics 🔗 Statistics

Benford's law, also called the Newcomb–Benford law, the law of anomalous numbers, or the first-digit law, is an observation about the frequency distribution of leading digits in many real-life sets of numerical data. The law states that in many naturally occurring collections of numbers, the leading significant digit is likely to be small. For example, in sets that obey the law, the number 1 appears as the leading significant digit about 30% of the time, while 9 appears as the leading significant digit less than 5% of the time. If the digits were distributed uniformly, they would each occur about 11.1% of the time. Benford's law also makes predictions about the distribution of second digits, third digits, digit combinations, and so on.

The law is stated above for base 10, but there is a generalization to numbers expressed in other bases (for example, base 16), and also a generalization from the leading digit to the leading n digits.

It has been shown that this result applies to a wide variety of data sets, including electricity bills, street addresses, stock prices, house prices, population numbers, death rates, lengths of rivers, and physical and mathematical constants. Like other general principles about natural data (for example, the fact that many data sets are well approximated by a normal distribution), there are illustrative examples and explanations that cover many of the cases where Benford's law applies, though many other cases where it applies resist a simple explanation. The law tends to be most accurate when values are distributed across multiple orders of magnitude, especially if the process generating the numbers is described by a power law (such processes are common in nature).
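
As an illustration (not from the article), the following minimal Python sketch compares the leading-digit frequencies predicted by Benford's law, P(d) = log10(1 + 1/d), against a data set spanning many orders of magnitude; the first thousand powers of 2 serve as a stand-in for such data.

```python
# Minimal sketch: compare Benford's predicted leading-digit frequencies,
# P(d) = log10(1 + 1/d), against an illustrative data set that spans many
# orders of magnitude (the first 1000 powers of 2).
import math
from collections import Counter

def leading_digit(n):
    """Most significant decimal digit of a positive integer."""
    return int(str(n)[0])

data = [2 ** k for k in range(1, 1001)]
counts = Counter(leading_digit(x) for x in data)

for d in range(1, 10):
    observed = counts[d] / len(data)
    predicted = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed:.3f}, Benford {predicted:.3f}")
```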

It is named after physicist Frank Benford, who stated it in 1938 in a paper titled "The Law of Anomalous Numbers", although it had been previously stated by Simon Newcomb in 1881.

🔗 Kelly Criterion

🔗 Mathematics 🔗 Statistics

In probability theory and intertemporal portfolio choice, the Kelly criterion (or Kelly strategy or Kelly bet), also known as the scientific gambling method, is a formula for bet sizing that leads almost surely to higher wealth compared to any other strategy in the long run (i.e. approaching the limit as the number of bets goes to infinity). The Kelly bet size is found by maximizing the expected value of the logarithm of wealth, which is equivalent to maximizing the expected geometric growth rate. The Kelly criterion prescribes betting a predetermined fraction of current assets, which can seem counterintuitive. It was described by J. L. Kelly Jr., a researcher at Bell Labs, in 1956.

For an even-money bet, the Kelly criterion computes the wager size as a percentage of available funds by multiplying the probability of winning by two and subtracting 100%. So, for a bet with a 70% chance to win, the optimal wager size is 2 × 70% − 100% = 40% of available funds.
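
As a rough illustration (function names and parameters below are assumptions, not from the article), this sketch computes the Kelly fraction for a simple binary bet and runs a small Monte Carlo comparison; betting the Kelly fraction typically ends with a larger bankroll than betting noticeably more or less.

```python
# Minimal sketch: Kelly fraction for a binary bet, plus a Monte Carlo check
# that the Kelly fraction typically outgrows over- and under-betting.
import random

def kelly_fraction(p, b=1.0):
    """Fraction of bankroll to wager: p = win probability, b = net odds (1.0 = even money)."""
    return (b * p - (1 - p)) / b

def simulate(fraction, p=0.7, b=1.0, n_bets=1000, seed=0):
    """Bankroll (starting at 1.0) after n_bets wagers of a fixed fraction."""
    rng = random.Random(seed)
    wealth = 1.0
    for _ in range(n_bets):
        stake = fraction * wealth
        wealth += stake * b if rng.random() < p else -stake
    return wealth

f_star = kelly_fraction(0.7)              # 0.4 for a 70% even-money bet
for f in (0.2, f_star, 0.6):
    print(f"betting fraction {f:.1f}: final wealth {simulate(f):.3e}")
```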

The practical use of the formula has been demonstrated for gambling and the same idea was used to explain diversification in investment management. In the 2000s, Kelly-style analysis became a part of mainstream investment theory and the claim has been made that well-known successful investors including Warren Buffett and Bill Gross use Kelly methods. William Poundstone wrote an extensive popular account of the history of Kelly betting.

🔗 Alan Turing's 100th Birthday - Mathematician, logician, cryptanalyst, scientist

🔗 Biography 🔗 Computing 🔗 Mathematics 🔗 London 🔗 Philosophy 🔗 Philosophy/Logic 🔗 England 🔗 Biography/science and academia 🔗 Philosophy/Philosophy of science 🔗 History of Science 🔗 Computing/Computer science 🔗 Robotics 🔗 Philosophy/Philosophers 🔗 Cryptography 🔗 LGBT studies/LGBT Person 🔗 LGBT studies 🔗 Athletics 🔗 Greater Manchester 🔗 Cheshire 🔗 Cryptography/Computer science 🔗 Philosophy/Philosophy of mind 🔗 Molecular and Cell Biology 🔗 Surrey 🔗 Running 🔗 Molecular Biology 🔗 Molecular Biology/Molecular and Cell Biology

Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was not fully recognised in his home country during his lifetime, due to his homosexuality, and because much of his work was covered by the Official Secrets Act.

During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bomba method, an electromechanical machine that could find settings for the Enigma machine.

Turing played a crucial role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic, and in so doing helped win the war. Due to the problems of counterfactual history, it is hard to estimate the precise effect Ultra intelligence had on the war, but at the upper end it has been estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives.

After the war Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine (ACE), one of the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.

Turing was prosecuted in 1952 for homosexual acts; the Labouchere Amendment of 1885 had mandated that "gross indecency" was a criminal offence in the UK. He accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.

In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013. The Alan Turing law is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.

🔗 100 prisoners problem

🔗 Mathematics

The 100 prisoners problem is a mathematical problem in probability theory and combinatorics. In this problem, 100 numbered prisoners must find their own numbers in one of 100 drawers in order to survive. The rules state that each prisoner may open only 50 drawers and cannot communicate with other prisoners. At first glance, the situation appears hopeless, but a clever strategy offers the prisoners a realistic chance of survival. Danish computer scientist Peter Bro Miltersen first proposed the problem in 2003.
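
The strategy in question is cycle following: each prisoner first opens the drawer bearing their own number and then repeatedly opens the drawer named by the slip just found. All prisoners survive exactly when the random permutation of slips has no cycle longer than 50, which happens with probability about 1 − ln 2 ≈ 0.31. A minimal simulation sketch (illustrative, not taken from the article):

```python
# Minimal sketch of the cycle-following strategy for the 100 prisoners problem.
# The success rate converges to roughly 1 - ln 2, i.e. about 31%.
import random

def all_survive(n=100, limit=50):
    drawers = list(range(n))
    random.shuffle(drawers)              # drawers[i] = number hidden in drawer i
    for prisoner in range(n):
        drawer = prisoner                # start at the drawer with one's own number
        for _ in range(limit):
            if drawers[drawer] == prisoner:
                break                    # found it within the allowed 50 openings
            drawer = drawers[drawer]     # follow the cycle
        else:
            return False                 # this prisoner ran out of openings
    return True

trials = 2_000
wins = sum(all_survive() for _ in range(trials))
print(f"estimated survival probability: {wins / trials:.3f}")   # roughly 0.31
```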

🔗 Friendship Paradox

🔗 Mathematics 🔗 Statistics 🔗 Sociology

The friendship paradox is the phenomenon first observed by the sociologist Scott L. Feld in 1991 that most people have fewer friends than their friends have, on average. It can be explained as a form of sampling bias in which people with greater numbers of friends have an increased likelihood of being observed among one's own friends. In contradiction to this, most people believe that they have more friends than their friends have.

The same observation can be applied more generally to social networks defined by relations other than friendship: for instance, most people's sexual partners have had, on average, a greater number of sexual partners than they have.
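
A minimal sketch of the sampling-bias effect (illustrative, not from the article), using an Erdős–Rényi random graph as a stand-in for a friendship network; the graph size and edge probability are arbitrary choices:

```python
# Minimal sketch: on a random graph, the mean number of friends one's friends
# have exceeds the mean number of friends people have themselves.
import random
from statistics import mean

random.seed(1)
n, p = 500, 0.02                          # number of people, friendship probability
neighbors = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < p:
            neighbors[u].add(v)
            neighbors[v].add(u)

degree = {v: len(neighbors[v]) for v in range(n)}
avg_friends = mean(degree[v] for v in range(n))
# For each person with at least one friend, average their friends' friend counts.
avg_friends_of_friends = mean(
    mean(degree[w] for w in neighbors[v])
    for v in range(n) if neighbors[v]
)
print(f"average friends per person:       {avg_friends:.2f}")
print(f"average friends of one's friends: {avg_friends_of_friends:.2f}")  # larger
```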

🔗 Ramanujan's Lost Notebook

🔗 Mathematics 🔗 India 🔗 India/Tamil Nadu

Ramanujan's lost notebook is the manuscript in which the Indian mathematician Srinivasa Ramanujan recorded the mathematical discoveries of the last year (1919–1920) of his life. Its whereabouts were unknown to all but a few mathematicians until it was rediscovered by George Andrews in 1976, in a box of effects of G. N. Watson stored at the Wren Library at Trinity College, Cambridge. The "notebook" is not a book, but consists of loose and unordered sheets of paper described as "more than one hundred pages written on 138 sides in Ramanujan's distinctive handwriting. The sheets contained over six hundred mathematical formulas listed consecutively without proofs."

George Andrews and Bruce C. Berndt (2005, 2009, 2012, 2013, 2018) have published several books in which they give proofs for Ramanujan's formulas included in the notebook. Berndt says of the notebook's discovery: "The discovery of this 'Lost Notebook' caused roughly as much stir in the mathematical world as the discovery of Beethoven's tenth symphony would cause in the musical world."

🔗 Coastline Paradox

🔗 Mathematics 🔗 Maps

The coastline paradox is the counterintuitive observation that the coastline of a landmass does not have a well-defined length. This results from the fractal curve-like properties of coastlines, i.e., the fact that a coastline typically has a fractal dimension (which in fact makes the notion of length inapplicable). The first recorded observation of this phenomenon was by Lewis Fry Richardson and it was expanded upon by Benoit Mandelbrot.

The measured length of the coastline depends on the method used to measure it and the degree of cartographic generalization. Since a landmass has features at all scales, from hundreds of kilometers in size to tiny fractions of a millimeter and below, there is no obvious size of the smallest feature that should be taken into consideration when measuring, and hence no single well-defined perimeter to the landmass. Various approximations exist when specific assumptions are made about minimum feature size.

The problem is fundamentally different from the measurement of other, simpler edges. It is possible, for example, to accurately measure the length of a straight, idealized metal bar by using a measurement device to determine that the length is less than a certain amount and greater than another amount; that is, to measure it within a certain degree of uncertainty. The more accurate the measurement device, the closer the results will be to the true length of the edge. When measuring a coastline, however, a closer measurement does not result in greater accuracy; the measurement only increases in length. Unlike with the metal bar, there is no way to obtain a maximum value for the length of the coastline.
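
A minimal sketch of this behaviour (illustrative, not from the article), using the Koch curve as an idealized fractal coastline: every refinement step shrinks the ruler by a factor of 3 while multiplying the measured length by 4/3, so the measured length grows without bound instead of converging.

```python
# Minimal sketch: measuring a Koch-curve "coastline" with ever finer rulers.
# The length follows Richardson's relation L ~ ruler^(1 - D) with fractal
# dimension D = log(4) / log(3) ~= 1.26, so it diverges as the ruler shrinks.
def koch_length(depth, base=1.0):
    """Polyline length of the Koch curve after `depth` refinement steps."""
    ruler = base / 3 ** depth            # ruler (segment) length at this resolution
    return ruler * 4 ** depth            # number of segments times their length

for depth in range(8):
    ruler = 1 / 3 ** depth
    print(f"ruler = {ruler:.5f}  measured length = {koch_length(depth):.3f}")
```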

In three-dimensional space, the coastline paradox is readily extended to the concept of fractal surfaces whereby the area of a surface varies, depending on the measurement resolution.

🔗 Intuitionism

🔗 Mathematics 🔗 Philosophy 🔗 Philosophy/Logic 🔗 Philosophy/Epistemology

In the philosophy of mathematics, intuitionism, or neointuitionism (opposed to preintuitionism), is an approach where mathematics is considered to be purely the result of the constructive mental activity of humans rather than the discovery of fundamental principles claimed to exist in an objective reality. That is, logic and mathematics are not considered analytic activities wherein deep properties of objective reality are revealed and applied, but are instead considered the application of internally consistent methods used to realize more complex mental constructs, regardless of their possible independent existence in an objective reality.

🔗 Zenzizenzizenzic

🔗 Mathematics 🔗 Etymology

Zenzizenzizenzic is an obsolete form of mathematical notation representing the eighth power of a number (that is, the zenzizenzizenzic of x is x^8), dating from a time when powers were written out in words rather than as superscript numbers. This term was suggested by Robert Recorde, a 16th-century Welsh writer of popular mathematics textbooks, in his 1557 work The Whetstone of Witte (although his spelling was zenzizenzizenzike); he wrote that it "doeth represent the square of squares squaredly".

At the time Recorde proposed this notation, there was no easy way of denoting the powers of numbers other than squares and cubes. The root word for Recorde's notation is zenzic, which is a German spelling of the medieval Italian word censo, meaning "squared". Since the square of a square of a number is its fourth power, Recorde used the word zenzizenzic (spelled by him as zenzizenzike) to express it. Some of the terms had prior use in Latin "zenzicubicus", "zensizensicus" and "zensizenzum". Similarly, as the sixth power of a number is equal to the square of its cube, Recorde used the word zenzicubike to express it; a more modern spelling, zenzicube, is found in Samuel Jeake's Logisticelogia. Finally, the word zenzizenzizenzic denotes the square of the square of a number's square, which is its eighth power: in modern notation,

x^8 = ((x^2)^2)^2.
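
A trivial check of the identity above (assuming nothing beyond ordinary arithmetic): squaring a number three times in succession yields its eighth power.

```python
# The zenzizenzizenzic is the square of the square of the square.
def zenzizenzizenzic(x):
    return ((x ** 2) ** 2) ** 2

assert zenzizenzizenzic(3) == 3 ** 8 == 6561
```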

Recorde proposed three mathematical terms by which any power (that is, index or exponent) greater than 1 could be expressed: zenzic, i.e. squared; cubic; and sursolid, i.e. raised to a prime number greater than three, the smallest of which is five. Sursolids were as follows: 5 was the first; 7, the second; 11, the third; 13, the fourth; etc.

Therefore, a number raised to the power of six would be zenzicubic, a number raised to the power of seven would be the second sursolid, hence bissursolid (not a multiple of two and three), a number raised to the twelfth power would be the "zenzizenzicubic" and a number raised to the power of ten would be the square of the (first) sursolid. The fourteenth power was the square of the second sursolid, and the twenty-second was the square of the third sursolid.

Curiously, Jeake's text appears to designate a written exponent of 0 as being equal to an "absolute number, as if it had no Mark", thus using the notation x^0 to refer to x alone, while a written exponent of 1, in his text, denotes "the Root of any number", thus using the notation x^1 to refer to what is now known as x^0.5.

The word, as well as the system, is obsolete except as a curiosity; the Oxford English Dictionary (OED) has only one citation for it. As well as being a mathematical oddity, it survives as a linguistic oddity: zenzizenzizenzic has more Zs than any other word in the OED.

Samuel Jeake the Younger gives zenzizenzizenzizenzike (the square of the square of the square of the square, or 16th power) in a table in A Compleat Body of Arithmetick.

🔗 Kalman Filter

🔗 Mathematics 🔗 Statistics 🔗 Systems 🔗 Robotics 🔗 Systems/Control theory

In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.

The Kalman filter has numerous applications in technology. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships. Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics. Kalman filters also are one of the main topics in the field of robotic motion planning and control and can be used in trajectory optimization. The Kalman filter also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, use of the Kalman filter supports a realistic model for making estimates of the current state of the motor system and issuing updated commands.

The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It can run in real time, using only the present input measurements and the previously calculated state and its uncertainty matrix; no additional past information is required.
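
A minimal sketch of this predict/update loop for a one-dimensional state observed through noisy measurements; the random-walk state model, noise levels, and variable names are illustrative assumptions, not taken from the article.

```python
# Minimal one-dimensional Kalman filter: predict, then update with each measurement.
import random

def kalman_1d(measurements, process_var=1e-3, meas_var=0.25, x0=0.0, p0=1.0):
    """Yield (estimate, variance) after each measurement of a random-walk state."""
    x, p = x0, p0                        # state estimate and its variance
    for z in measurements:
        # Predict: a random-walk model leaves the estimate unchanged but
        # grows its uncertainty by the process noise.
        p = p + process_var
        # Update: blend prediction and measurement, weighting by certainty.
        k = p / (p + meas_var)           # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
        yield x, p

random.seed(0)
true_value = 1.0
zs = [true_value + random.gauss(0, 0.5) for _ in range(50)]
for i, (x, p) in enumerate(kalman_1d(zs)):
    if i % 10 == 0:
        print(f"step {i:2d}: estimate {x:+.3f}  variance {p:.4f}")
```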

Optimality of the Kalman filter assumes that the errors are Gaussian. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.

Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The underlying model is a hidden Markov model where the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. The Kalman filter has also been used successfully in multi-sensor fusion and in distributed sensor networks to develop distributed or consensus Kalman filters.
