Topic: Mathematics (Page 9)

You are looking at all articles with the topic "Mathematics". We found 223 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 Bakhshali Manuscript

🔗 Mathematics 🔗 Books 🔗 India 🔗 Pakistan 🔗 India/Indian history workgroup

The Bakhshali manuscript is an ancient Indian mathematical text written on birch bark that was found in 1881 in the village of Bakhshali, Mardan (near Peshawar in present-day Pakistan, historical Gandhara). It is perhaps "the oldest extant manuscript in Indian mathematics". A recent study proposed a carbon date of AD 224–383 for some portions and as late as AD 885–993 for others, but the dating has been criticised by specialists on methodological grounds (Plofker et al. 2017 and Houben 2018 §3). The manuscript contains the earliest known Indian use of a zero symbol. It is written in a form of literary Sanskrit influenced by contemporary dialects.

Discussed on

🔗 Tupper's formula

🔗 Computer science 🔗 Mathematics

Tupper's self-referential formula is a formula that visually represents itself when graphed at a specific location in the (x, y) plane.
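
The self-referential behaviour is a by-product of a more general mechanism: over the strip k ≤ y < k + 17 the inequality simply decodes the binary digits of ⌊k/17⌋ as a 106 × 17 bitmap, so any bitmap can be encoded into a suitable constant. Below is a minimal sketch of that mechanism; the helper names `tupper` and `encode` are illustrative, and Tupper's actual published constant is not reproduced here.

```python
# Minimal sketch of the mechanism behind Tupper's self-referential formula.
# Plotted over 0 <= x < 106 and k <= y < k + 17, the inequality reads off the
# binary digits of floor(k / 17) as a 106x17 bitmap, so any bitmap can be
# encoded into a suitable k.  (Tupper's published k, which makes the graph
# reproduce the formula itself, is not reproduced here.)

def tupper(x, y):
    """Evaluate the inequality at the integer pixel (x, y)."""
    return 0.5 < ((y // 17) // 2 ** (17 * x + y % 17)) % 2

def encode(bitmap):
    """Encode a 17-row by 106-column bitmap of 0/1 values into a constant k."""
    k = 0
    for x in range(106):
        for y in range(17):
            if bitmap[y][x]:
                k |= 1 << (17 * x + y)
    return k * 17

if __name__ == "__main__":
    # A toy bitmap: a single diagonal line.
    bitmap = [[1 if x == y else 0 for x in range(106)] for y in range(17)]
    k = encode(bitmap)
    # Recover the bitmap by evaluating the formula in the strip k <= y < k + 17.
    recovered = [[int(tupper(x, k + y)) for x in range(106)] for y in range(17)]
    assert recovered == bitmap
```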

Discussed on

🔗 Shoelace formula

🔗 Mathematics

The shoelace formula or shoelace algorithm (also known as Gauss's area formula and the surveyor's formula) is a mathematical algorithm to determine the area of a simple polygon whose vertices are described by their Cartesian coordinates in the plane. The user cross-multiplies corresponding coordinates to find the area encompassing the polygon, and subtracts it from the surrounding polygon to find the area of the polygon within. It is called the shoelace formula because of the constant cross-multiplying for the coordinates making up the polygon, like tying shoelaces. It is also sometimes called the shoelace method. It has applications in surveying and forestry, among other areas.

The formula was described by Meister (1724–1788) in 1769 and by Gauss in 1795. It can be verified by dividing the polygon into triangles, and can be considered to be a special case of Green's theorem.

The area formula is derived by taking each edge AB, and calculating the area of triangle ABO with a vertex at the origin O, by taking the cross-product (which gives the area of a parallelogram) and dividing by 2. As one wraps around the polygon, these triangles with positive and negative area will overlap, and the areas between the origin and the polygon will be cancelled out and sum to 0, while only the area inside the polygon remains. This is why the formula is called the surveyor's formula, since the "surveyor" is at the origin; if going counterclockwise, positive area is added when going from left to right and negative area is added when going from right to left, from the perspective of the origin.

The area formula can also be applied to self-overlapping polygons, since the meaning of area is still clear even though self-overlapping polygons are not generally simple. Furthermore, a self-overlapping polygon can have multiple "interpretations", but the shoelace formula can be used to show that the polygon's area is the same regardless of the interpretation.
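
A minimal sketch of the formula as it is usually computed, assuming the vertices are given in order; the sign of the result indicates orientation, and taking the absolute value gives the geometric area of a simple polygon.

```python
# Minimal sketch of the shoelace formula: the signed area of a polygon with
# vertices (x_0, y_0), ..., (x_{n-1}, y_{n-1}) listed in order is
#   A = 1/2 * sum_i (x_i * y_{i+1} - x_{i+1} * y_i),  indices taken mod n.
# The sign is positive for counterclockwise orientation.

def shoelace_area(vertices):
    n = len(vertices)
    signed = 0.0
    for i in range(n):
        x_i, y_i = vertices[i]
        x_j, y_j = vertices[(i + 1) % n]   # next vertex, wrapping around
        signed += x_i * y_j - x_j * y_i    # the "cross-multiplication" step
    return signed / 2.0

if __name__ == "__main__":
    # Unit square, listed counterclockwise: signed area +1.
    print(shoelace_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0
```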

Discussed on

🔗 Risch Algorithm for Symbolic Integration

🔗 Mathematics

In symbolic computation (or computer algebra), at the intersection of mathematics and computer science, the Risch algorithm is an algorithm for indefinite integration. It is used in some computer algebra systems to find antiderivatives. It is named after the American mathematician Robert Henry Risch, a specialist in computer algebra who developed it in 1968.

The algorithm transforms the problem of integration into a problem in algebra. It is based on the form of the function being integrated and on methods for integrating rational functions, radicals, logarithms, and exponential functions. Risch called it a decision procedure, because it is a method for deciding whether a function has an elementary function as an indefinite integral, and if it does, for determining that indefinite integral.

The complete description of the Risch algorithm takes over 100 pages. The Risch–Norman algorithm is a simpler, faster, but less powerful variant that was developed in 1976 by Arthur Norman.
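
The full algorithm is far too large to sketch here, but its decision-procedure character can be illustrated with SymPy's partial implementation of the transcendental case, `sympy.integrals.risch.risch_integrate`, assuming that implementation is available: it either returns an elementary antiderivative or wraps the integrand in a `NonElementaryIntegral` to signal that none exists.

```python
# A hedged illustration of the decision-procedure behaviour, using SymPy's
# partial implementation of the transcendental Risch algorithm (this is not
# the full 100+ page algorithm).
from sympy import Symbol, exp
from sympy.integrals.risch import risch_integrate, NonElementaryIntegral

x = Symbol("x")

# An integrand with an elementary antiderivative: the antiderivative is returned.
print(risch_integrate(exp(x), x))                      # expected: exp(x)

# exp(x**2) provably has no elementary antiderivative; risch_integrate signals
# this by returning a NonElementaryIntegral rather than giving up silently.
result = risch_integrate(exp(x**2), x)
print(isinstance(result, NonElementaryIntegral))       # expected: True
```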

Discussed on

🔗 Gray Code

🔗 Computing 🔗 Mathematics

The reflected binary code (RBC), also known just as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). The reflected binary code was originally designed to prevent spurious output from electromechanical switches. Today, Gray codes are widely used to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems.
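
A minimal sketch of the standard conversion between ordinary binary and the reflected binary code; the helper names are illustrative.

```python
# Minimal sketch of binary <-> reflected binary (Gray) code conversion.

def binary_to_gray(n: int) -> int:
    """Gray code of n: consecutive integers map to codes differing in one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the encoding by XOR-ing in successively shifted copies."""
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

if __name__ == "__main__":
    codes = [binary_to_gray(i) for i in range(8)]
    print([f"{c:03b}" for c in codes])
    # ['000', '001', '011', '010', '110', '111', '101', '100']
    assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(1024))
    # Successive codes differ in exactly one bit:
    assert all(bin(codes[i] ^ codes[i + 1]).count("1") == 1 for i in range(7))
```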

Discussed on

🔗 Kahan Summation Algorithm

🔗 Computer science 🔗 Mathematics

In numerical analysis, the Kahan summation algorithm, also known as compensated summation, significantly reduces the numerical error in the total obtained by adding a sequence of finite-precision floating-point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors).

In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk). With compensated summation, the worst-case error bound is effectively independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.
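
A minimal sketch of the compensated summation loop described above; the variable names are illustrative.

```python
# Minimal sketch of Kahan (compensated) summation.

def kahan_sum(values):
    total = 0.0
    c = 0.0                    # running compensation for lost low-order bits
    for v in values:
        y = v - c              # add back the error from the previous step
        t = total + y          # low-order bits of y may be lost here...
        c = (t - total) - y    # ...and are recovered algebraically into c
        total = t
    return total

if __name__ == "__main__":
    data = [1.0, 1e-16, 1e-16, 1e-16, 1e-16]
    print(sum(data))        # naive summation drops the tiny terms: 1.0
    print(kahan_sum(data))  # compensated sum retains them: ~1.0000000000000004
```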

The algorithm is attributed to William Kahan. Similar, earlier techniques include Bresenham's line algorithm, which keeps track of the accumulated error in integer operations (although it was first documented around the same time), and delta-sigma modulation, which integrates the error rather than just summing it.

Discussed on

🔗 Haversine Formula

🔗 Mathematics 🔗 Geography

The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. Important in navigation, it is a special case of a more general formula in spherical trigonometry, the law of haversines, that relates the sides and angles of spherical triangles.

The first table of haversines in English was published by James Andrew in 1805, but Florian Cajori credits an earlier use by José de Mendoza y Ríos in 1801. The term haversine was coined in 1835 by James Inman.

These names follow from the fact that they are customarily written in terms of the haversine function, given by hav(θ) = sin²(θ/2). The formulas could equally be written in terms of any multiple of the haversine, such as the older versine function (twice the haversine). Prior to the advent of computers, the elimination of division and multiplication by factors of two proved convenient enough that tables of haversine values and logarithms were included in nineteenth- and early twentieth-century navigation and trigonometric texts. These days, the haversine form is also convenient in that it has no coefficient in front of the sin² function.
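
A minimal sketch of the formula, assuming a spherical Earth with a mean radius of roughly 6371 km; the function name and the default radius are illustrative.

```python
# Minimal sketch of the haversine formula for great-circle distance.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points given in degrees,
    on a sphere of the given radius (mean Earth radius ~6371 km assumed)."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    # hav(d / r) = hav(dphi) + cos(phi1) * cos(phi2) * hav(dlam),
    # with hav(theta) = sin^2(theta / 2)
    h = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius_km * asin(sqrt(h))

if __name__ == "__main__":
    # Rough Paris -> New York distance (coordinates are illustrative).
    print(haversine_km(48.8566, 2.3522, 40.7128, -74.0060))  # roughly 5800 km
```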

Discussed on

🔗 Fair Cake-Cutting

🔗 Mathematics 🔗 Game theory

Fair cake-cutting is a kind of fair division problem. The problem involves a heterogeneous resource, such as a cake with different toppings, that is assumed to be divisible – it is possible to cut arbitrarily small pieces of it without destroying their value. The resource has to be divided among several partners who have different preferences over different parts of the cake, i.e., some people prefer the chocolate toppings, some prefer the cherries, some just want as large a piece as possible. The division should be unanimously fair – each person should receive a piece believed to be a fair share.

The "cake" is only a metaphor; procedures for fair cake-cutting can be used to divide various kinds of resources, such as land estates, advertisement space or broadcast time.

The prototypical procedure for fair cake-cutting is divide and choose, which is mentioned in the book of Genesis. It solves the fair division problem for two people. The modern study of fair cake-cutting was initiated during World War II, when Hugo Steinhaus asked his students Stefan Banach and Bronisław Knaster to find a generalization of divide-and-choose to three or more people. They developed the last diminisher procedure. Today, fair cake-cutting is the subject of intense research in mathematics, computer science, economics and political science.
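
A toy sketch of divide-and-choose, under the simplifying assumptions that the cake is the interval [0, 1] and that each agent's valuation is a piecewise-constant density over equally wide segments; all names and the bisection-based cut search are illustrative.

```python
# Toy sketch of divide-and-choose on [0, 1] with piecewise-constant valuations,
# given as weights over equally wide segments (normalized to total value 1).

def value(weights, a, b):
    """Agent's value for the sub-interval [a, b]."""
    n, total = len(weights), sum(weights)
    v = 0.0
    for i, w in enumerate(weights):
        lo, hi = i / n, (i + 1) / n                 # segment boundaries
        overlap = max(0.0, min(b, hi) - max(a, lo))
        v += w * overlap * n / total                # density on segment i
    return v

def half_point(weights):
    """Cut point t with value(weights, 0, t) == 1/2, found by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if value(weights, 0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def divide_and_choose(cutter, chooser):
    """Return (chooser's piece, cutter's piece); both get >= 1/2 by their own measure."""
    t = half_point(cutter)                          # cutter makes two equal halves
    left = value(chooser, 0.0, t)
    return ((0.0, t), (t, 1.0)) if left >= 0.5 else ((t, 1.0), (0.0, t))

if __name__ == "__main__":
    alice = [1, 1, 1, 1]        # values the cake uniformly
    bob = [0, 0, 1, 3]          # cares mostly about the right end
    bob_piece, alice_piece = divide_and_choose(cutter=alice, chooser=bob)
    print("Bob takes", bob_piece, "- Alice keeps", alice_piece)
```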

Discussed on

🔗 The hairy ball theorem

🔗 Mathematics

The hairy ball theorem of algebraic topology (sometimes called the hedgehog theorem in Europe) states that there is no nonvanishing continuous tangent vector field on even-dimensional n-spheres. For the ordinary sphere, or 2-sphere, if f is a continuous function that assigns a vector in R³ to every point p on a sphere such that f(p) is always tangent to the sphere at p, then there is at least one p such that f(p) = 0. The theorem was first stated by Henri Poincaré in the late 19th century, and first proven in 1912 by Luitzen Egbertus Jan Brouwer.

The theorem has been expressed colloquially as "you can't comb a hairy ball flat without creating a cowlick" or "you can't comb the hair on a coconut".

Discussed on

🔗 Anscombe's Quartet

🔗 Mathematics 🔗 Statistics

Anscombe's quartet comprises four data sets that have nearly identical simple descriptive statistics, yet have very different distributions and appear very different when graphed. Each dataset consists of eleven (x,y) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data when analyzing it, and the effect of outliers and other influential observations on statistical properties. He described the article as being intended to counter the impression among statisticians that "numerical calculations are exact, but graphs are rough."
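
A short numerical check of that point, assuming the copy of the quartet that ships with seaborn's example datasets (`seaborn.load_dataset("anscombe")` fetches it): the per-set summary statistics nearly coincide, even though plots of the four sets look completely different.

```python
# Check Anscombe's point with the copy of the quartet in seaborn's example data.
import matplotlib.pyplot as plt
import seaborn as sns

df = sns.load_dataset("anscombe")               # columns: dataset, x, y

for name, grp in df.groupby("dataset"):
    print(name,
          round(grp.x.mean(), 2), round(grp.y.mean(), 2),   # ~9.0 and ~7.5 each
          round(grp.x.var(), 2), round(grp.y.var(), 2),     # ~11.0 and ~4.1 each
          round(grp.x.corr(grp.y), 3))                      # ~0.816 each

# Plotting the same four sets shows how different they really are.
sns.lmplot(data=df, x="x", y="y", col="dataset", col_wrap=2, ci=None)
plt.show()
```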

Discussed on