Topic: Computer science (Page 7)
You are looking at all articles with the topic "Computer science". We found 123 matches.
🔗 Fast inverse square root
Fast inverse square root, sometimes referred to as Fast InvSqrt() or by the hexadecimal constant 0x5F3759DF, is an algorithm that estimates 1⁄√x, the reciprocal (or multiplicative inverse) of the square root of a 32-bit floating-point number x in IEEE 754 floating-point format. This operation is used in digital signal processing to normalize a vector, i.e., scale it to length 1. For example, computer graphics programs use inverse square roots to compute angles of incidence and reflection for lighting and shading. The algorithm is best known for its implementation in 1999 in the source code of Quake III Arena, a first-person shooter video game that made heavy use of 3D graphics. The algorithm only started appearing on public forums such as Usenet in 2002 or 2003. At the time, it was generally computationally expensive to compute the reciprocal of a floating-point number, especially on a large scale; the fast inverse square root bypassed this step.
The algorithm accepts a 32-bit floating-point number as the input and stores a halved value for later use. Then, treating the bits representing the floating-point number as a 32-bit integer, a logical shift right by one bit is performed and the result subtracted from the number 0x5F3759DF, which is a floating-point representation of an approximation of √(2^127). This results in the first approximation of the inverse square root of the input. Treating the bits again as a floating-point number, it runs one iteration of Newton's method, yielding a more precise approximation.
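The two steps above can be sketched directly in Python, reinterpreting the float's bit pattern with the struct module; this is an illustrative port of the idea, not the original C code (the function name and test value are ours):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) using the bit-level trick described above."""
    x_half = 0.5 * x
    # Reinterpret the 32-bit float's bit pattern as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    # Logical shift right by one bit, subtracted from the magic constant.
    i = 0x5F3759DF - (i >> 1)
    # Reinterpret the integer bits as a float: the first approximation.
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    # One iteration of Newton's method sharpens the estimate.
    y = y * (1.5 - x_half * y * y)
    return y

print(fast_inv_sqrt(4.0))   # ~0.499, versus the exact value 0.5
```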
The algorithm was originally attributed to John Carmack, but an investigation showed that the code had deeper roots in both the hardware and software side of computer graphics. Adjustments and alterations passed through both Silicon Graphics and 3dfx Interactive, with Gary Tarolli's implementation for the SGI Indigo as the earliest known use. It is not known how the constant was originally derived, though investigation has shed some light on possible methods.
With subsequent hardware advancements, especially the x86 SSE instruction rsqrtss, this method is not generally applicable to modern computing, though it remains an interesting example both historically and for more limited machines.
Discussed on
- "Fast inverse square root" | 2017-01-24 | 17 Upvotes 4 Comments
- "Fast inverse square root" | 2009-10-22 | 37 Upvotes 20 Comments
🔗 Homotopy Type Theory
In mathematical logic and computer science, homotopy type theory (HoTT) refers to various lines of development of intuitionistic type theory, based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.
This includes, among other lines of work, the construction of homotopical and higher-categorical models for such type theories; the use of type theory as a logic (or internal language) for abstract homotopy theory and higher category theory; the development of mathematics within a type-theoretic foundation (including both previously existing mathematics and new mathematics that homotopical types make possible); and the formalization of each of these in computer proof assistants.
There is a large overlap between the work referred to as homotopy type theory, and as the univalent foundations project. Although neither is precisely delineated, and the terms are sometimes used interchangeably, the choice of usage also sometimes corresponds to differences in viewpoint and emphasis. As such, this article may not represent the views of all researchers in the fields equally. This kind of variability is unavoidable when a field is in rapid flux.
Discussed on
- "Homotopy Type Theory" | 2021-06-22 | 64 Upvotes 22 Comments
🔗 Ant Colony Optimization Algorithms
In computer science and operations research, the ant colony optimization algorithm (ACO) is a probabilistic technique for solving computational problems which can be reduced to finding good paths through graphs. Artificial ants represent multi-agent methods inspired by the behavior of real ants; the pheromone-based communication of biological ants is often the predominant paradigm used. Combinations of artificial ants and local search algorithms have become a method of choice for numerous optimization tasks involving some sort of graph, e.g., vehicle routing and internet routing. The burgeoning activity in this field has led to conferences dedicated solely to artificial ants, and to numerous commercial applications by specialized companies such as AntOptima.
Ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial 'ants' (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated 'ants' similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions. One variation on this approach is the bees algorithm, which is more analogous to the foraging patterns of the honey bee, another social insect.
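A minimal Python sketch of that loop, applied to a tiny symmetric travelling-salesman instance, might look as follows; the distance matrix, parameter values and names are illustrative only, not taken from any particular ACO formulation:

```python
import random

# Hypothetical 4-city symmetric distance matrix (illustrative values only).
DIST = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
N = len(DIST)
ALPHA, BETA = 1.0, 2.0        # influence of pheromone vs. inverse distance
RHO = 0.5                     # pheromone evaporation rate
pheromone = [[1.0] * N for _ in range(N)]

def build_tour() -> list:
    """One 'ant' builds a tour, choosing the next city with probability
    proportional to pheromone**ALPHA * (1/distance)**BETA."""
    tour, unvisited = [0], set(range(1, N))
    while unvisited:
        i = tour[-1]
        weights = [pheromone[i][j] ** ALPHA * (1.0 / DIST[i][j]) ** BETA
                   for j in unvisited]
        tour.append(random.choices(list(unvisited), weights)[0])
        unvisited.remove(tour[-1])
    return tour

def tour_length(tour: list) -> float:
    return sum(DIST[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

best = None
for _ in range(50):                              # iterations
    tours = [build_tour() for _ in range(10)]    # 10 ants per iteration
    # Evaporate, then let each ant deposit pheromone inversely to its tour length.
    for i in range(N):
        for j in range(N):
            pheromone[i][j] *= (1 - RHO)
    for t in tours:
        for a, b in zip(t, t[1:] + t[:1]):
            pheromone[a][b] += 1.0 / tour_length(t)
            pheromone[b][a] += 1.0 / tour_length(t)
    best = min(tours + ([best] if best else []), key=tour_length)

print(best, tour_length(best))
```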
This algorithm is a member of the ant colony algorithms family, within swarm intelligence methods, and constitutes a metaheuristic optimization. Initially proposed by Marco Dorigo in 1992 in his PhD thesis, the first algorithm aimed to search for an optimal path in a graph, based on the behavior of ants seeking a path between their colony and a source of food. The original idea has since diversified to solve a wider class of numerical problems, and as a result, several algorithms have emerged, drawing on various aspects of the behavior of ants. From a broader perspective, ACO performs a model-based search and shares some similarities with estimation of distribution algorithms.
Discussed on
- "Ant Colony Optimization Algorithms" | 2023-07-20 | 13 Upvotes 1 Comments
- "Ant Colony Optimization Algorithms" | 2014-05-03 | 56 Upvotes 14 Comments
🔗 Fixed-Point Combinator
In combinatory logic for computer science, a fixed-point combinator (or fixpoint combinator) is a higher-order function (i.e. a function which takes a function as argument) that returns some fixed point (a value that is mapped to itself) of its argument function, if one exists.
Formally, if fix is a fixed-point combinator and the function f has one or more fixed points, then fix f is one of these fixed points, i.e. fix f = f (fix f).
Fixed-point combinators can be defined in the lambda calculus and in functional programming languages and provide a means to allow for recursive definitions.
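Because Python evaluates strictly, the classic Y combinator must be eta-expanded into the so-called Z combinator to avoid infinite recursion; a minimal illustrative sketch (the names Z and fact are ours):

```python
# Z combinator: a strict-evaluation variant of the Y fixed-point combinator.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursive factorial written without naming itself: Z ties the knot.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))   # 120
```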
Discussed on
- "Fixed-Point Combinator" | 2024-02-23 | 54 Upvotes 26 Comments
🔗 Curry–Howard correspondence
In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.
It is a generalization of a syntactic analogy between systems of formal logic and computational calculi that was first discovered by the American mathematician Haskell Curry and logician William Alvin Howard. It is the link between logic and computation that is usually attributed to Curry and Howard, although the idea is related to the operational interpretation of intuitionistic logic given in various formulations by L. E. J. Brouwer, Arend Heyting and Andrey Kolmogorov (see Brouwer–Heyting–Kolmogorov interpretation) and Stephen Kleene (see Realizability). The relationship has been extended to include category theory as the three-way Curry–Howard–Lambek correspondence.
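The correspondence can be glimpsed in ordinary typed programs. In the illustrative Python sketch below, each type annotation reads as a propositional formula and the function body is its proof (the function names are ours):

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# A -> A : proved by the identity program.
def identity(x: A) -> A:
    return x

# (A and B) -> A : proved by the first projection.
def proj1(p: Tuple[A, B]) -> A:
    return p[0]

# (A -> B) and (B -> C) implies (A -> C) : proved by function composition,
# i.e. transitivity of implication corresponds to composing programs.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda x: g(f(x))
```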
Discussed on
- "Curry–Howard correspondence" | 2018-08-13 | 55 Upvotes 22 Comments
🔗 Program Synthesis
In computer science, program synthesis is the task of constructing a program that provably satisfies a given high-level formal specification. In contrast to program verification, the program is to be constructed rather than given; however, both fields make use of formal proof techniques, and both comprise approaches with different degrees of automation. In contrast to automatic programming techniques, specifications in program synthesis are usually non-algorithmic statements in an appropriate logical calculus.
The primary application of program synthesis is to relieve the programmer of the burden of writing correct, efficient code that satisfies a specification. However, program synthesis also has applications to superoptimization and inference of loop invariants.
Discussed on
- "Program Synthesis" | 2024-04-13 | 54 Upvotes 19 Comments
🔗 Schönhage–Strassen Algorithm
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers. It was developed by Arnold Schönhage and Volker Strassen in 1971. The run-time bit complexity is, in big O notation, O(n · log n · log log n) for two n-digit numbers. The algorithm uses recursive fast Fourier transforms in rings with 2^n + 1 elements, a specific type of number theoretic transform.
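The full algorithm is intricate, but its core idea, that integer multiplication is a convolution of digit sequences which a fast transform computes in quasi-linear time, can be shown with a floating-point FFT toy; the real algorithm instead uses an exact number-theoretic transform modulo 2^n + 1 and recursion (this sketch and its names are illustrative, not the algorithm itself):

```python
import numpy as np

def fft_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by convolving their digit sequences."""
    da = [int(d) for d in str(a)][::-1]      # base-10 digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    n = 1
    while n < len(da) + len(db):             # pad to a power of two
        n *= 2
    conv = np.fft.irfft(np.fft.rfft(da, n) * np.fft.rfft(db, n), n)
    conv = np.rint(conv).astype(np.int64)    # digit-wise convolution of a and b
    result, carry = 0, 0
    for i, c in enumerate(conv):             # propagate carries in base 10
        carry += int(c)
        result += (carry % 10) * 10 ** i
        carry //= 10
    return result + carry * 10 ** n

assert fft_multiply(123456789, 987654321) == 123456789 * 987654321
```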
The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007, when a new method, Fürer's algorithm, was announced with lower asymptotic complexity; however, Fürer's algorithm currently only achieves an advantage for astronomically large values and is used only in Basic Polynomial Algebra Subprograms (BPAS) (see Galactic algorithms).
In practice the Schönhage–Strassen algorithm starts to outperform older methods such as Karatsuba and Toom–Cook multiplication for numbers beyond 2^(2^15) to 2^(2^17) (10,000 to 40,000 decimal digits). The GNU Multi-Precision Library uses it for values of at least 1728 to 7808 64-bit words (33,000 to 150,000 decimal digits), depending on architecture. There is a Java implementation of Schönhage–Strassen which uses it above 74,000 decimal digits.
Applications of the Schönhage–Strassen algorithm include mathematical empiricism, such as the Great Internet Mersenne Prime Search and computing approximations of π, as well as practical applications such as Kronecker substitution, in which multiplication of polynomials with integer coefficients can be efficiently reduced to large integer multiplication; this is used in practice by GMP-ECM for Lenstra elliptic curve factorization.
Discussed on
- "Schönhage–Strassen Algorithm" | 2022-07-14 | 58 Upvotes 7 Comments
🔗 Cell (microprocessor)
Cell is a multi-core microprocessor microarchitecture that combines a general-purpose PowerPC core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.
It was developed by Sony, Toshiba, and IBM, an alliance known as "STI". The architectural design and first implementation were carried out at the STI Design Center in Austin, Texas over a four-year period beginning March 2001 on a budget reported by Sony as approaching US$400 million. Cell is shorthand for Cell Broadband Engine Architecture, commonly abbreviated CBEA in full or Cell BE in part.
The first major commercial application of Cell was in Sony's PlayStation 3 game console, released in 2006. In May 2008, the Cell-based IBM Roadrunner supercomputer became the first system to sustain 1.0 petaflops on the TOP500 LINPACK benchmark. Mercury Computer Systems also developed designs based on the Cell.
The Cell architecture includes a memory coherence architecture that emphasizes power efficiency, prioritizes bandwidth over low latency, and favors peak computational throughput over simplicity of program code. For these reasons, Cell is widely regarded as a challenging environment for software development. IBM provides a Linux-based development platform to help developers program for Cell chips.
Discussed on
- "Cell (microprocessor)" | 2015-10-21 | 40 Upvotes 24 Comments
🔗 Five Minute Rule
In computer science, the five-minute rule is a rule of thumb for deciding whether a data item should be kept in memory, or stored on disk and read back into memory when required. It was first formulated by Jim Gray and Gianfranco Putzolu in 1985, and then subsequently revised in 1997 and 2007 to reflect changes in the relative cost and performance of memory and persistent storage.
The rule is as follows:
The 5-minute random rule: cache randomly accessed disk pages that are re-used every 5 minutes or less.
Gray also issued a counterpart one-minute rule for sequential access:
The 1-minute rule: cache sequentially accessed disk pages that are re-used every 1 minute or less.
Although the 5-minute rule was invented in the realm of databases, it has also been applied elsewhere, for example, in Network File System cache capacity planning.
The original 5-minute rule was derived from the following cost-benefit computation:
- BreakEvenIntervalinSeconds = (PagesPerMBofRAM / AccessesPerSecondPerDisk) × (PricePerDiskDrive / PricePerMBofRAM)
Applying it to 2007 data yields approximately a 90-minute interval for magnetic-disk-to-DRAM caching, 15 minutes for SSD-to-DRAM caching and 2¼ hours for disk-to-SSD caching. The disk-to-DRAM interval thus fell somewhat short of the "five-hour rule" that Gray and Putzolu had anticipated in 1987 for RAM and disks in 2007.
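In code, the computation is a one-liner; the sample inputs below are hypothetical round numbers chosen only to show the shape of the calculation, not the figures Gray used:

```python
def break_even_interval_seconds(pages_per_mb_of_ram: float,
                                accesses_per_second_per_disk: float,
                                price_per_disk_drive: float,
                                price_per_mb_of_ram: float) -> float:
    """Gray's break-even interval: (technology ratio) x (economic ratio)."""
    return (pages_per_mb_of_ram / accesses_per_second_per_disk) * \
           (price_per_disk_drive / price_per_mb_of_ram)

# Illustrative round numbers only (not the 1987 or 2007 inputs): 128 pages per MB
# of RAM, 100 random accesses/s per drive, $400 per drive, $0.05 per MB of RAM.
interval = break_even_interval_seconds(128, 100, 400, 0.05)
print(f"Cache pages re-used within about {interval / 60:.0f} minutes")  # ~171 minutes
```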
According to calculations by NetApp engineer David Dale as reported in The Register, the figures for disc-to-DRAM caching in 2008 were as follows: "The 50KB page break-even was five minutes, the 4KB one was one hour and the 1KB one was five hours. There needed to be a 50-fold increase in page size to cache for break-even at five minutes." Regarding disk-to-SSD caching in 2010, the same source reported that "A 250KB page break even with SLC was five minutes, but five hours with a 4KB page size. It was five minutes with a 625KB page size with MLC flash and 13 hours with a 4KB MLC page size."
In 2000, Gray and Shenoy applied a similar calculation for web page caching and concluded that a browser should "cache web pages if there is any chance they will be re-referenced within their lifetime."
Discussed on
- "Five Minute Rule" | 2015-10-06 | 51 Upvotes 12 Comments
🔗 Esterel – Synchronous programming language for complex, reactive systems
Esterel is a synchronous programming language for the development of complex reactive systems. The imperative programming style of Esterel allows the simple expression of parallelism and preemption. As a consequence, it is well suited for control-dominated model designs.
The development of the language started in the early 1980s, and was mainly carried out by a team from the Ecole des Mines de Paris and INRIA led by Gérard Berry in France. Current compilers take Esterel programs and generate C code or hardware (RTL) implementations (VHDL or Verilog).
The language is still under development, with several compilers available. The commercial version of Esterel is the development environment Esterel Studio. The company that commercialized it (Synfora) initiated a standardization process with the IEEE in April 2007; however, the working group (P1778) dissolved in March 2011. The Esterel v7 Reference Manual, Version v7 30 (the initial IEEE standardization proposal), is publicly available.
Discussed on
- "Esterel – Synchronous programming language for complex, reactive systems" | 2019-12-27 | 56 Upvotes 5 Comments