Topic: Computing (Page 16)

You are looking at all articles with the topic "Computing". We found 481 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 Curse of dimensionality

🔗 Computing 🔗 Mathematics 🔗 Statistics 🔗 Cognitive science

The curse of dimensionality refers to various phenomena that arise when analyzing and organizing data in high-dimensional spaces (often with hundreds or thousands of dimensions) that do not occur in low-dimensional settings such as the three-dimensional physical space of everyday experience. The expression was coined by Richard E. Bellman when considering problems in dynamic programming.

Cursed phenomena occur in domains such as numerical analysis, sampling, combinatorics, machine learning, data mining and databases. The common theme of these problems is that when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. This sparsity is problematic for any method that requires statistical significance. In order to obtain a statistically sound and reliable result, the amount of data needed to support the result often grows exponentially with the dimensionality. Also, organizing and searching data often relies on detecting areas where objects form groups with similar properties; in high dimensional data, however, all objects appear to be sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient.
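One way to see this sparsity effect directly is to sample points uniformly in a unit hypercube and watch pairwise distances concentrate as the dimension grows. The sketch below is a minimal illustration in Python; the dimensions and sample size are arbitrary choices, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_spread(dim, n_points=1000):
    """Sample points uniformly in the unit hypercube and return the
    nearest and farthest distances from the first point to the rest."""
    pts = rng.random((n_points, dim))
    d = np.linalg.norm(pts[1:] - pts[0], axis=1)
    return d.min(), d.max()

for dim in (2, 10, 100, 1000):
    dmin, dmax = distance_spread(dim)
    # As dim grows, the ratio max/min approaches 1: every point looks roughly
    # as far away as every other, so "nearest neighbour" loses discriminative power.
    print(f"dim={dim:5d}  min={dmin:7.2f}  max={dmax:7.2f}  ratio={dmax/dmin:5.2f}")
```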

Discussed on

🔗 Hofstadter's Law

🔗 Computing 🔗 Systems 🔗 Business 🔗 Computing/Software 🔗 Computing/Computer science 🔗 Engineering 🔗 Systems/Systems engineering

Hofstadter's law is a self-referential adage, coined by Douglas Hofstadter in his book Gödel, Escher, Bach: An Eternal Golden Braid (1979) to describe the widely experienced difficulty of accurately estimating the time it will take to complete tasks of substantial complexity:

Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.

The law is often cited by programmers in discussions of techniques to improve productivity, such as The Mythical Man-Month or extreme programming.

Discussed on

🔗 Null Island

🔗 Computing 🔗 Geography

Null Island is a name for the area around the point where the prime meridian and the equator cross, located in international waters in the Gulf of Guinea (Atlantic Ocean) off the West African coast. In the WGS84 datum, this is at zero degrees latitude and longitude (0°N 0°E), and is the location of a buoy. The name 'Null Island' is both a joke premised on the supposed existence of an island there and a name to which coordinates erroneously set to 0,0 are assigned in place-name databases so that such errors can be found and fixed more easily. The nearest land is a small islet offshore of Ghana, between Akwidaa and Dixcove at 4°45′30″N 1°58′33″W, 307.8 nmi (354.2 mi; 570.0 km) to the north.
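In data-cleaning terms, records that default to 0,0 are straightforward to flag. Here is a minimal, hypothetical sketch in Python; the field names and records are invented for illustration:

```python
# Hypothetical records; real place-name databases have richer schemas.
places = [
    {"name": "Accra", "lat": 5.6037, "lon": -0.1870},
    {"name": "Unknown venue", "lat": 0.0, "lon": 0.0},  # geocoding likely failed
]

# Coordinates of exactly (0, 0) almost always mean "missing", not Null Island.
suspect = [p for p in places if p["lat"] == 0.0 and p["lon"] == 0.0]
for p in suspect:
    print(f"check geocoding for: {p['name']}")
```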

Discussed on

🔗 Turing Tarpit

🔗 Computing 🔗 Computer science 🔗 Computing/Software

A Turing tarpit (or Turing tar-pit) is any programming language or computer interface that allows for flexibility in function but is difficult to learn and use because it offers little or no support for common tasks. The phrase was coined in 1982 by Alan Perlis in the Epigrams on Programming:

54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.

In any Turing complete language, it is possible to write any computer program, so in a very rigorous sense nearly all programming languages are equally capable. Turing tarpits show that this theoretical ability is not the same as usefulness in practice: they are characterized by a simple abstract machine that requires the user to deal with many details in the solution of a problem. At the extreme opposite are interfaces that can perform very complex tasks with little human intervention but become obsolete if requirements change slightly.

Some esoteric programming languages, such as Brainfuck, are specifically referred to as "Turing tarpits" because they deliberately implement the minimum functionality necessary to be classified as Turing complete languages. Using such languages is a form of mathematical recreation: programmers can work out how to achieve basic programming constructs in an extremely difficult but mathematically Turing-equivalent language.
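To make the "simple abstract machine" point concrete, the whole of Brainfuck fits in a few dozen lines of interpreter. The sketch below is an illustrative Python implementation; the tape size and end-of-input behaviour are common conventions assumed here rather than mandated by the language.

```python
def brainfuck(code, input_text=""):
    """Interpret Brainfuck: a tape of byte cells, a data pointer, and eight
    single-character commands. Everything else is the programmer's problem."""
    tape, ptr, out, inp = [0] * 30000, 0, [], iter(input_text)
    # Pre-match brackets so '[' and ']' can jump in O(1).
    jumps, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = ord(next(inp, "\0"))
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]  # skip the loop body
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]  # jump back to the matching '['
        pc += 1
    return "".join(out)

# 8 * 8 + 1 = 65 = ASCII 'A'; even printing one letter takes a loop.
print(brainfuck("++++++++[>++++++++<-]>+."))
```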

Discussed on

🔗 Crypto-Anarchism

🔗 Mass surveillance 🔗 Computing 🔗 Internet culture 🔗 Philosophy 🔗 Cryptography 🔗 Cryptography/Computer science 🔗 Numismatics 🔗 Sociology 🔗 Numismatics/Cryptocurrency 🔗 Computing/Computer Security 🔗 Philosophy/Anarchism 🔗 Anarchism

Crypto-anarchism (or crypto-anarchy) is a political ideology focusing on protection of privacy, political freedom and economic freedom, the adherents of which use cryptographic software for confidentiality and security while sending and receiving information over computer networks.

Cryptographic software makes the association between the identity of a certain user or organization and the pseudonym they use difficult to establish, unless the user reveals the association. It is difficult to say which country's laws will be ignored, as even the location of a given participant is unknown. However, participants may in theory voluntarily create new laws using smart contracts or, if the user is pseudonymous, depend on online reputation.

Discussed on

🔗 History of the Monte Carlo method

🔗 Computing 🔗 Computer science 🔗 Mathematics 🔗 Physics 🔗 Statistics

Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
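As a minimal illustration of repeated random sampling (an example chosen for this listing, not drawn from the article), the classic toy problem is estimating π from the fraction of uniform points in the unit square that land inside the quarter circle:

```python
import random

def estimate_pi(n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of pi: the hit fraction tends to pi/4."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n_samples

print(estimate_pi())  # ~3.14; the error shrinks like 1/sqrt(n_samples)
```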

In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).

Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.

In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
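A minimal sketch of the MCMC idea described above, using a random-walk Metropolis sampler; the target (a standard normal, specified only up to a constant) and the step size are illustrative assumptions:

```python
import math
import random

def metropolis(log_target, n_steps=50_000, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis: propose x' = x + noise and accept with
    probability min(1, target(x') / target(x)). The chain's stationary
    distribution is the target, so long-run samples approximate it."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target density proportional to exp(-x^2 / 2), i.e. a standard normal.
samples = metropolis(lambda x: -0.5 * x * x)
print(f"empirical mean ~ {sum(samples) / len(samples):.3f}")  # should be near 0
```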

In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances we are given a flow of probability distributions with an increasing level of sampling complexity (path space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
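As a toy illustration of the mean-field particle idea (an invented example, not one of the models cited above): simulate N copies of a process whose drift depends on the law of the state, and at each step replace that unknown law with the particles' empirical mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_field_particles(n_particles=1000, n_steps=200, dt=0.01, sigma=0.5):
    """Toy McKean-Vlasov-style dynamics: each particle drifts toward the
    empirical mean of all particles (standing in for the unknown law of
    the state) and is perturbed by Gaussian noise."""
    x = rng.normal(loc=5.0, scale=2.0, size=n_particles)  # arbitrary initial law
    for _ in range(n_steps):
        drift = x.mean() - x  # the empirical measure replaces the true distribution
        x = x + dt * drift + sigma * np.sqrt(dt) * rng.normal(size=n_particles)
    return x

final = mean_field_particles()
print(f"empirical mean {final.mean():.2f}, spread {final.std():.2f}")
```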

Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
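Because the samples are independent, the cost can be spread across workers with no coordination. A minimal sketch using the standard library and the quarter-circle estimator from above (the worker and sample counts are arbitrary):

```python
import random
from concurrent.futures import ProcessPoolExecutor

def count_inside(args):
    """Count uniform points in the unit square that land inside the quarter circle."""
    n, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

if __name__ == "__main__":
    n_workers, n_per_worker = 8, 250_000
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        # Each worker draws its own independent stream; partial counts just add up.
        hits = sum(pool.map(count_inside, [(n_per_worker, s) for s in range(n_workers)]))
    print(4 * hits / (n_workers * n_per_worker))  # ~pi
```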

Discussed on

🔗 Bi-directional text

🔗 Computing 🔗 Computing/Software 🔗 Writing systems 🔗 Typography

A bidirectional text contains both text directionalities, right-to-left (RTL or dextrosinistral) and left-to-right (LTR or sinistrodextral). It generally involves text containing different types of alphabets, but may also refer to boustrophedon, which is changing text direction in each row.

Some writing systems, including the Arabic and Hebrew scripts and systems derived from them such as the Persian, Urdu, and Yiddish scripts, are written in a form known as right-to-left (RTL), in which writing begins at the right-hand side of a page and concludes at the left-hand side. This differs from the left-to-right (LTR) direction used by the dominant Latin script. When LTR text is mixed with RTL in the same paragraph, each type of text is written in its own direction, which is known as bidirectional text. This can get rather complex when multiple levels of quotation are used.

Many computer programs fail to display bidirectional text correctly. For example, the Hebrew name Sarah (שרה) is spelled: sin (ש) (which appears rightmost), then resh (ר), and finally heh (ה) (which should appear leftmost).

Note: Some web browsers may display the Hebrew text in this article in the opposite direction.
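Logical order versus display order can also be inspected programmatically; for instance, Python's unicodedata module reports each character's bidirectional class. A small illustrative check (the string is the same name Sarah, written with escape sequences so the source stays in logical order):

```python
import unicodedata

# "Sarah" in Hebrew, stored in logical order: shin, resh, heh.
sarah = "\u05e9\u05e8\u05d4"

for ch in "abc " + sarah:
    # 'L' = left-to-right, 'R' = right-to-left, 'WS' = whitespace.
    print(f"U+{ord(ch):04X}  bidi class: {unicodedata.bidirectional(ch)}")
```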

Discussed on

🔗 Reservoir computing

🔗 Computing

Reservoir computing is a framework for computation that may be viewed as an extension of neural networks. Typically an input signal is fed into a fixed (random) dynamical system called a reservoir, and the dynamics of the reservoir map the input to a higher-dimensional space. A simple readout mechanism is then trained to read the state of the reservoir and map it to the desired output. The main benefit is that training is performed only at the readout stage while the reservoir stays fixed. Liquid-state machines and echo state networks are two major types of reservoir computing. An important feature of the approach is that, unlike conventional neural networks, it can exploit the computational power of naturally available dynamical systems, which reduces computational cost.
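A minimal echo state network sketch in NumPy shows the division of labour described above: the reservoir's random recurrent weights stay fixed, and only a linear readout is trained, here by ridge regression. All sizes, scalings, and the toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(inputs, n_reservoir=200, spectral_radius=0.9, leak=0.3):
    """Drive a fixed random reservoir with a 1-D input signal and return the
    sequence of reservoir states; nothing in this function is trained."""
    w_in = rng.uniform(-0.5, 0.5, size=n_reservoir)
    w = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
    w *= spectral_radius / max(abs(np.linalg.eigvals(w)))  # keep the dynamics stable
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(w_in * u + w @ x)  # leaky update
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])

states = run_reservoir(u)
# Train only the readout: ridge regression from reservoir states to targets.
ridge = 1e-6
w_out = np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                        states.T @ y)
pred = states @ w_out
print("readout mean-squared error:", float(np.mean((pred - y) ** 2)))
```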

Discussed on

🔗 Wirth's Law

🔗 Computing 🔗 Computing/Software 🔗 Computing/Computer science

Wirth's law is an adage on computer performance which states that software is getting slower more rapidly than hardware is becoming faster.

The adage is named after Niklaus Wirth, who discussed it in his 1995 article "A Plea for Lean Software".

Discussed on

🔗 XML Appliance

🔗 Computing 🔗 Computing/Computer hardware

An XML appliance is a special-purpose network device used to secure, manage and mediate XML traffic. They are most commonly deployed in service-oriented architectures (SOA) to control XML-based web services traffic, and increasingly in cloud-oriented computing to help enterprises integrate on-premises applications with off-premises cloud-hosted applications. XML appliances are also commonly referred to as SOA appliances, SOA gateways, XML gateways, and cloud brokers. Some have also been deployed for more specific applications such as message-oriented middleware. While the originators of the product category shipped exclusively as hardware, today most XML appliances are also available as software gateways and virtual appliances for environments such as VMware.

Discussed on