Topic: Computer science (Page 5)
You are looking at all articles with the topic "Computer science". We found 123 matches.
John C. Reynolds, Eminent Programming Language Researcher, has Died
John Charles Reynolds (June 1, 1935 – April 28, 2013) was an American computer scientist.
Discussed on
- "John C. Reynolds, Eminent Programming Language Researcher, has Died " | 2013-04-29 | 136 Upvotes 15 Comments
Sleeping barber problem
In computer science, the sleeping barber problem is a classic inter-process communication and synchronization problem between multiple operating system processes. The problem is analogous to that of keeping a barber working when there are customers, resting when there are none, and doing so in an orderly manner.
The Sleeping Barber Problem is often attributed to Edsger Dijkstra (1965), one of the pioneers in computer science.
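A minimal sketch of the standard semaphore-based solution in Python follows; the chair count, customer count, and timings are illustrative choices, not part of the original formulation.

```python
import threading, time, random

NUM_CHAIRS = 3       # waiting-room chairs (illustrative)
NUM_CUSTOMERS = 8    # illustrative; the classic statement runs indefinitely

customers = threading.Semaphore(0)     # counts customers waiting to be served
barber_ready = threading.Semaphore(0)  # barber signals that the chair is free
mutex = threading.Lock()               # guards the shared waiting counter
waiting = 0

def barber():
    global waiting
    while True:
        customers.acquire()            # sleep until a customer arrives
        with mutex:
            waiting -= 1
        barber_ready.release()         # call the next customer to the chair
        time.sleep(random.uniform(0.01, 0.05))  # cutting hair

def customer(i):
    global waiting
    with mutex:
        if waiting >= NUM_CHAIRS:      # no free chair: leave
            print(f"customer {i} leaves")
            return
        waiting += 1
    customers.release()                # wake the barber (or queue up)
    barber_ready.acquire()             # wait to be called to the chair
    print(f"customer {i} gets a haircut")

threading.Thread(target=barber, daemon=True).start()
threads = [threading.Thread(target=customer, args=(i,)) for i in range(NUM_CUSTOMERS)]
for t in threads:
    t.start()
    time.sleep(random.uniform(0.0, 0.03))  # staggered arrivals
for t in threads:
    t.join()
```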
Discussed on
- "Sleeping barber problem" | 2017-09-16 | 119 Upvotes 27 Comments
Snobol ("StriNg Oriented and SymBOlic Language")
SNOBOL ("StriNg Oriented and symBOlic Language") is a series of programming languages developed between 1962 and 1967 at AT&T Bell Laboratories by David J. Farber, Ralph E. Griswold and Ivan P. Polonsky, culminating in SNOBOL4. It was one of a number of text-string-oriented languages developed during the 1950s and 1960s; others included COMIT and TRAC.
SNOBOL4 stands apart from most programming languages of its era by having patterns as a first-class data type (i.e. a data type whose values can be manipulated in all ways permitted to any other data type in the programming language) and by providing operators for pattern concatenation and alternation. SNOBOL4 patterns are a type of object and admit various manipulations, much like later object-oriented languages such as JavaScript, whose patterns are known as regular expressions. In addition, SNOBOL4 strings generated during execution can be treated as programs and either interpreted or compiled and executed (as in the eval function of other languages).
SNOBOL4 was quite widely taught in larger U.S. universities in the late 1960s and early 1970s and was widely used in the 1970s and 1980s as a text manipulation language in the humanities.
In the 1980s and 1990s, its use faded as newer languages such as AWK and Perl made string manipulation by means of regular expressions fashionable. SNOBOL4 patterns subsume BNF grammars, which are equivalent to context-free grammars and more powerful than regular expressions. The "regular expressions" in current versions of AWK and Perl are in fact extensions of regular expressions in the traditional sense, but regular expressions, unlike SNOBOL4 patterns, are not recursive, which gives a distinct computational advantage to SNOBOL4 patterns. (Recursive expressions did appear in Perl 5.10, though, released in December 2007.)
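To illustrate what recursion buys: balanced parentheses form a context-free language that classical (non-recursive) regular expressions cannot match, while a recursive SNOBOL4 pattern can. The sketch below uses an equivalent hand-written recursive matcher in Python purely for illustration; it is not SNOBOL code.

```python
# Strings of properly balanced parentheses: expressible with recursive
# SNOBOL4 patterns (or Perl >= 5.10 recursive regexes), but not with
# classical regular expressions.  Here, a recursive-descent equivalent.

def balanced(s, i=0):
    """Return the index reached after matching balanced parens from i, or -1."""
    while i < len(s) and s[i] == '(':
        j = balanced(s, i + 1)      # recurse into the nested group
        if j < 0 or j >= len(s) or s[j] != ')':
            return -1
        i = j + 1                   # continue after the closing paren
    return i

def is_balanced(s):
    return balanced(s) == len(s)

assert is_balanced("(()(()))")
assert not is_balanced("(()")
```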
The later SL5 (1977) and Icon (1978) languages were designed by Griswold to combine the backtracking of SNOBOL4 pattern matching with more standard ALGOL-like structuring.
Discussed on
- "Snobol ("StriNg Oriented and SymBOlic Language")" | 2023-05-03 | 84 Upvotes 56 Comments
Cyclomatic Complexity
Cyclomatic complexity is a software metric used to indicate the complexity of a program. It is a quantitative measure of the number of linearly independent paths through a program's source code. It was developed by Thomas J. McCabe, Sr. in 1976.
Cyclomatic complexity is computed using the control-flow graph of the program: the nodes of the graph correspond to indivisible groups of commands of a program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods or classes within a program.
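For reference, McCabe's metric is usually written in terms of that control-flow graph; the formula below is the standard statement, not quoted in the summary above.

```latex
M = E - N + 2P
\qquad
\begin{aligned}
E &: \text{number of edges of the control-flow graph} \\
N &: \text{number of nodes} \\
P &: \text{number of connected components } (P = 1 \text{ for a single routine})
\end{aligned}
```

For a single-entry, single-exit routine with only binary decisions, this reduces to the number of decision points plus one.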
One testing strategy, called basis path testing by McCabe who first proposed it, is to test each linearly independent path through the program; in this case, the number of test cases will equal the cyclomatic complexity of the program.
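A small hypothetical example of the counting (the function below is illustrative, not from the article): two binary decisions give M = 2 + 1 = 3, so basis path testing asks for three test cases.

```python
def classify(x, y):
    # Two decision points (the conditional expression and the if): M = 2 + 1 = 3.
    label = "small" if x < 10 else "large"   # decision 1
    if y < 0:                                # decision 2
        label += "/negative"
    return label

# Basis path testing: three linearly independent paths, hence three test cases.
assert classify(5, 1) == "small"
assert classify(50, 1) == "large"
assert classify(5, -1) == "small/negative"
```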
Discussed on
- "Cyclomatic Complexity" | 2024-02-08 | 91 Upvotes 49 Comments
Limits of computation
The limits of computation are governed by a number of different factors. In particular, there are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy.
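One concrete example of such an energy limit is Landauer's principle (not named in the summary above), which bounds the energy dissipated when a bit of information is erased; a back-of-the-envelope calculation:

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2).
# At room temperature (~300 K) this gives a rough energy limit on computation.
k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # kelvin
per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {per_bit:.3e} J per bit erased")
# -> roughly 2.9e-21 J; one joule suffices to erase on the order of 3e20 bits.
```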
Discussed on
- "Limits of computation" | 2017-04-28 | 121 Upvotes 18 Comments
Go! -- the non-Google programming language by Keith Clark and Francis McCabe.
Go! is an agent-based programming language in the tradition of logic-based programming languages like Prolog. It was introduced in a 2003 paper by Francis McCabe and Keith Clark.
Discussed on
- "Go! -- the non-Google programming language by Keith Clark and Francis McCabe." | 2009-11-11 | 82 Upvotes 49 Comments
L-System
An L-system or Lindenmayer system is a parallel rewriting system and a type of formal grammar. An L-system consists of an alphabet of symbols that can be used to make strings, a collection of production rules that expand each symbol into some larger string of symbols, an initial "axiom" string from which to begin construction, and a mechanism for translating the generated strings into geometric structures. L-systems were introduced and developed in 1968 by Aristid Lindenmayer, a Hungarian theoretical biologist and botanist at the University of Utrecht. Lindenmayer used L-systems to describe the behaviour of plant cells and to model the growth processes of plant development. L-systems have also been used to model the morphology of a variety of organisms and can be used to generate self-similar fractals.
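A minimal sketch of the rewriting mechanism, using Lindenmayer's original "algae" system (alphabet {A, B}, axiom A, rules A → AB and B → A); the drawing step that turns strings into geometry is omitted.

```python
# Lindenmayer's original "algae" L-system.
RULES = {"A": "AB", "B": "A"}

def rewrite(s, rules):
    # Apply every production in parallel: each symbol is replaced in one pass.
    return "".join(rules.get(ch, ch) for ch in s)

s = "A"                      # the axiom
for n in range(6):
    print(n, s)
    s = rewrite(s, RULES)
# Generations: A, AB, ABA, ABAAB, ABAABABA, ...
# (string lengths follow the Fibonacci numbers)
```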
Discussed on
- "L-System" | 2020-12-17 | 112 Upvotes 17 Comments
Piet is a programming language, whose programs look like abstract art.
An esoteric programming language (sometimes shortened to esolang) is a programming language designed to test the boundaries of computer programming language design, as a proof of concept, as software art, as a hacking interface to another language (particularly functional programming or procedural programming languages), or as a joke. The use of "esoteric" distinguishes these languages from programming languages that working developers use to write software. Usually, an esolang's creators do not intend the language to be used for mainstream programming, although some esoteric features, such as visuospatial syntax, have inspired practical applications in the arts. Such languages are often popular among hackers and hobbyists.
Usability is rarely a goal for esoteric programming language designers; often the design leads to quite the opposite. Their usual aim is to remove or replace conventional language features while still maintaining a language that is Turing-complete, or even one for which the computational class is unknown.
Discussed on
- "Piet is a programming language, whose programs look like abstract art." | 2010-03-04 | 99 Upvotes 28 Comments
Turing Tarpit
A Turing tarpit (or Turing tar-pit) is any programming language or computer interface that allows for flexibility in function but is difficult to learn and use because it offers little or no support for common tasks. The phrase was coined in 1982 by Alan Perlis in the Epigrams on Programming:
54. Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
In any Turing complete language, it is possible to write any computer program, so in a very rigorous sense nearly all programming languages are equally capable. Showing that theoretical ability is not the same as usefulness in practice, Turing tarpits are characterized by having a simple abstract machine that requires the user to deal with many details in the solution of a problem. At the extreme opposite are interfaces that can perform very complex tasks with little human intervention but become obsolete if requirements change slightly.
Some esoteric programming languages, such as Brainfuck, are specifically referred to as "Turing tarpits" because they deliberately implement the minimum functionality necessary to be classified as Turing complete languages. Using such languages is a form of mathematical recreation: programmers can work out how to achieve basic programming constructs in an extremely difficult but mathematically Turing-equivalent language.
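As a sketch of how little machinery such a tarpit needs, here is a minimal Brainfuck interpreter in Python; the input command is omitted and cells wrap modulo 256, and the test program is the well-known "Hello World!" example.

```python
def brainfuck(code, data_size=30000):
    """Minimal Brainfuck interpreter: a handful of commands over a byte tape."""
    tape = [0] * data_size
    out = []
    jump, stack = {}, []            # pre-match brackets for loop jumps
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    ip = dp = 0                     # instruction pointer, data pointer
    while ip < len(code):
        c = code[ip]
        if c == ">":
            dp += 1
        elif c == "<":
            dp -= 1
        elif c == "+":
            tape[dp] = (tape[dp] + 1) % 256
        elif c == "-":
            tape[dp] = (tape[dp] - 1) % 256
        elif c == ".":
            out.append(chr(tape[dp]))
        elif c == "[" and tape[dp] == 0:
            ip = jump[ip]
        elif c == "]" and tape[dp] != 0:
            ip = jump[ip]
        ip += 1
    return "".join(out)

# The classic "Hello World!" program.
print(brainfuck("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
                ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."))
```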
Discussed on
- "Turing Tarpit" | 2015-11-14 | 65 Upvotes 30 Comments
- "Turing tarpit" | 2013-05-18 | 23 Upvotes 4 Comments
History of the Monte Carlo method
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
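A minimal sketch of the idea, estimating π by uniform random sampling (sample size and seed are arbitrary):

```python
import random

def estimate_pi(n_samples=1_000_000, seed=0):
    # Draw points uniformly in the unit square; the fraction landing inside
    # the quarter circle x^2 + y^2 <= 1 approaches pi/4 by the law of large numbers.
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / n_samples

print(estimate_pi())   # ~3.14, with error shrinking like O(1/sqrt(n_samples))
```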
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
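A hedged sketch of the MCMC recipe described above: a random-walk Metropolis sampler whose stationary distribution is a standard normal (the target, step size, and chain length are illustrative choices).

```python
import math, random

def metropolis(log_target, n_steps=50_000, step=1.0, x0=0.0, seed=0):
    """Random-walk Metropolis: a Markov chain whose stationary distribution
    is the target; empirical averages of its states then approximate
    expectations under the target (ergodic theorem)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        delta = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, delta)):   # Metropolis acceptance
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal (an unnormalized log-density is enough).
samples = metropolis(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(mean, var)   # should be close to 0 and 1
```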
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances we are given a flow of probability distributions with an increasing level of sampling complexity (path space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology "mean field" reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
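A hedged sketch of the mean-field particle idea: simulate many copies of a toy McKean–Vlasov diffusion whose drift depends on the law of the state, replacing that unknown law by the particles' empirical mean (the drift, parameters, and initial condition below are illustrative, not from the article).

```python
import random

def mean_field_particles(n_particles=1000, n_steps=200, dt=0.01,
                         sigma=1.0, seed=0):
    """Particle approximation of a toy McKean–Vlasov diffusion
    dX_t = -(X_t - E[X_t]) dt + sigma dW_t: each particle's drift uses the
    empirical mean of all particles in place of the unknown expectation."""
    rng = random.Random(seed)
    xs = [rng.gauss(5.0, 2.0) for _ in range(n_particles)]   # initial cloud
    for _ in range(n_steps):
        emp_mean = sum(xs) / n_particles       # empirical measure (its mean)
        xs = [
            x - (x - emp_mean) * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            for x in xs
        ]
    return xs

xs = mean_field_particles()
print(sum(xs) / len(xs))   # the cloud's mean stays close to its initial value (~5)
```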
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
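Because the samples are independent, the work parallelizes trivially; a sketch using Python's standard-library process pool (worker and sample counts are arbitrary):

```python
import random
from concurrent.futures import ProcessPoolExecutor

def partial_count(args):
    # Each worker draws its own independent batch of samples.
    n, seed = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(total=4_000_000, workers=4):
    batch = total // workers
    jobs = [(batch, seed) for seed in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        inside = sum(pool.map(partial_count, jobs))
    return 4.0 * inside / (batch * workers)

if __name__ == "__main__":       # required for process pools on some platforms
    print(parallel_pi())
```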
Discussed on
- "History of the Monte Carlo method" | 2022-09-18 | 94 Upvotes 26 Comments