Topic: Computer science (Page 12)

You are looking at all articles with the topic "Computer science". We found 123 matches.

πŸ”— Continuation-Passing Style

πŸ”— Computing πŸ”— Computer science

In functional programming, continuation-passing style (CPS) is a style of programming in which control is passed explicitly in the form of a continuation. This is contrasted with direct style, which is the usual style of programming. Gerald Jay Sussman and Guy L. Steele, Jr. coined the phrase in AI Memo 349 (1975), which sets out the first version of the Scheme programming language. John C. Reynolds gives a detailed account of the numerous discoveries of continuations.

A function written in continuation-passing style takes an extra argument: an explicit "continuation"; i.e., a function of one argument. When the CPS function has computed its result value, it "returns" it by calling the continuation function with this value as the argument. That means that when invoking a CPS function, the calling function is required to supply a procedure to be invoked with the subroutine's "return" value. Expressing code in this form makes a number of things explicit which are implicit in direct style. These include: procedure returns, which become apparent as calls to a continuation; intermediate values, which are all given names; order of argument evaluation, which is made explicit; and tail calls, which simply call a procedure with the same continuation, unmodified, that was passed to the caller.
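The contrast with direct style can be sketched in Python (the function names here are illustrative, not part of any standard):

```python
# Direct style: the result is returned to the caller.
def pyth(a, b):
    return a * a + b * b

# Continuation-passing style: every function takes an extra argument k
# (the continuation) and "returns" by calling k with its result.
def mul_cps(a, b, k):
    k(a * b)

def add_cps(a, b, k):
    k(a + b)

def pyth_cps(a, b, k):
    # Intermediate values (a2, b2) are all named, the order of argument
    # evaluation is explicit, and the final add is a tail call that
    # passes along the caller's continuation k unchanged.
    mul_cps(a, a, lambda a2:
        mul_cps(b, b, lambda b2:
            add_cps(a2, b2, k)))

pyth_cps(3, 4, print)  # prints 25
```

Note how each feature the paragraph lists (returns as continuation calls, named intermediates, explicit evaluation order, tail calls reusing the continuation) is visible in `pyth_cps`.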

Programs can be automatically transformed from direct style to CPS. Functional and logic compilers often use CPS as an intermediate representation where a compiler for an imperative or procedural programming language would use static single assignment form (SSA). SSA is formally equivalent to a subset of CPS (excluding non-local control flow, which does not occur when CPS is used as an intermediate representation). Functional compilers can also use A-normal form (ANF), but only for languages requiring eager evaluation, rather than CPS with 'thunks'. CPS is used more frequently by compilers than by programmers as a local or global style.

πŸ”— Anti-pattern

πŸ”— Computer science

An anti-pattern is a common response to a recurring problem that is usually ineffective and risks being highly counterproductive. The term, coined in 1995 by Andrew Koenig, was inspired by a book, Design Patterns, which highlights a number of design patterns in software development that its authors considered to be highly reliable and effective.

The term was popularized three years later by the book AntiPatterns, which extended its use beyond the field of software design to refer informally to any commonly reinvented but bad solution to a problem. Examples include analysis paralysis, cargo cult programming, death march, groupthink and vendor lock-in.

πŸ”— Unum, a Better Number Format

πŸ”— Computing πŸ”— Computer science πŸ”— Computing/Software

Unums (universal numbers) are a family of number formats and arithmetic for implementing real numbers on a computer, proposed by John L. Gustafson in 2015. They are designed as an alternative to the ubiquitous IEEE 754 floating-point standard. The latest version is known as posits.

πŸ”— Maximum Sum Subarray Problem

πŸ”— Computer science

In computer science, the maximum sum subarray problem, also known as the maximum segment sum problem, is the task of finding a contiguous subarray with the largest sum within a given one-dimensional array A[1...n] of numbers. Formally, the task is to find indices i and j with 1 ≀ i ≀ j ≀ n, such that the sum

βˆ‘_{x=i}^{j} A[x]

is as large as possible. (Some formulations of the problem also allow the empty subarray to be considered; by convention, the sum of all values of the empty subarray is zero.) Each number in the input array A could be positive, negative, or zero.

For example, for the array of values [βˆ’2, 1, βˆ’3, 4, βˆ’1, 2, 1, βˆ’5, 4], the contiguous subarray with the largest sum is [4, βˆ’1, 2, 1], with sum 6.

Some properties of this problem are:

  1. If the array contains all non-negative numbers, then the problem is trivial; a maximum subarray is the entire array.
  2. If the array contains all non-positive numbers, then a solution is any subarray of size 1 containing the maximal value of the array (or the empty subarray, if it is permitted).
  3. Several different sub-arrays may have the same maximum sum.

This problem can be solved using several different algorithmic techniques, including brute force, divide and conquer, dynamic programming, and reduction to shortest paths.
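The dynamic-programming approach mentioned above is commonly known as Kadane's algorithm, which runs in O(n) time. A minimal sketch for the non-empty-subarray formulation (the function name is mine):

```python
def max_subarray(a):
    """Kadane's algorithm for the maximum sum subarray problem.

    best_here is the largest sum of a subarray ending at the current
    index; best_so_far is the largest sum seen over any position.
    """
    best_so_far = best_here = a[0]  # non-empty subarrays only
    for x in a[1:]:
        # Either extend the best subarray ending just before x,
        # or start a fresh subarray at x.
        best_here = max(x, best_here + x)
        best_so_far = max(best_so_far, best_here)
    return best_so_far

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # prints 6
```

On the example array from the text, the maximum is 6, achieved by [4, βˆ’1, 2, 1]. For an all-non-positive array the function returns the single largest element, matching property 2 above.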

πŸ”— Cargo cult programming

πŸ”— Computing πŸ”— Computer science πŸ”— Computing/Software πŸ”— Computing/Computer science

Cargo cult programming is a style of computer programming characterized by the ritual inclusion of code or program structures that serve no real purpose. Cargo cult programming is symptomatic of a programmer not understanding either a bug they were attempting to solve or the apparent solution (compare shotgun debugging, deep magic). The term cargo cult programmer may apply when an unskilled or novice computer programmer (or one inexperienced with the problem at hand) copies some program code from one place to another with little understanding of how it works or whether it is required.

Cargo cult programming can also refer to the practice of blindly applying a design pattern or coding style without understanding the reasons behind that design principle. Examples include adding unnecessary comments to self-explanatory code, overzealous adherence to the conventions of a programming paradigm, and adding deletion code for objects that garbage collection would collect automatically.

Obsessive and redundant checks for null values or testing whether a collection is empty before iterating its values may be a sign of cargo cult programming. Such obsessive checks make the code less readable, and often prevent the output of proper error messages, obscuring the real cause of a misbehaving program.

πŸ”— Ruby (Programming Language)

πŸ”— Computing πŸ”— Computer science πŸ”— Computing/Software πŸ”— Computing/Free and open-source software

Ruby is an interpreted, high-level, general-purpose programming language which supports multiple programming paradigms. It was designed with an emphasis on programming productivity and simplicity. In Ruby, everything is an object, including primitive data types. It was developed in the mid-1990s by Yukihiro "Matz" Matsumoto in Japan.

Ruby is dynamically typed and uses garbage collection and just-in-time compilation. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. According to the creator, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, BASIC, Java and Lisp.

πŸ”— Needleman-Wunsch Algorithm

πŸ”— Computer science πŸ”— Molecular and Cell Biology πŸ”— Computational Biology

The Needleman–Wunsch algorithm is an algorithm used in bioinformatics to align protein or nucleotide sequences. It was one of the first applications of dynamic programming to compare biological sequences. The algorithm was developed by Saul B. Needleman and Christian D. Wunsch and published in 1970. The algorithm essentially divides a large problem (e.g. the full sequence) into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem. It is also sometimes referred to as the optimal matching algorithm and the global alignment technique. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. The algorithm assigns a score to every possible alignment, and the purpose of the algorithm is to find all possible alignments having the highest score.
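The dynamic program can be sketched as a score-only Python implementation (the +1/βˆ’1/βˆ’1 match/mismatch/gap values are one common illustrative scoring scheme; the algorithm itself takes the scores as parameters):

```python
def needleman_wunsch(s, t, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the Needleman-Wunsch recurrence.

    F[i][j] holds the best score for aligning the prefixes s[:i]
    and t[:j]; each cell is the max of a diagonal (match/mismatch)
    step and two gap steps.
    """
    n, m = len(s), len(t)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap          # s[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap          # t[:j] aligned entirely against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = F[i - 1][j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            F[i][j] = max(diag,
                          F[i - 1][j] + gap,   # gap in t
                          F[i][j - 1] + gap)   # gap in s
    return F[n][m]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # prints 0
```

Recovering an actual alignment (rather than just its score) requires a traceback through F, omitted here for brevity.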

πŸ”— Pearson Hashing

πŸ”— Computer science

Pearson hashing is a hash function designed for fast execution on processors with 8-bit registers. Given an input consisting of any number of bytes, it produces as output a single byte that is strongly dependent on every byte of the input. Its implementation requires only a few instructions, plus a 256-byte lookup table containing a permutation of the values 0 through 255.

This hash function is a CBC-MAC that uses an 8-bit substitution cipher implemented via the substitution table. An 8-bit cipher has negligible cryptographic security, so the Pearson hash function is not cryptographically strong, but it is useful for implementing hash tables or as a data integrity check code, for which purposes it offers these benefits:

  • It is extremely simple.
  • It executes quickly on resource-limited processors.
  • There is no simple class of inputs for which collisions (identical outputs) are especially likely.
  • Given a small, privileged set of inputs (e.g., reserved words for a compiler), the permutation table can be adjusted so that those inputs yield distinct hash values, producing what is called a perfect hash function.
  • Two input strings differing by exactly one character never collide. E.g., applying the algorithm on the strings ABC and AEC will never produce the same value.

One of its drawbacks compared with other hashing algorithms designed for 8-bit processors is the suggested 256-byte lookup table, which can be prohibitively large for a small microcontroller with a program memory size on the order of hundreds of bytes. A workaround is to use a simple permutation function instead of a table stored in program memory. However, too simple a function, such as T[i] = 255 βˆ’ i, partly defeats its usability as a hash function, because anagrams will produce the same hash value; too complex a function, on the other hand, affects speed negatively. Using a function rather than a table also allows the block size to be extended. Such functions naturally have to be bijective, like their table variants.

The algorithm can be described by the following pseudocode, which computes the hash of messageΒ C using the permutation tableΒ T:

algorithm pearson hashing is
    hΒ := 0

    for each c in C loop
        hΒ := T[ h xor c ]
    end loop

    return h

The hash variable (h) may be initialized differently, e.g. to the length of the data (C) modulo 256; this particular choice is used in the Python implementation example below.
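A minimal Python sketch of the pseudocode above, using the length-modulo-256 initialization just described (the permutation table here is an arbitrary deterministic shuffle for illustration, not a standardized table):

```python
import random

def pearson_hash(data: bytes, table) -> int:
    """Pearson hash of data using table, a permutation of 0..255."""
    h = len(data) % 256            # initialize h to the length mod 256
    for c in data:
        h = table[h ^ c]           # one step of the 8-bit substitution
    return h

# Any permutation of 0..255 works as T; this deterministic shuffle
# is illustrative only.
T = list(range(256))
random.Random(0).shuffle(T)

# Two strings differing in exactly one character never collide,
# regardless of which permutation is used.
assert pearson_hash(b"ABC", T) != pearson_hash(b"AEC", T)
```

The no-collision property follows because the table is a bijection: equal prefixes leave h equal, the differing byte forces the XORed indices (and hence the table outputs) apart, and every subsequent step is a bijection on h, so the final values stay distinct.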

πŸ”— Memetics

πŸ”— Computer science πŸ”— Philosophy πŸ”— Psychology πŸ”— Philosophy/Social and political philosophy πŸ”— Philosophy/Philosophy of science πŸ”— Philosophy/Contemporary philosophy πŸ”— Philosophy/Philosophy of mind πŸ”— Sociology πŸ”— Cultural Evolution

Memetics is a theory of the evolution of culture based on Darwinian principles with the meme as the unit of culture. The term "meme" was coined by biologist Richard Dawkins in his 1976 book The Selfish Gene, to illustrate the principle that he later called "Universal Darwinism". All evolutionary processes depend on information being copied, varied, and selected, a process also known as variation with selective retention. The information that is copied is called the replicator, and genes are the replicator for biological evolution. Dawkins proposed that the same process drives cultural evolution, and he called this second replicator the "meme". He gave as examples, tunes, catchphrases, fashions, and technologies. Like genes, memes are selfish replicators and have causal efficacy; in other words, their properties influence their chances of being copied and passed on. Some succeed because they are valuable or useful to their human hosts while others are more like viruses.

Just as genes can work together to form co-adapted gene complexes, so groups of memes acting together form co-adapted meme complexes or memeplexes. Memeplexes include (among many other things) languages, traditions, scientific theories, financial institutions, and religions. Dawkins famously referred to religions as "viruses of the mind".

Among proponents of memetics is psychologist Susan Blackmore, author of The Meme Machine, who argues that when our ancestors began imitating behaviours, they let loose a second replicator and co-evolved to become the "meme machines" that copy, vary, and select memes in culture. Philosopher Daniel Dennett develops memetics extensively, notably in his books Darwin's Dangerous Idea and From Bacteria to Bach and Back. He describes the units of memes as "the smallest elements that replicate themselves with reliability and fecundity" and claims that "human consciousness is itself a huge complex of memes." In The Beginning of Infinity, physicist David Deutsch contrasts static societies, which depend on anti-rational memes suppressing innovation and creativity, with dynamic societies based on rational memes that encourage enlightenment values, scientific curiosity, and progress.

Criticisms of memetics include claims that memes do not exist, that the analogy with genes is false, that the units cannot be specified, that culture does not evolve through imitation, and that the sources of variation are intelligently designed rather than random. Critics of memetics include biologist Stephen Jay Gould, who called memetics a "meaningless metaphor". Philosopher Dan Sperber argues against memetics as a viable approach to cultural evolution because cultural items are not directly copied or imitated but are reproduced. Anthropologist Robert Boyd and biologist Peter Richerson work within the alternative, and more mainstream, field of cultural evolution theory and gene-culture coevolution. Dual inheritance theory has much in common with memetics but rejects the idea that memes are replicators. From this perspective, memetics is seen as just one of several approaches to cultural evolution, and one that is generally considered less useful than the alternatives of gene-culture coevolution or dual inheritance theory. The main difference is that dual inheritance theory ultimately depends on biological advantage to genes, whereas memetics treats memes as a second replicator in its own right. Memetics also extends to the analysis of Internet culture and Internet memes.

πŸ”— Jon Postel

πŸ”— Biography πŸ”— Internet πŸ”— Computing πŸ”— Computer science πŸ”— Biography/science and academia πŸ”— Computing/Computer science πŸ”— Computing/Early computers πŸ”— Computing/Networking

Jonathan Bruce Postel (August 6, 1943 – October 16, 1998) was an American computer scientist who made many significant contributions to the development of the Internet, particularly with respect to standards. He is known principally for being the Editor of the Request for Comment (RFC) document series, for Simple Mail Transfer Protocol (SMTP), and for administering the Internet Assigned Numbers Authority (IANA) until his death. In his lifetime he was known as the "god of the Internet" for his comprehensive influence on the medium.

The Internet Society's Postel Award is named in his honor, as is the Postel Center at Information Sciences Institute, University of Southern California. His obituary was written by Vint Cerf and published as RFC 2468 in remembrance of Postel and his work. In 2012, Postel was inducted into the Internet Hall of Fame by the Internet Society. The Channel Islands' Domain Registry building was named after him in early 2016.
