Topic: Computer science (Page 11)

You are looking at all articles with the topic "Computer science". We found 123 matches.

πŸ”— Human-based computation games

πŸ”— Video games πŸ”— Computer science πŸ”— Science

A human-based computation game or game with a purpose (GWAP) is a human-based computation technique of outsourcing steps within a computational process to humans in an entertaining way (gamification).

Luis von Ahn first proposed the idea of "human algorithm games", or games with a purpose (GWAPs), in order to harness human time and energy for addressing problems that computers cannot yet tackle on their own. He believes that human intellect is an important resource and contribution to the enhancement of computer processing and human–computer interaction. He argues that games constitute a general mechanism for using brainpower to solve open computational problems. In this technique, human brains are compared to processors in a distributed system, each performing a small task of a massive computation. However, humans require an incentive to become part of a collective computation. Online games are used as a means to encourage participation in the process.

The tasks presented in these games are usually trivial for humans but difficult for computers. These tasks include labeling images, transcribing ancient texts, activities that rely on common sense or human experience, and more. Human-based computation games motivate people through entertainment rather than through an interest in solving computational problems, which makes GWAPs appealing to a larger audience. GWAPs can be used to help build the semantic web, annotate and classify collected data, crowdsource general knowledge, and improve other general computer processes. GWAPs have a vast range of applications in a variety of areas such as security, computer vision, Internet accessibility, adult content filtering, and Internet search. In applications such as these, games with a purpose have lowered the cost of annotating data and increased the level of human participation.

πŸ”— Andreas Raab passed away

πŸ”— Computer science

The Squeak programming language is a dialect of Smalltalk. It is object-oriented, class-based, and reflective.

It was derived directly from Smalltalk-80 by a group at Apple Computer that included some of the original Smalltalk-80 developers. Its development was continued by the same group at Walt Disney Imagineering, where it was intended for use in internal Disney projects. The group later moved on, supported in turn by HP Labs, SAP Labs, and most recently Y Combinator.

Squeak is cross-platform. Programs produced on one platform run bit-identically on all other platforms, and versions are available for many platforms, including Windows, macOS, and Linux. The Squeak system includes code for generating a new version of the virtual machine (VM) on which it runs, as well as a VM simulator written in Squeak itself. For these reasons, it is easily ported.

πŸ”— Pareto Efficiency

πŸ”— Computer science πŸ”— Economics πŸ”— Engineering πŸ”— Gender Studies

Pareto efficiency or Pareto optimality is a situation where no action or allocation is available that makes one individual better off without making another worse off. The concept is named after Vilfredo Pareto (1848–1923), Italian civil engineer and economist, who used the concept in his studies of economic efficiency and income distribution. The following three concepts are closely related:

  • Given an initial situation, a Pareto improvement is a new situation where some agents will gain, and no agents will lose.
  • A situation is called Pareto-dominated or Pareto-inefficient if there is some possible Pareto improvement that has not been made.
  • A situation is called Pareto-optimal or Pareto-efficient if no change could lead to improved satisfaction for some agent without some other agent losing or, equivalently, if there is no scope for further Pareto improvement (in other words, the situation is not Pareto-dominated).

The Pareto front (also called Pareto frontier or Pareto set) is the set of all Pareto-efficient situations.

Pareto originally used the word "optimal" for the concept, but because it describes a situation in which a limited number of people are made better off under finite resources, and it does not take equality or social well-being into account, the concept is in effect a definition of, and is better captured by, "efficiency".

In addition to the context of efficiency in allocation, the concept of Pareto efficiency also arises in the context of efficiency in production vs. x-inefficiency: a set of outputs of goods is Pareto-efficient if there is no feasible re-allocation of productive inputs such that output of one product increases while the outputs of all other goods either increase or remain the same.

Pareto efficiency is measured along the production possibility frontier (PPF), which is a graphical representation of all the possible options of output for two products that can be produced using all factors of production.

Besides economics, the notion of Pareto efficiency has been applied to the selection of alternatives in engineering and biology. Each option is first assessed under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform any option in the subset. It is a statement of the impossibility of improving one variable without harming other variables in the subject of multi-objective optimization (also termed Pareto optimization).
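
As a quick illustration of how the Pareto-efficient subset of a finite set of options can be identified, here is a minimal sketch in Python; it assumes each option is scored on several criteria where higher is better, and the function names and example data are purely illustrative.

def dominates(a, b):
    # a Pareto-dominates b if it is at least as good on every criterion
    # (higher is better here) and strictly better on at least one.
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(options):
    # Keep exactly the options that no other option dominates.
    return [a for a in options if not any(dominates(b, a) for b in options)]

# Example: four candidate designs scored on (performance, reliability).
designs = [(3, 5), (4, 4), (2, 6), (3, 4)]
print(pareto_front(designs))   # [(3, 5), (4, 4), (2, 6)]; (3, 4) is dominated

An option never dominates itself (domination requires a strict improvement somewhere), so the inner loop needs no self-exclusion.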

πŸ”— Ron Graham has left us

πŸ”— Biography πŸ”— Computer science πŸ”— Mathematics

Ronald Lewis Graham (October 31, 1935 – July 6, 2020) was an American mathematician credited by the American Mathematical Society as being "one of the principal architects of the rapid development worldwide of discrete mathematics in recent years". He did important work in scheduling theory, computational geometry, Ramsey theory, and quasi-randomness.

He was Chief Scientist at the California Institute for Telecommunications and Information Technology (also known as Cal-(IT)2) and the Irwin and Joan Jacobs Professor in Computer Science and Engineering at the University of California, San Diego (UCSD).

πŸ”— Pi Calculus

πŸ”— Computing πŸ”— Computer science

In theoretical computer science, the Ο€-calculus (or pi-calculus) is a process calculus. The Ο€-calculus allows channel names to be communicated along the channels themselves, and in this way it is able to describe concurrent computations whose network configuration may change during the computation.

The Ο€-calculus is simple: it has few terms, and so it is a small yet expressive language. Functional programs can be encoded into the Ο€-calculus, and the encoding emphasises the dialogue nature of computation, drawing connections with game semantics. Extensions of the Ο€-calculus, such as the spi calculus and applied Ο€, have been successful in reasoning about cryptographic protocols. Besides its original use in describing concurrent systems, the Ο€-calculus has also been used to reason about business processes and molecular biology.
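
To give a flavour of how channel names travel along channels, the basic communication (reduction) step can be written, in one common notation, as

\overline{x}\langle z \rangle.P \;\mid\; x(y).Q \;\longrightarrow\; P \;\mid\; Q\{z/y\}

Here the left-hand process sends the name z on channel x, the right-hand process receives it as y, and the continuation Q goes on to use z wherever it referred to y. Since z may itself be a channel on which Q subsequently communicates, the network configuration can change as the computation proceeds. (P and Q are placeholder process terms; the exact notation varies between presentations.)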

πŸ”— Succinct Data Structures

πŸ”— Computer science

In computer science, a succinct data structure is a data structure which uses an amount of space that is "close" to the information-theoretic lower bound, but (unlike other compressed representations) still allows for efficient query operations. The concept was originally introduced by Jacobson to encode bit vectors, (unlabeled) trees, and planar graphs. Unlike general lossless data compression algorithms, succinct data structures can be used in place, without being decompressed first. A related notion is that of a compressed data structure, in which the size of the data structure depends upon the particular data being represented.

Suppose that Z is the information-theoretic optimal number of bits needed to store some data. A representation of this data is called:

  • implicit if it takes Z + O(1) bits of space,
  • succinct if it takes Z + o(Z) bits of space, and
  • compact if it takes O(Z) bits of space.

For example, a data structure that uses 2Z bits of storage is compact, Z + √Z bits is succinct, Z + lg Z bits is also succinct, and Z + 3 bits is implicit.

Implicit structures are thus usually reduced to storing information using some permutation of the input data; the most well-known example of this is the heap.
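
As a concrete, if simplified, illustration of the idea, the sketch below stores a bit vector together with one cumulative popcount per fixed-size block, so a rank query (the number of 1 bits before a position) only has to scan within a single block. This is a toy version of the rank structures used with succinct bit vectors; the block size and counter representation here are chosen for clarity, not to meet the o(Z) overhead bound that a real implementation (with a second level of counters and lookup tables) achieves.

class RankBitVector:
    # Toy rank structure: the raw bits plus one cumulative count per block.
    def __init__(self, bits, block_size=64):
        self.bits = bits                      # sequence of 0/1 values
        self.block_size = block_size
        self.block_rank = []                  # number of 1s strictly before each block
        count = 0
        for i, b in enumerate(bits):
            if i % block_size == 0:
                self.block_rank.append(count)
            count += b
        self.block_rank.append(count)         # sentinel so rank1(len(bits)) works

    def rank1(self, pos):
        # Number of 1 bits in bits[0:pos], for 0 <= pos <= len(bits).
        block = pos // self.block_size
        start = block * self.block_size
        return self.block_rank[block] + sum(self.bits[start:pos])

bv = RankBitVector([1, 0, 1, 1, 0, 0, 1, 0] * 100, block_size=8)
print(bv.rank1(10))   # 1s among the first ten bits -> 5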

πŸ”— Expert System

πŸ”— Computer science πŸ”— Systems πŸ”— Human–Computer Interaction

In artificial intelligence, an expert system is a computer system emulating the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
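
To make the split between knowledge base and inference engine concrete, here is a minimal forward-chaining sketch in Python; the facts and rules are invented for illustration, and real expert-system shells add conflict resolution, explanation facilities, and much richer rule languages.

# Knowledge base: known facts plus if-then rules (conditions -> conclusion).
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Inference engine: forward chaining, repeatedly firing any rule whose
# conditions are all satisfied, until no new facts can be deduced.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # now also contains possible_flu and recommend_rest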

πŸ”— Pareto Front

πŸ”— Computer science πŸ”— Economics πŸ”— Engineering

In multi-objective optimization, the Pareto front (also called Pareto frontier or Pareto curve) is the set of all Pareto efficient solutions. The concept is widely used in engineering. It allows the designer to restrict attention to the set of efficient choices, and to make tradeoffs within this set, rather than considering the full range of every parameter.

πŸ”— Computer

πŸ”— Technology πŸ”— Video games πŸ”— Computing πŸ”— Computer science πŸ”— Computing/Computer hardware πŸ”— Systems πŸ”— Computing/Software πŸ”— Engineering πŸ”— Home Living

A computer is a machine that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs. These programs enable computers to perform an extremely wide range of tasks. A "complete" computer, including the hardware, the operating system (main software), and the peripheral equipment required and used for "full" operation, can be referred to as a computer system. The term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster.

Computers are used as control systems for a wide variety of industrial and consumer devices. This includes simple special purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design, and also general purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.

Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The first semiconductor transistors in the late 1940s were followed by the silicon-based MOSFET (MOS transistor) and monolithic integrated circuit (IC) chip technologies in the late 1950s, leading to the microprocessor and the microcomputer revolution in the 1970s. The speed, power and versatility of computers have been increasing dramatically ever since then, with MOS transistor counts increasing at a rapid pace (as predicted by Moore's law), leading to the Digital Revolution during the late 20th to early 21st centuries.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU) in the form of a metal-oxide-semiconductor (MOS) microprocessor, along with some type of computer memory, typically MOS semiconductor memory chips. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices (keyboards, mice, joystick, etc.), output devices (monitor screens, printers, etc.), and input/output devices that perform both functions (e.g., the 2000s-era touchscreen). Peripheral devices allow information to be retrieved from an external source and they enable the result of operations to be saved and retrieved.

πŸ”— Floyd-Steinberg Dithering Algorithm

πŸ”— Computer science

Floyd–Steinberg dithering is an image dithering algorithm first published in 1976 by Robert W. Floyd and Louis Steinberg. It is commonly used by image manipulation software, for example when an image is converted into the GIF format, which is restricted to a maximum of 256 colors.

The algorithm achieves dithering using error diffusion, meaning it pushes (adds) the residual quantization error of a pixel onto its neighboring pixels, to be dealt with later. It spreads the debt out according to the distribution (shown as a map of the neighboring pixels):

                   *      7/16     …
   …      3/16    5/16    1/16     …

The pixel indicated with a star (*) is the pixel currently being scanned, and the blank entries correspond to the previously scanned pixels. The algorithm scans the image from left to right, top to bottom, quantizing pixel values one by one. Each time, the quantization error is transferred to the neighboring pixels, without affecting the pixels that have already been quantized. Hence, if a number of pixels have been rounded downwards, it becomes more likely that the next pixel is rounded upwards, so that on average the quantization error is close to zero.

The diffusion coefficients have the property that if the original pixel values are exactly halfway between the nearest available colors, the dithered result is a checkerboard pattern. For example, 50% grey data could be dithered as a black-and-white checkerboard pattern. For optimal dithering, the quantization errors should be accumulated with sufficient accuracy to prevent rounding errors from affecting the result.

In some implementations, the horizontal direction of scan alternates between lines; this is called "serpentine scanning" or boustrophedon transform dithering.

The following pseudocode implements the algorithm described above. It works for any approximately linear encoding of pixel values, such as 8-bit integers, 16-bit integers, or real numbers in the range [0,1].

for each y from top to bottom do
    for each x from left to right do
        oldpixel := pixel[x][y]
        newpixel := find_closest_palette_color(oldpixel)
        pixel[x][y] := newpixel
        quant_error := oldpixel - newpixel
        // diffuse the error to the not-yet-scanned neighbours
        // (bounds checks at the image edges are omitted for brevity)
        pixel[x + 1][y    ] := pixel[x + 1][y    ] + quant_error Γ— 7 / 16
        pixel[x - 1][y + 1] := pixel[x - 1][y + 1] + quant_error Γ— 3 / 16
        pixel[x    ][y + 1] := pixel[x    ][y + 1] + quant_error Γ— 5 / 16
        pixel[x + 1][y + 1] := pixel[x + 1][y + 1] + quant_error Γ— 1 / 16

When converting 16-bit greyscale to 8-bit, find_closest_palette_color() may perform just a simple rounding, for example:

find_closest_palette_color(oldpixel) = round(oldpixel / 256)

The pseudocode can result in pixel values exceeding the valid values (such as greater than 1 in a [0,1] representation). Such values should ideally be clipped by the find_closest_palette_color() function, rather than clipping the intermediate values, since a subsequent error may bring the value back into range. However, if fixed-width integers are used, wrapping of intermediate values would cause inversion of black and white, and so should be avoided.
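
For reference, here is a minimal runnable sketch of the algorithm in Python, assuming a greyscale image stored as a list of rows of floats in [0, 1] and a two-level (black/white) palette; the helper names mirror the pseudocode above but the details are otherwise illustrative.

def find_closest_palette_color(value):
    # Two-level palette (black or white). Thresholding also acts as the
    # clipping step discussed above: out-of-range values still map to 0 or 1.
    return 0.0 if value < 0.5 else 1.0

def floyd_steinberg(image):
    # Dither a greyscale image in place; image is a list of rows of floats in [0, 1].
    height, width = len(image), len(image[0])
    for y in range(height):
        for x in range(width):
            old = image[y][x]
            new = find_closest_palette_color(old)
            image[y][x] = new
            err = old - new
            # Diffuse the error to the not-yet-scanned neighbours,
            # skipping positions that fall outside the image.
            if x + 1 < width:
                image[y][x + 1] += err * 7 / 16
            if y + 1 < height:
                if x - 1 >= 0:
                    image[y + 1][x - 1] += err * 3 / 16
                image[y + 1][x] += err * 5 / 16
                if x + 1 < width:
                    image[y + 1][x + 1] += err * 1 / 16
    return image

# A uniform 50% grey patch dithers to (roughly) a black-and-white checkerboard.
grey = [[0.5] * 8 for _ in range(4)]
for row in floyd_steinberg(grey):
    print("".join("#" if v else "." for v in row))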
