Topic: Computing (Page 7)

You are looking at all articles with the topic "Computing". We found 481 matches.


🔗 Motorola 6800

🔗 Computing 🔗 Computing/Computer hardware

The 6800 ("sixty-eight hundred") is an 8-bit microprocessor designed and first manufactured by Motorola in 1974. The MC6800 microprocessor was part of the M6800 Microcomputer System that also included serial and parallel interface ICs, RAM, ROM and other support chips. A significant design feature was that the M6800 family of ICs required only a single five-volt power supply at a time when most other microprocessors required three voltages. The M6800 Microcomputer System was announced in March 1974 and was in full production by the end of that year.

The 6800 has a 16-bit address bus that can directly access 64 KB of memory and an 8-bit bi-directional data bus. It has 72 instructions with seven addressing modes for a total of 197 opcodes. The original MC6800 could have a clock frequency of up to 1 MHz. Later versions had a maximum clock frequency of 2 MHz.
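As a rough illustration of the address-space arithmetic above, the following sketch (plain Python, not tied to any 6800 toolchain) models a flat 64 KB memory reached through a 16-bit address bus with one byte per 8-bit data-bus transfer; the example address is only for flavor.

```python
# Minimal sketch of the 6800's memory model: a flat 64 KB address space
# reached over a 16-bit address bus, one byte per 8-bit data-bus transfer.
ADDRESS_BITS = 16
MEMORY_SIZE = 1 << ADDRESS_BITS            # 2**16 = 65,536 bytes = 64 KB

memory = bytearray(MEMORY_SIZE)

def read_byte(address: int) -> int:
    """Return the byte at `address`, masked to the 16-bit bus width."""
    return memory[address & (MEMORY_SIZE - 1)]

def write_byte(address: int, value: int) -> None:
    """Store an 8-bit value, masking both address and data to the bus widths."""
    memory[address & (MEMORY_SIZE - 1)] = value & 0xFF

write_byte(0xFFFE, 0x12)                   # an address near the top of memory
print(MEMORY_SIZE // 1024, "KB", hex(read_byte(0xFFFE)))
```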

In addition to the ICs, Motorola also provided a complete assembly language development system. The customer could use the software on a remote timeshare computer or on an in-house minicomputer system. The Motorola EXORciser was a desktop computer built with the M6800 ICs that could be used for prototyping and debugging new designs. An expansive documentation package included datasheets on all ICs, two assembly language programming manuals, and a 700-page application manual that showed how to design a point-of-sale terminal (a computerized cash register) around the 6800.

The 6800 was popular in computer peripherals, test equipment applications and point-of-sale terminals. It also found use in arcade games and pinball machines. The MC6802, introduced in 1977, included 128 bytes of RAM and an internal clock oscillator on chip. The MC6801 and MC6805 included RAM, ROM and I/O on a single chip and were popular in automotive applications. The Motorola 6809 was an updated compatible design.

Discussed on

🔗 Mach kernel

🔗 Apple Inc./Macintosh 🔗 Apple Inc. 🔗 Computing 🔗 Apple Inc./iOS

Mach is a kernel developed at Carnegie Mellon University to support operating system research, primarily distributed and parallel computing. Mach is often mentioned as one of the earliest examples of a microkernel. However, not all versions of Mach are microkernels. Mach's derivatives are the basis of the operating system kernel in GNU Hurd and of Apple's XNU kernel used in macOS, iOS, iPadOS, tvOS, and watchOS.

The project at Carnegie Mellon ran from 1985 to 1994, ending with Mach 3.0, which is a true microkernel. Mach was developed as a replacement for the kernel in the BSD version of Unix, so no new operating system would have to be designed around it. Mach and its derivatives exist within a number of commercial operating systems, including all of those built on the XNU operating system kernel, which incorporates an earlier, non-microkernel Mach as a major component. The Mach virtual memory management system was also adopted in 4.4BSD by the BSD developers at CSRG, and appears in modern BSD-derived Unix systems such as FreeBSD.

Mach is the logical successor to Carnegie Mellon's Accent kernel. The lead developer on the Mach project, Richard Rashid, has worked at Microsoft since 1991 in various senior positions in the Microsoft Research division. Another of the original Mach developers, Avie Tevanian, was formerly head of software at NeXT and then Chief Software Technology Officer at Apple Inc. until March 2006.

Discussed on

🔗 Google Sidewiki

🔗 Computing 🔗 Google

Google Sidewiki was a web annotation tool from Google, launched in September 2009 and discontinued in December 2011. Sidewiki was a browser extension that allowed anyone logged into a Google Account to make and view comments about a given website in a sidebar. Despite the name, the tool was not a collaborative wiki, though the comments were editable by the author.

Discussed on

🔗 Reversible computing

🔗 Technology 🔗 Computing 🔗 Computer science

Reversible computing is a model of computing in which the computational process is, to some extent, time-reversible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the mapping from (nonzero-probability) states to their successors must be one-to-one. Reversible computing is a form of unconventional computing.
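The one-to-one condition above can be checked mechanically for small state spaces. The sketch below is a toy illustration (not drawn from the article) that tests two 2-bit transition functions: a CNOT-style update, which preserves information and is reversible, and an AND-style update, which collapses distinct inputs and is not.

```python
from itertools import product

def is_reversible(step, states):
    """True if `step` maps distinct states to distinct successors (one-to-one)."""
    successors = [step(s) for s in states]
    return len(set(successors)) == len(successors)

states = list(product((0, 1), repeat=2))      # all 2-bit states

# CNOT-style update: (a, b) -> (a, a XOR b). The input can be recovered.
cnot = lambda s: (s[0], s[0] ^ s[1])

# AND-style update: (a, b) -> (a, a AND b). (0,0) and (0,1) both map to (0,0),
# so the history is erased and the step cannot be run backwards.
erase = lambda s: (s[0], s[0] & s[1])

print(is_reversible(cnot, states))   # True  -> a valid reversible step
print(is_reversible(erase, states))  # False -> not time-reversible
```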

Discussed on

🔗 RTX2010 radiation-hardened microprocessor

🔗 Computing 🔗 Computing/Computer hardware

The RTX2010, manufactured by Intersil, is a radiation-hardened stack machine microprocessor that has been used in numerous spacecraft.
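As a toy illustration of the stack-machine execution model (operands live on a stack rather than in named registers), here is a minimal postfix evaluator; the opcodes are invented for this sketch and are not the RTX2010's actual instruction set.

```python
def run(program):
    """Execute a list of (opcode, *args) tuples on an operand stack."""
    stack = []
    for op, *arg in program:
        if op == "push":
            stack.append(arg[0])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "dup":
            stack.append(stack[-1])
    return stack

# (3 + 4) duplicated, then summed: classic postfix evaluation.
print(run([("push", 3), ("push", 4), ("add",), ("dup",), ("add",)]))  # [14]
```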

Discussed on

🔗 ZMODEM

🔗 Computing 🔗 Telecommunications 🔗 Computing/Software 🔗 Computing/Networking

ZMODEM is a file transfer protocol developed by Chuck Forsberg in 1986, in a project funded by Telenet in order to improve file transfers on their X.25 network. In addition to dramatically improved performance compared to older protocols, ZMODEM also offered restartable transfers, auto-start by the sender, an expanded 32-bit CRC, and control character quoting supporting 8-bit clean transfers, allowing it to be used on networks that would not pass control characters.

In contrast to most transfer protocols developed for bulletin board systems (BBSs), ZMODEM was not directly based on, nor compatible with, the seminal XMODEM. Many variants of XMODEM had been developed in order to address one or more of its shortcomings, and most remained backward compatible and would successfully complete transfers with "classic" XMODEM implementations.

ZMODEM eschewed backward compatibility in favor of producing a radically improved protocol. It performed as well as or better than any of the high-performance varieties of XMODEM, worked over links that previously did not work at all, such as X.25, or that had performed poorly, such as Telebit modems, and included useful features found in few or no other protocols. ZMODEM became extremely popular on bulletin board systems in the early 1990s, eventually becoming as widespread a standard as XMODEM had been before it.
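As a loose illustration of two of the features mentioned earlier in this entry, the sketch below appends a 32-bit CRC to a data block and escapes control characters so the payload survives links that consume them. The escape rule shown is generic and is not claimed to match ZMODEM's exact ZDLE quoting scheme.

```python
import binascii

ESC = 0x18                                   # escape byte chosen for this sketch

def crc32_frame(payload: bytes) -> bytes:
    """Append a 32-bit CRC so the receiver can detect corruption."""
    return payload + binascii.crc32(payload).to_bytes(4, "little")

def quote_control_chars(payload: bytes) -> bytes:
    """Escape control characters so the stream stays 8-bit clean."""
    out = bytearray()
    for b in payload:
        if b < 0x20 or b == ESC:             # control characters get escaped
            out += bytes([ESC, b ^ 0x40])    # flip bit 6, as many protocols do
        else:
            out.append(b)
    return bytes(out)

frame = crc32_frame(b"hello\x11world")       # 0x11 is XON, often eaten by links
print(quote_control_chars(frame).hex())
```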

Discussed on

🔗 Asynchronous (Clockless) CPU

🔗 Computing 🔗 Electronics 🔗 Electrical engineering

An asynchronous circuit, or self-timed circuit, is a sequential digital logic circuit that is not governed by a clock circuit or global clock signal. Instead, it often uses signals that indicate completion of instructions and operations, specified by simple data transfer protocols. This type of circuit is contrasted with synchronous circuits, in which changes to the signal values in the circuit are triggered by repetitive pulses called a clock signal. Most digital devices today use synchronous circuits. However, asynchronous circuits have the potential to be faster, and may also have advantages in lower power consumption, lower electromagnetic interference, and better modularity in large systems. Asynchronous circuits are an active area of research in digital logic design.
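A toy simulation of the completion-signal idea above: a producer and a consumer coordinate through an explicit request/acknowledge handshake rather than a shared clock. This is a greatly simplified software sketch, not a circuit-level model.

```python
import queue
import threading

data_req = queue.Queue(maxsize=1)   # "request" channel carrying the datum
ack = queue.Queue(maxsize=1)        # "acknowledge" channel back to the producer

def producer():
    for value in range(3):
        data_req.put(value)         # assert the request together with the data
        ack.get()                   # block until the consumer signals completion

def consumer():
    for _ in range(3):
        value = data_req.get()      # latch the data when the request arrives
        print("processed", value * 2)
        ack.put(True)               # signal completion; no clock involved

threading.Thread(target=producer).start()
consumer()
```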

Discussed on

🔗 Timsort: Fastest Sorting algorithm

🔗 Computing 🔗 Computer science 🔗 Computing/Software 🔗 Computing/Computer science

Timsort is a hybrid stable sorting algorithm, derived from merge sort and insertion sort, designed to perform well on many kinds of real-world data. It was implemented by Tim Peters in 2002 for use in the Python programming language. The algorithm finds subsequences of the data that are already ordered (runs) and uses them to sort the remainder more efficiently. This is done by merging runs until certain criteria are fulfilled. Timsort has been Python's standard sorting algorithm since version 2.3. It is also used to sort arrays of non-primitive type in Java SE 7, on the Android platform, in GNU Octave, in V8, and in Swift.

It uses techniques from Peter McIlroy's 1993 paper "Optimistic Sorting and Information Theoretic Complexity".
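A much-simplified sketch of the run-based strategy described above: find the non-decreasing runs that already exist in the data, then merge them pairwise. It deliberately omits Timsort's minimum run length, descending-run handling, galloping mode, and merge-stack invariants.

```python
def find_runs(data):
    """Split `data` into maximal non-decreasing runs."""
    runs, start = [], 0
    for i in range(1, len(data) + 1):
        if i == len(data) or data[i] < data[i - 1]:
            runs.append(data[start:i])
            start = i
    return runs

def merge(left, right):
    """Standard two-way merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def run_merge_sort(data):
    runs = find_runs(list(data))
    while len(runs) > 1:                       # repeatedly merge adjacent runs
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0] if runs else []

print(run_merge_sort([3, 4, 9, 1, 2, 8, 5, 6, 7]))  # ordered pieces get reused
```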

Discussed on

🔗 Transmeta Crusoe

🔗 Computing 🔗 Computing/Computer hardware

The Crusoe is a family of x86-compatible microprocessors developed by Transmeta and introduced in 2000. Crusoe was notable for its method of achieving x86 compatibility. Instead of the instruction set architecture being implemented in hardware, or translated by specialized hardware, the Crusoe runs a software abstraction layer, or a virtual machine, known as the Code Morphing Software (CMS). The CMS translates machine code instructions received from programs into native instructions for the microprocessor. In this way, the Crusoe can emulate other instruction set architectures (ISAs).

This is used to allow the microprocessors to emulate the Intel x86 instruction set. In theory, it is possible for the CMS to be modified to emulate other ISAs. Transmeta demonstrated Crusoe executing Java bytecode by translating the bytecodes into instructions in its native instruction set. The addition of an abstraction layer between the x86 instruction stream and the hardware means that the hardware architecture can change without breaking compatibility, just by modifying the CMS. For example, the Transmeta Efficeon, a second-generation Transmeta design, has a 256-bit-wide VLIW core versus the 128-bit core of the Crusoe.
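A toy sketch of the translate-and-cache idea behind code morphing: guest code blocks are translated once into host-executable form, then reused from a cache on later executions. The miniature guest instruction set here is invented for illustration; the real CMS translates x86 instructions into Transmeta's native VLIW code, not Python.

```python
translation_cache = {}   # guest block -> "native" (here: Python) function

def translate(block):
    """Compile a tuple of guest instructions into a host-callable function."""
    def native(regs):
        for op, dst, src in block:
            if op == "mov":
                regs[dst] = src
            elif op == "add":
                regs[dst] += regs[src]
        return regs
    return native

def execute(block, regs):
    if block not in translation_cache:          # translate once...
        translation_cache[block] = translate(block)
    return translation_cache[block](regs)       # ...then reuse the cached code

guest_block = (("mov", "a", 5), ("mov", "b", 7), ("add", "a", "b"))
print(execute(guest_block, {}))                 # {'a': 12, 'b': 7}
```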

Crusoe performs in software some of the functionality traditionally implemented in hardware (e.g. instruction re-ordering), resulting in simpler hardware with fewer transistors. The relative simplicity of the hardware means that Crusoe consumes less power (and therefore generates less heat) than other x86-compatible microprocessors running at the same frequency.

A 700 MHz Crusoe ran x86 programs at the speed of a 500 MHz Pentium III x86 processor, although the Crusoe processor was smaller and cheaper than the corresponding Intel processor.

Discussed on

🔗 Fourth-Generation Programming Language

🔗 Technology 🔗 Computing 🔗 Computer science 🔗 Systems 🔗 Business 🔗 Computing/Software 🔗 Systems/Software engineering

A fourth-generation programming language (4GL) is any computer programming language that belongs to a class of languages envisioned as an advancement upon third-generation programming languages (3GL). Each of the programming language generations aims to provide a higher level of abstraction of the internal computer hardware details, making the language more programmer-friendly, powerful, and versatile. While the definition of 4GL has changed over time, it can be typified by operating more with large collections of information at once rather than focusing on just bits and bytes. Languages claimed to be 4GL may include support for database management, report generation, mathematical optimization, GUI development, or web development. Some researchers state that 4GLs are a subset of domain-specific languages.
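As a small, informal contrast of the abstraction levels described above (the table and data are invented for illustration): a 3GL-style explicit loop versus a declarative, database-oriented statement closer in spirit to what 4GLs aimed at.

```python
import sqlite3

orders = [("books", 40), ("books", 25), ("games", 60)]

# 3GL style: the programmer spells out the iteration and the accumulation.
totals = {}
for category, amount in orders:
    totals[category] = totals.get(category, 0) + amount
print(totals)

# 4GL-like style: one declarative statement describes *what* result is wanted,
# and the system decides how to iterate over the collection.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (category TEXT, amount INTEGER)")
db.executemany("INSERT INTO orders VALUES (?, ?)", orders)
print(db.execute(
    "SELECT category, SUM(amount) FROM orders GROUP BY category").fetchall())
```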

The concept of 4GL was developed from the 1970s through the 1990s, overlapping most of the development of 3GL, with 4GLs identified as "non-procedural" or "program-generating" languages, contrasted with 3GLs being algorithmic or procedural languages. While 3GLs like C, C++, C#, Java, and JavaScript remain popular for a wide variety of uses, 4GLs as originally defined found uses focused on databases, reports, and websites. Some advanced 3GLs like Python, Ruby, and Perl combine some 4GL abilities within a general-purpose 3GL environment, and libraries with 4GL-like features have been developed as add-ons for most popular 3GLs, producing languages that are a mix of 3GL and 4GL, blurring the distinction.

In the 1980s and 1990s, there were efforts to develop fifth-generation programming languages (5GL).

Discussed on