Topic: Computing (Page 26)

You are looking at all articles with the topic "Computing". We found 481 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 Minivac 601

🔗 Computing

The Minivac 601 Digital Computer Kit was an electromechanical digital computer system created by information theory pioneer Claude Shannon as an educational toy built around digital circuits.

Discussed on

🔗 The Magic SysRq key

🔗 Computing 🔗 Computing/Software 🔗 Linux

The magic SysRq key is a key combination understood by the Linux kernel, which allows the user to perform various low-level commands regardless of the system's state. It is often used to recover from freezes, or to reboot a computer without corrupting the filesystem. Its effect is similar to the computer's hardware reset button (or power switch) but with many more options and much more control.

This key combination provides access to powerful features for software development and disaster recovery. In this sense, it can be considered a form of escape sequence. Principal among the offered commands are means to forcibly unmount file systems, kill processes, recover keyboard state, and write unwritten data to disk. With respect to these tasks, this feature serves as a tool of last resort.
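These commands can also be sent from user space by writing the corresponding command letter to /proc/sysrq-trigger, provided the feature is enabled via /proc/sys/kernel/sysrq and the caller is root. Below is a minimal Python sketch of the classic "REISUB" last-resort sequence; the letters are the kernel's standard ones, but the final 'b' (reboot) is destructive, so treat this as an illustration rather than something to run casually.

    # Minimal sketch: drive the magic SysRq interface from user space.
    # Requires root and a kernel built with CONFIG_MAGIC_SYSRQ; enable
    # all functions first, e.g.: echo 1 > /proc/sys/kernel/sysrq
    import time

    def sysrq(letter: str) -> None:
        """Send one SysRq command letter to the kernel."""
        with open("/proc/sysrq-trigger", "w") as f:
            f.write(letter)

    def emergency_reboot(delay: float = 2.0) -> None:
        """The classic REISUB sequence: unRaw the keyboard, sigtErm all
        processes, sigkIll all processes, Sync disks, remount read-only
        (U), then reBoot."""
        for letter in "reisub":
            sysrq(letter)
            time.sleep(delay)  # give each step time to take effect

    if __name__ == "__main__":
        sysrq("s")  # harmless demonstration: just sync dirty buffers to disk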

The magic SysRq key cannot work under certain conditions, such as a kernel panic or a hardware failure preventing the kernel from running properly.

Discussed on

🔗 Cell (microprocessor)

🔗 Computing 🔗 Computer science 🔗 Brands

Cell is a multi-core microprocessor microarchitecture that combines a general-purpose PowerPC core of modest performance with streamlined coprocessing elements which greatly accelerate multimedia and vector processing applications, as well as many other forms of dedicated computation.

It was developed by Sony, Toshiba, and IBM, an alliance known as "STI". The architectural design and first implementation were carried out at the STI Design Center in Austin, Texas, over a four-year period beginning in March 2001, on a budget Sony reported as approaching US$400 million. Cell is shorthand for Cell Broadband Engine Architecture, commonly abbreviated CBEA (for the full name) or Cell BE.

The first major commercial application of Cell was in Sony's PlayStation 3 game console, released in 2006. In May 2008, the Cell-based IBM Roadrunner supercomputer became the first system to sustain 1.0 petaflops on the TOP500 LINPACK benchmark. Mercury Computer Systems also developed designs based on the Cell.

The Cell architecture includes a memory coherence architecture that emphasizes power efficiency, prioritizes bandwidth over low latency, and favors peak computational throughput over simplicity of program code. For these reasons, Cell is widely regarded as a challenging environment for software development. IBM provides a Linux-based development platform to help developers program for Cell chips.

Discussed on

🔗 Intelink

🔗 Computing 🔗 Computing/Networking

Intelink is a group of secure intranets used by the United States Intelligence Community. The first Intelink network was established in 1994 to take advantage of Internet technologies (though not connected to the public Internet) and services to promote intelligence dissemination and business workflow. Since then it has become an essential capability for the US intelligence community and its partners to share information, collaborate across agencies, and conduct business. Intelink refers to the web environment on protected top secret, secret, and unclassified networks. One of the key features of Intelink is Intellipedia, an online system for collaborative data sharing based on MediaWiki. Intelink uses WordPress as the basis of its blogging service.

Discussed on

🔗 Five Minute Rule

🔗 Computing 🔗 Computer science

In computer science, the five-minute rule is a rule of thumb for deciding whether a data item should be kept in memory, or stored on disk and read back into memory when required. It was first formulated by Jim Gray and Gianfranco Putzolu in 1985, and then subsequently revised in 1997 and 2007 to reflect changes in the relative cost and performance of memory and persistent storage.

The rule is as follows:

The 5-minute random rule: cache randomly accessed disk pages that are re-used every 5 minutes or less.

Gray also issued a counterpart one-minute rule for sequential access:

The 1-minute rule: cache sequentially accessed disk pages that are re-used every 1 minute or less.

Although the 5-minute rule was invented in the realm of databases, it has also been applied elsewhere, for example, in Network File System cache capacity planning.

The original 5-minute rule was derived from the following cost-benefit computation:

BreakEvenIntervalInSeconds = (PagesPerMBofRAM / AccessesPerSecondPerDisk) × (PricePerDiskDrive / PricePerMBofRAM)
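As a worked example, the formula is easy to evaluate directly. The constants below are illustrative assumptions roughly in the spirit of mid-1980s hardware, not the exact figures from Gray and Putzolu's papers:

    # Worked example of the break-even computation. All constants are
    # illustrative assumptions, chosen to be loosely period-appropriate.
    pages_per_mb_of_ram = 1024            # 1 KB pages
    accesses_per_second_per_disk = 15     # random I/Os per second per drive
    price_per_disk_drive = 15_000.0       # dollars
    price_per_mb_of_ram = 5_000.0         # dollars

    break_even_seconds = (
        (pages_per_mb_of_ram / accesses_per_second_per_disk)
        * (price_per_disk_drive / price_per_mb_of_ram)
    )
    print(f"Break-even interval: {break_even_seconds / 60:.1f} minutes")
    # -> Break-even interval: 3.4 minutes, i.e. roughly "five minutes"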

Applying it to 2007 data yields an interval of approximately 90 minutes for magnetic-disk-to-DRAM caching, 15 minutes for SSD-to-DRAM caching, and 2¼ hours for disk-to-SSD caching. The disk-to-DRAM interval thus fell somewhat short of the "five-hour rule" that Gray and Putzolu had anticipated in 1987 for RAM and disks in 2007.

According to calculations by NetApp engineer David Dale as reported in The Register, the figures for disk-to-DRAM caching in 2008 were as follows: "The 50KB page break-even was five minutes, the 4KB one was one hour and the 1KB one was five hours. There needed to be a 50-fold increase in page size to cache for break-even at five minutes." Regarding disk-to-SSD caching in 2010, the same source reported that "A 250KB page break even with SLC was five minutes, but five hours with a 4KB page size. It was five minutes with a 625KB page size with MLC flash and 13 hours with a 4KB MLC page size."

In 2000, Gray and Shenoy applied a similar calculation for web page caching and concluded that a browser should "cache web pages if there is any chance they will be re-referenced within their lifetime."

Discussed on

🔗 Rules of Play

🔗 Video games 🔗 Computing 🔗 Books

Rules of Play: Game Design Fundamentals is a book on game design by Katie Salen and Eric Zimmerman, published by MIT Press.

Discussed on

🔗 Merkle Tree

🔗 Computing 🔗 Computing/Software 🔗 Computing/Computer science 🔗 Cryptography 🔗 Cryptography/Computer science

In cryptography and computer science, a hash tree or Merkle tree is a tree in which every leaf node is labelled with the cryptographic hash of a data block, and every non-leaf node is labelled with the cryptographic hash of the labels of its child nodes. Hash trees allow efficient and secure verification of the contents of large data structures. Hash trees are a generalization of hash lists and hash chains.

Demonstrating that a leaf node is a part of a given binary hash tree requires computing a number of hashes proportional to the logarithm of the number of leaf nodes of the tree; this contrasts with hash lists, where the number is proportional to the number of leaf nodes itself.
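A minimal Python sketch of this idea follows, using SHA-256 from hashlib. The plain-concatenation node encoding and odd-node promotion are simplifying assumptions; real designs (for example Certificate Transparency's) differ in detail.

    # Minimal binary Merkle tree with O(log n) inclusion proofs.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_levels(leaves: list[bytes]) -> list[list[bytes]]:
        """Return all tree levels, from hashed leaves up to the root."""
        level = [h(leaf) for leaf in leaves]
        levels = [level]
        while len(level) > 1:
            nxt = []
            for i in range(0, len(level), 2):
                if i + 1 < len(level):
                    nxt.append(h(level[i] + level[i + 1]))
                else:
                    nxt.append(level[i])  # odd node promoted unchanged
            levels.append(nxt)
            level = nxt
        return levels

    def inclusion_proof(levels: list[list[bytes]], index: int) -> list[tuple[bytes, bool]]:
        """Sibling hashes (hash, sibling_is_right) from leaf to root."""
        proof = []
        for level in levels[:-1]:
            sib = index ^ 1
            if sib < len(level):
                proof.append((level[sib], sib > index))
            index //= 2
        return proof

    def verify(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
        node = h(leaf)
        for sibling, is_right in proof:
            node = h(node + sibling) if is_right else h(sibling + node)
        return node == root

    leaves = [b"block-%d" % i for i in range(8)]
    levels = build_levels(leaves)
    root = levels[-1][0]
    proof = inclusion_proof(levels, 5)   # only 3 hashes for 8 leaves: log2(8)
    assert verify(leaves[5], proof, root)

For eight leaves the proof carries just three sibling hashes, whereas a hash list would require all eight leaf hashes.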

The concept of hash trees is named after Ralph Merkle, who patented it in 1979.

Discussed on

🔗 Daniel W. Dobberpuhl

🔗 Biography 🔗 Computing 🔗 Biography/science and academia

Daniel "Dan" William Dobberpuhl (March 25, 1945 – October 26, 2019) was an electrical engineer in the United States who led several teams of microprocessor designers.

Discussed on

🔗 Parkinson's Law of Triviality

🔗 Computing 🔗 Systems 🔗 Business 🔗 Engineering 🔗 Systems/Systems engineering

Parkinson's law of triviality is C. Northcote Parkinson's 1957 argument that members of an organization give disproportionate weight to trivial issues. Parkinson illustrates this with a fictional committee tasked with approving plans for a nuclear power plant: it spends the majority of its time on relatively minor but easy-to-grasp issues, such as what materials to use for the staff bike shed, while neglecting the proposed design of the plant itself, a far more important but far more difficult and complex task.

The law has been applied to software development and other activities. The terms bicycle-shed effect, bike-shed effect, and bike-shedding were coined as metaphors to illuminate the law of triviality; the metaphor was popularised in the Berkeley Software Distribution community by the Danish software developer Poul-Henning Kamp in 1999 and has since spread to the whole software industry.

Discussed on

🔗 Bus Factor

🔗 Computing 🔗 Business 🔗 Computing/Software

The bus factor is a measurement of the risk resulting from information and capabilities not being shared among team members, derived from the phrase "in case they get hit by a bus." It is also known as the bread truck scenario, lottery factor, truck factor, bus/truck number, or lorry factor.

The concept is similar to the much older idea of key person risk, but considers the consequences of losing key technical experts rather than financial or managerial executives (who are theoretically replaceable at an insurable cost). Personnel must be both key and irreplaceable to contribute to the bus factor; losing a replaceable or non-key person would not result in a bus-factor effect.

The term was first applied to software development, where a team member might create critical components by crafting code that performs well but is unavailable to other team members, such as work that is undocumented, never shared, encrypted, obfuscated, unpublished, or otherwise incomprehensible to others. A key component would then be effectively lost as a direct consequence of that team member's absence, which is what makes the member key. If this component was key to the project's advancement, the project would stall.
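One common way to estimate a project's bus factor from version-control history, a greedy heuristic drawn from the research literature rather than from this article, is to repeatedly remove the author who owns the most files until more than half of the files have no surviving author. The sketch below uses hypothetical file-to-author data:

    # Greedy bus-factor estimate over hypothetical ownership data.
    from collections import Counter

    file_authors = {
        "parser.py": {"alice"},
        "lexer.py":  {"alice"},
        "api.py":    {"alice", "bob"},
        "db.py":     {"bob"},
        "deploy.sh": {"carol"},
    }

    def bus_factor(ownership: dict[str, set[str]]) -> int:
        files = {f: set(authors) for f, authors in ownership.items()}
        total = len(files)
        removed = 0
        # A file is "orphaned" once all of its authors are gone.
        while sum(1 for authors in files.values() if not authors) <= total / 2:
            counts = Counter(a for authors in files.values() for a in authors)
            if not counts:
                break  # defensive: no authors left (or empty input)
            top_author, _ = counts.most_common(1)[0]
            for authors in files.values():
                authors.discard(top_author)
            removed += 1
        return removed

    print(bus_factor(file_authors))  # -> 2: losing alice and bob orphans 4 of 5 files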

Discussed on