Random Articles (Page 6)
A deep dive into what people are curious about.
🔗 Backbone Cabal
The backbone cabal was an informal organization of large-site news server administrators of the worldwide distributed newsgroup-based discussion system Usenet. It existed from about 1983 at least into the 2000s.
The cabal was created in an effort to facilitate reliable propagation of new Usenet posts. While in the 1970s and 1980s many news servers operated only at night to save on the cost of long-distance communication, servers of the backbone cabal were available 24 hours a day. The administrators of these servers gained sufficient influence in the otherwise anarchic Usenet community to push through controversial changes, for instance the Great Renaming of Usenet newsgroups in 1987.
Discussed on
- "Backbone Cabal" | 2020-08-22 | 37 Upvotes 8 Comments
🔗 Kernel Embedding of Distributions
In machine learning, the kernel embedding of distributions (also called the kernel mean or mean map) comprises a class of nonparametric methods in which a probability distribution is represented as an element of a reproducing kernel Hilbert space (RKHS). A generalization of the individual data-point feature mapping done in classical kernel methods, the embedding of distributions into infinite-dimensional feature spaces can preserve all of the statistical features of arbitrary distributions, while allowing one to compare and manipulate distributions using Hilbert space operations such as inner products, distances, projections, linear transformations, and spectral analysis. This learning framework is very general and can be applied to distributions over any space Ω on which a sensible kernel function k (measuring similarity between elements of Ω) may be defined. For example, various kernels have been proposed for learning from data which are: vectors in ℝᵈ, discrete classes/categories, strings, graphs/networks, images, time series, manifolds, dynamical systems, and other structured objects. The theory behind kernel embeddings of distributions has been primarily developed by Alex Smola, Le Song, Arthur Gretton, and Bernhard Schölkopf, and recent work in the area has been surveyed in the literature.
The analysis of distributions is fundamental in machine learning and statistics, and many algorithms in these fields rely on information-theoretic approaches such as entropy, mutual information, or Kullback–Leibler divergence. However, to estimate these quantities, one must first either perform density estimation or employ sophisticated space-partitioning/bias-correction strategies which are typically infeasible for high-dimensional data. Commonly, methods for modeling complex distributions rely on parametric assumptions that may be unfounded or computationally challenging (e.g. Gaussian mixture models), while nonparametric methods like kernel density estimation (note: the smoothing kernels in this context have a different interpretation than the kernels discussed here) or characteristic function representation (via the Fourier transform of the distribution) break down in high-dimensional settings.
Methods based on the kernel embedding of distributions sidestep these problems and also possess the following advantages:
- Data may be modeled without restrictive assumptions about the form of the distributions and relationships between variables
- Intermediate density estimation is not needed
- Practitioners may specify the properties of a distribution most relevant for their problem (incorporating prior knowledge via choice of the kernel)
- If a characteristic kernel is used, then the embedding uniquely preserves all information about a distribution, while, thanks to the kernel trick, computations on the potentially infinite-dimensional RKHS can be implemented in practice as simple Gram-matrix operations
- Dimensionality-independent rates of convergence of the empirical kernel mean (estimated from samples of the distribution) to the kernel embedding of the true underlying distribution can be proven
- Learning algorithms based on this framework exhibit good generalization ability and finite-sample convergence, while often being simpler and more effective than information-theoretic methods
Thus, learning via the kernel embedding of distributions offers a principled drop-in replacement for information-theoretic approaches, and the framework not only subsumes many popular methods in machine learning and statistics as special cases but can also lead to entirely new learning algorithms.
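The workhorse computation in this framework is the empirical kernel mean, and the RKHS distance between two such means is the maximum mean discrepancy (MMD). The NumPy sketch below estimates the squared MMD between two samples; the Gaussian RBF kernel, the bandwidth value, and the simple biased estimator are illustrative assumptions on my part, not choices prescribed by the article.

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    """Gram matrix K[i, j] = k(x_i, y_j) for the Gaussian RBF kernel.

    The Gaussian kernel is characteristic, so the mean embedding
    mu_P = E_P[k(x, .)] determines the distribution P uniquely.
    """
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD: the RKHS distance
    ||mu_P - mu_Q||^2 between the empirical mean embeddings of
    samples X ~ P and Y ~ Q.  Expanding the squared norm via the
    kernel trick leaves only Gram-matrix averages.
    """
    m, n = len(X), len(Y)
    return (gaussian_gram(X, X, sigma).sum() / m ** 2
            - 2.0 * gaussian_gram(X, Y, sigma).sum() / (m * n)
            + gaussian_gram(Y, Y, sigma).sum() / n ** 2)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))  # sample from P = N(0, I)
Y = rng.normal(0.5, 1.0, size=(500, 2))  # sample from Q, shifted mean
print(mmd2(X, Y))  # clearly positive: the embeddings separate P from Q
```

Note how the kernel trick does all the work here: the squared RKHS distance expands into three Gram-matrix averages, so the infinite-dimensional embeddings are never materialized and no density is ever estimated.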
Discussed on
- "Kernel Embedding of Distributions" | 2014-02-15 | 13 Upvotes 3 Comments
🔗 The Article in the Most Languages
Discussed on
- "The Article in the Most Languages" | 2025-08-09 | 241 Upvotes 77 Comments
🔗 Assume a can opener
"Assume a can opener" is a catchphrase used to mock economists and other theorists who base their conclusions on unjustified or oversimplified assumptions.
The phrase derives from a joke which dates to at least 1970 and possibly originated with British economists. The first book mentioning it is likely Economics as a Science (1970) by Kenneth E. Boulding:
There is a story that has been going around about a physicist, a chemist, and an economist who were stranded on a desert island with no implements and a can of food. The physicist and the chemist each devised an ingenious mechanism for getting the can open; the economist merely said, "Assume we have a can opener"!
The phrase was popularized in a 1981 book and has become sufficiently well known that many writers on economic topics use it as a catchphrase without further explanation.
Discussed on
- "Assume a Can Opener" | 2023-03-17 | 10 Upvotes 1 Comments
- "Assume a can opener" | 2018-03-29 | 26 Upvotes 6 Comments
🔗 Portable soup
Portable soup was a kind of dehydrated food used in the 18th and 19th centuries. It was a precursor of meat extract and bouillon cubes, and of industrially dehydrated food. It is also known as pocket soup or veal glue. It is a cousin of the glace de viande of French cooking. It was long a staple of seamen and explorers, for it would keep for many months or even years. In this context, it was a filling and nutritious dish. Portable soup of less extended vintage was, according to the 1881 Household Cyclopedia, "exceedingly convenient for private families, for by putting one of the cakes in a saucepan with about a quart of water, and a little salt, a basin of good broth may be made in a few minutes."
Discussed on
- "Portable soup" | 2016-02-09 | 17 Upvotes 1 Comments
🔗 MARS-500
The Mars-500 mission was a psychosocial isolation experiment conducted between 2007 and 2011 by Russia, the European Space Agency and China, in preparation for an unspecified future crewed spaceflight to the planet Mars. The experiment's facility was located at the Russian Academy of Sciences' Institute of Biomedical Problems (IBMP) in Moscow, Russia.
Between 2007 and 2011, three different crews of volunteers lived and worked in a mock-up spacecraft at IBMP. The final stage of the experiment, which was intended to simulate a 520-day crewed mission, was conducted by an all-male crew consisting of three Russians (Alexey Sitev, Sukhrob Kamolov, Alexander Smoleevskij), a Frenchman (Romain Charles), an Italian (Diego Urbina) and a Chinese citizen (Yue Wang). The mock-up facility simulated an Earth-Mars shuttle spacecraft, an ascent-descent craft, and the Martian surface. The volunteers who participated in the three stages included professionals with experience in engineering, medicine, biology, and human spaceflight. The experiment yielded important data on the physiological, social and psychological effects of long-term close-quarters isolation.
Discussed on
- "MARS-500" | 2015-08-29 | 49 Upvotes 14 Comments
🔗 Dadda Multiplier
The Dadda multiplier is a hardware binary multiplier design invented by computer scientist Luigi Dadda in 1965. It uses a selection of full and half adders to sum the partial products in stages (the Dadda tree or Dadda reduction) until two numbers are left. The design is similar to the Wallace multiplier, but the different reduction tree reduces the required number of gates (for all but the smallest operand sizes) and makes it slightly faster (for all operand sizes).
Dadda and Wallace multipliers have the same three steps for two bit strings a and b of lengths ℓ₁ and ℓ₂ respectively:
- Multiply (logical AND) each bit of a by each bit of b, yielding ℓ₁ · ℓ₂ results, grouped by weight in columns
- Reduce the number of partial products by stages of full and half adders until we are left with at most two bits of each weight.
- Add the two remaining rows with a conventional adder to produce the final result.
As with the Wallace multiplier, the multiplication products of the first step carry different weights reflecting the magnitude of the original bit values in the multiplication. For example, the product of bit i of a and bit j of b has weight 2^(i+j).
Unlike Wallace multipliers that reduce as much as possible on each layer, Dadda multipliers attempt to minimize the number of gates used, as well as input/output delay. Because of this, Dadda multipliers have a less expensive reduction phase, but the final numbers may be a few bits longer, thus requiring slightly bigger adders.
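To make the three steps and the reduction rule concrete, here is a minimal behavioral sketch in Python. The function names and structure are mine, not from the article, and a bit-level simulation ignores the gate placement and wiring that real hardware designs optimize.

```python
def full_adder(x, y, z):
    """3 bits in -> (sum, carry): x + y + z == sum + 2 * carry."""
    return x ^ y ^ z, (x & y) | (x & z) | (y & z)

def half_adder(x, y):
    """2 bits in -> (sum, carry): x + y == sum + 2 * carry."""
    return x ^ y, x & y

def dadda_multiply(a, b, n):
    """Multiply two n-bit integers by simulating a Dadda reduction tree."""
    # Step 1: partial products.  AND bit i of a with bit j of b and drop
    # the result into the column of weight 2^(i+j).
    cols = [[] for _ in range(2 * n + 1)]
    for i in range(n):
        for j in range(n):
            cols[i + j].append(((a >> i) & 1) & ((b >> j) & 1))

    # Dadda stage heights 2, 3, 4, 6, 9, 13, ... (d_{k+1} = floor(3/2 d_k)),
    # keeping only the heights below the tallest column (n bits).
    heights = [2]
    while heights[-1] * 3 // 2 < n:
        heights.append(heights[-1] * 3 // 2)

    # Step 2: one reduction stage per target height, tallest target first.
    # A column is touched only if it exceeds the target: a half adder
    # (net -1 bit) when it is one over, full adders (net -2 bits each)
    # otherwise.  Carries feed the next column within the same stage.
    for d in reversed(heights):
        for w in range(2 * n):
            col = cols[w]
            while len(col) > d:
                if len(col) == d + 1:
                    s, c = half_adder(col.pop(), col.pop())
                else:
                    s, c = full_adder(col.pop(), col.pop(), col.pop())
                col.append(s)
                cols[w + 1].append(c)

    # Step 3: at most two bits of each weight remain; add the two rows
    # with a conventional adder.
    row0 = sum(col[0] << w for w, col in enumerate(cols) if len(col) > 0)
    row1 = sum(col[1] << w for w, col in enumerate(cols) if len(col) > 1)
    return row0 + row1

assert dadda_multiply(11, 13, n=4) == 143
```

The stage heights grow by a factor of 3/2 because a full adder turns three bits of one weight into two, so a single stage of full adders can bring a column of height floor(3d/2) down to d. Working down through these targets touches each column only when it actually exceeds the current limit, which is what gives Dadda trees their less expensive reduction phase compared with Wallace trees.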
Discussed on
- "Dadda Multiplier" | 2022-10-02 | 37 Upvotes 7 Comments
🔗 Go Away Green
Go Away Green refers to a range of paint colors used in Disney Parks to divert attention away from infrastructure. It has been compared to military camouflage like Olive Drab.
Imagineer John Hench wrote about developing such colors, "We chose a neutral gray-brown for the railing, a 'go away' color that did not call attention to itself, even though it was entirely unrelated to the Colonial color scheme."
Large attraction buildings visible inside or outside a park, such as Soarin’ at California Adventure or Indiana Jones Adventure at Disneyland, are often painted a muted green. Necessary in-park infrastructure like speakers, lamp posts, fences, trash cans, and the former entrance to Club 33 are also painted various shades of green.
This concept also extends to grays, browns, and blues for spaces with less greenery or buildings that extend above the tree line, such as Guardians of the Galaxy: Cosmic Rewind.
Discussed on
- "Go Away Green" | 2025-02-09 | 93 Upvotes 31 Comments
🔗 Norman-Arab-Byzantine Culture
The term Norman–Arab–Byzantine culture, Norman–Sicilian culture or, less inclusively, Norman–Arab culture (sometimes referred to as the "Arab–Norman civilization") refers to the interaction of the Norman, Byzantine Greek, Latin, and Arab cultures following the Norman conquest of the former Emirate of Sicily and North Africa from 1061 to around 1250. The civilization resulted from numerous exchanges in the cultural and scientific fields, based on the tolerance shown by the Normans towards the Latin- and Greek-speaking Christian populations and the former Arab Muslim settlers. As a result, Sicily under the Normans became a crossroads for the interaction between the Norman and Latin Catholic, Byzantine–Orthodox, and Arab–Islamic cultures.
Discussed on
- "Norman-Arab-Byzantine Culture" | 2023-04-05 | 10 Upvotes 2 Comments