Popular Articles (Page 43)
Taikyoku Shogi
Taikyoku shōgi (Japanese: 大局将棋, lit. "ultimate chess") is the largest known variant of shogi (Japanese chess). The game was created around the mid-16th century (presumably by priests) and is based on earlier large board shogi games. Before the rediscovery of taikyoku shogi in 1997, tai shogi was believed to be the largest playable chess variant ever. It has not been shown that taikyoku shogi was ever widely played. There are only two sets of restored taikyoku shogi pieces and one of them is held at Osaka University of Commerce. One game may be played over several long sessions and require each player to make over a thousand moves.
Because the game was found only recently after centuries of obscurity, it is difficult to say exactly what all the rules were. Several documents describing the game have been found; however, there are differences between them. Many of the pieces appear in other shogi variants, but their moves may be different. The board, and likewise the pieces, were made much smaller, making archeological finds difficult to decipher. Research into this game continues for historical and cultural reasons, but also to satisfy the curious and those who wish to play what could be the most challenging chess-like game ever made; more research remains to be done, however. This article focuses on one likely set of rules that can make the game playable in modern times, but it is by no means canonical. These rules may change as more discoveries are made and secrets of the game unlocked.
Further, because of the terse and often incomplete wording of the historical sources for the large shogi variants (except for chu shogi and, to a lesser extent, dai shogi, which were at certain times the most prestigious forms of shogi being played), the historical rules of taikyoku shogi are not clear. Different sources often differ significantly in the moves attributed to the pieces, and the degree of contradiction (summarised below, with most known alternative moves listed) is such that it is likely impossible to reconstruct the "true historical rules" with any degree of certainty, if there ever was such a thing. It is not clear if the game was ever played much historically, as there is no record of any sets having been made.
Discussed on
- "Taikyoku Shogi" | 2020-10-27 | 230 Upvotes 116 Comments
Zenzizenzizenzic
Zenzizenzizenzic is an obsolete form of mathematical notation representing the eighth power of a number (that is, the zenzizenzizenzic of x is x⁸), dating from a time when powers were written out in words rather than as superscript numbers. This term was suggested by Robert Recorde, a 16th-century Welsh writer of popular mathematics textbooks, in his 1557 work The Whetstone of Witte (although his spelling was zenzizenzizenzike); he wrote that it "doeth represent the square of squares squaredly".
At the time Recorde proposed this notation, there was no easy way of denoting the powers of numbers other than squares and cubes. The root word for Recorde's notation is zenzic, which is a German spelling of the medieval Italian word censo, meaning "squared". Since the square of a square of a number is its fourth power, Recorde used the word zenzizenzic (spelled by him as zenzizenzike) to express it. Some of the terms had prior use in Latin: "zenzicubicus", "zensizensicus" and "zensizenzum". Similarly, as the sixth power of a number is equal to the square of its cube, Recorde used the word zenzicubike to express it; a more modern spelling, zenzicube, is found in Samuel Jeake's Logisticelogia. Finally, the word zenzizenzizenzic denotes the square of the square of a number's square, which is its eighth power: in modern notation, ((x²)²)² = x⁸.
Recorde proposed three mathematical terms by which any power (that is, index or exponent) greater than 1 could be expressed: zenzic, i.e. squared; cubic; and sursolid, i.e. raised to a prime number greater than three, the smallest of which is five. Sursolids were as follows: 5 was the first; 7, the second; 11, the third; 13, the fourth; etc.
Therefore, a number raised to the power of six would be zenzicubic, a number raised to the power of seven would be the second sursolid, hence bissursolid (not a multiple of two or three), a number raised to the twelfth power would be the "zenzizenzicubic" and a number raised to the power of ten would be the square of the (first) sursolid. The fourteenth power was the square of the second sursolid, and the twenty-second was the square of the third sursolid.
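To make the decomposition concrete, the following minimal Python sketch (not from the article; the helper names are invented for illustration) splits an exponent into factors of two ("zenzic", i.e. squared), factors of three (cubic), and at most one remaining prime greater than three (a sursolid, numbered 5 = first, 7 = second, 11 = third, and so on), then prints a Recorde-style description:

```python
def recorde_parts(n: int):
    """Split exponent n into (number of 2s, number of 3s, leftover factor)."""
    twos = threes = 0
    while n % 2 == 0:
        n //= 2
        twos += 1
    while n % 3 == 0:
        n //= 3
        threes += 1
    return twos, threes, n  # leftover is 1 or, ideally, a single prime > 3


def describe(n: int) -> str:
    """Describe the n-th power in Recorde's terms (a rough sketch only)."""
    twos, threes, rest = recorde_parts(n)
    words = ["zenzic (squared)"] * twos + ["cubic"] * threes
    if rest > 1:
        # number the sursolids: 5 is the first, 7 the second, 11 the third, ...
        primes = [p for p in range(5, rest + 1)
                  if all(p % d for d in range(2, int(p ** 0.5) + 1))]
        if rest in primes:
            words.append(f"sursolid #{primes.index(rest) + 1}")
        else:
            words.append(f"(factor {rest} has no single-prime sursolid name)")
    return " of ".join(words) if words else "first power (no mark)"


# e.g. 6 -> zenzic (squared) of cubic (zenzicubic), 7 -> sursolid #2 (bissursolid)
for exp in (6, 7, 8, 10, 12, 14, 22):
    print(exp, "->", describe(exp))
```

For the exponents listed above this reproduces, for example, the tenth power as the square of the first sursolid and the twenty-second as the square of the third.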
Curiously, Jeake's text appears to designate a written exponent of 0 as being equal to an "absolute number, as if it had no Mark", thus using the notation x⁰ to refer to x alone, while a written exponent of 1, in his text, denotes "the Root of any number", thus using the notation x¹ to refer to what is now known as x^(1/2), that is, √x.
The word, as well as the system, is obsolete except as a curiosity; the Oxford English Dictionary (OED) has only one citation for it. As well as being a mathematical oddity, it survives as a linguistic oddity: zenzizenzizenzic has more Zs than any other word in the OED.
Samuel Jeake the Younger gives zenzizenzizenzizenzike (the square of the square of the square of the square, or 16th power) in a table in A Compleat Body of Arithmetick.
Discussed on
- "Zenzizenzizenzic" | 2016-07-21 | 258 Upvotes 89 Comments
Need for Cognition
The need for cognition (NFC), in psychology, is a personality variable reflecting the extent to which individuals are inclined towards effortful cognitive activities.
Need for cognition has been variously defined as "a need to structure relevant situations in meaningful, integrated ways" and "a need to understand and make reasonable the experiential world". Higher NFC is associated with increased appreciation of debate, idea evaluation, and problem solving. Those with a high need for cognition may be inclined towards high elaboration. Those with a lower need for cognition may display opposite tendencies, and may process information more heuristically, often through low elaboration.
Need for cognition is closely related to the five-factor model domain of openness to experience, to typical intellectual engagement, and to epistemic curiosity.
Discussed on
- "Need for Cognition" | 2023-03-02 | 238 Upvotes 108 Comments
Kalman Filter
In statistics and control theory, Kalman filtering, also known as linear quadratic estimation (LQE), is an algorithm that uses a series of measurements observed over time, containing statistical noise and other inaccuracies, and produces estimates of unknown variables that tend to be more accurate than those based on a single measurement alone, by estimating a joint probability distribution over the variables for each timeframe. The filter is named after Rudolf E. Kálmán, one of the primary developers of its theory.
The Kalman filter has numerous applications in technology. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and dynamically positioned ships. Furthermore, the Kalman filter is a widely applied concept in time series analysis used in fields such as signal processing and econometrics. Kalman filters also are one of the main topics in the field of robotic motion planning and control and can be used in trajectory optimization. The Kalman filter also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, use of the Kalman filter supports a realistic model for making estimates of the current state of the motor system and issuing updated commands.
The algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables, along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. The algorithm is recursive. It can run in real time, using only the present input measurements and the previously calculated state and its uncertainty matrix; no additional past information is required.
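As a concrete illustration of the predict/update cycle just described, here is a minimal one-dimensional sketch in Python (a random-walk state observed through noisy measurements; the variable names and the noise variances q and r are assumptions chosen for the example, not values from any particular application):

```python
# Minimal one-dimensional Kalman filter sketch: a slowly drifting scalar state
# observed through noisy measurements z. q and r are the assumed process and
# measurement noise variances; x and p are the state estimate and its variance.
# These values and names are illustrative assumptions, not from the article.

def kalman_1d(measurements, x0=0.0, p0=1.0, q=1e-4, r=0.1):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: with a random-walk model the estimate carries over
        # unchanged and its uncertainty grows by the process variance.
        p = p + q
        # Update: the Kalman gain k weights the new measurement more heavily
        # when the prediction is uncertain relative to the measurement noise.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates


# Example: noisy readings of a quantity that is really about 1.0
print(kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0]))
```

Only the current estimate x and its variance p are carried between steps, which matches the recursive, no-past-history property noted above.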
Optimality of the Kalman filter assumes that the errors are Gaussian. In the words of Rudolf E. Kálmán: "In summary, the following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense.
Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The underlying model is a hidden Markov model where the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. The Kalman filter has also been used successfully in multi-sensor fusion and in distributed sensor networks to develop distributed or consensus Kalman filters.
Discussed on
- "Kalman Filter" | 2021-03-05 | 252 Upvotes 94 Comments
Postzegelcode
A postzegelcode is a hand-written method of franking in the Netherlands. It consists of a code containing nine numbers and letters that customers can purchase online from PostNL and write directly on their piece of mail within five days as proof-of-payment in place of a postage stamp.
For mail within the Netherlands the nine letters and numbers are written as a 3x3 grid. For international mail there is an additional fourth row containing the letters P, N, L.
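As a rough illustration of that layout, here is a small Python sketch; the spacing on the envelope and the nine-character sample code are assumptions made up for the example:

```python
# Sketch of the postzegelcode layout described above: nine letters/digits
# written as a 3x3 grid, with an extra "P N L" row for international mail.
# The spacing and the sample code below are assumptions for illustration.

def format_postzegelcode(code: str, international: bool = False) -> str:
    code = code.replace(" ", "").upper()
    if len(code) != 9:
        raise ValueError("a postzegelcode consists of exactly nine characters")
    rows = [" ".join(code[i:i + 3]) for i in range(0, 9, 3)]
    if international:
        rows.append("P N L")
    return "\n".join(rows)


print(format_postzegelcode("1ABC2DEF3", international=True))
```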
The system was started in 2013. Initially the postzegelcode was more expensive than a stamp because additional handling systems were required; for a while afterwards it was cheaper, and eventually the two were set to the same price.
In December 2020, 590,000 people sent cards with postzegelcodes.
Discussed on
- "Postzegelcode" | 2024-06-30 | 241 Upvotes 105 Comments
Peak car
Peak car (also peak car use or peak travel) is a hypothesis that motor vehicle distance traveled per capita, predominantly by private car, has peaked and will now fall in a sustained manner. The theory was developed as an alternative to the prevailing market saturation model, which suggested that car use would saturate and then remain reasonably constant, or to GDP-based theories which predict that traffic will increase again as the economy improves, linking recent traffic reductions to the Great Recession of 2008.
The theory was proposed following reductions in car use, which have now been observed in Australia, Belgium, France, Germany, Iceland, Japan (early 1990s), New Zealand, Sweden, the United Kingdom (many cities from about 1994) and the United States. A study by Volpe Transportation in 2013 noted that average miles driven by individuals in the United States had declined from 900 miles (1,400 km) per month in 2004 to 820 miles (1,320 km) in July 2012, and that the decline had continued even after the recent upturn in the US economy.
A number of academics have written in support of the theory, including Phil Goodwin, formerly Director of the transport research groups at Oxford University and UCL, and David Metz, a former Chief Scientist of the UK Department of Transport. The theory is disputed by the UK Department for Transport, which predicts that road traffic in the United Kingdom will grow by 50% by 2036, and by Professor Stephen Glaister, Director of the RAC Foundation, who says traffic will start increasing again as the economy improves. Unlike peak oil, a theory based on a reduction in the ability to extract oil due to resource depletion, peak car is attributed to more complex and less understood causes.
Discussed on
- "Peak car" | 2015-05-03 | 210 Upvotes 134 Comments
Joseph Nacchio
Joseph P. Nacchio (born June 22, 1949 in Brooklyn, New York) is an American executive who was chairman of the board and chief executive officer of Qwest Communications International from 1997 to 2002. Nacchio was convicted of insider trading during his time heading Qwest. He claimed in court, with documentation, that his was the only company to demand legal authority for the surreptitious mass surveillance sought by the NSA, a program which began prior to the 11 September 2001 attacks.
He was convicted of 19 counts of insider trading in Qwest stock on April 19, 2007, charges that his defense team claimed were U.S. government retaliation for his refusal to give customer data to the National Security Agency in February 2001. This defense was not admissible in court because the U.S. Department of Justice filed an in limine motion, which is often used in national security cases, to exclude information which might reveal state secrets. Information from the Classified Information Procedures Act hearings in Nacchio's case was likewise ruled inadmissible.
On July 27, 2007, he was sentenced to six years in federal prison, and after appeals failed he reported to Federal Correctional Institution, Schuylkill in Schuylkill County, Pennsylvania on April 14, 2009 to serve his sentence. Nacchio finished serving his sentence on September 20, 2013.
Discussed on
- "Joseph Nacchio" | 2013-06-09 | 297 Upvotes 47 Comments
Levenshtein Distance
In information theory, linguistics and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965.
Levenshtein distance may also be referred to as edit distance, although that term may also denote a larger family of distance metrics known collectively as edit distance. It is closely related to pairwise string alignments.
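As a concrete illustration of the definition above, here is a standard dynamic-programming implementation (the Wagner-Fischer algorithm) in Python; it is a generic sketch rather than anything specific to this article:

```python
# Standard dynamic-programming computation of Levenshtein distance
# (Wagner-Fischer), keeping only one previous row of the distance matrix.

def levenshtein(a: str, b: str) -> int:
    if len(a) < len(b):
        a, b = b, a  # make b the shorter string so rows stay small
    previous = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        current = [i]  # distance from a[:i] to the empty string
        for j, cb in enumerate(b, start=1):
            current.append(min(
                previous[j] + 1,               # delete ca
                current[j - 1] + 1,            # insert cb
                previous[j - 1] + (ca != cb),  # substitute (free on a match)
            ))
        previous = current
    return previous[-1]


print(levenshtein("kitten", "sitting"))  # prints 3
```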
Discussed on
- "Levenshtein Distance" | 2019-08-22 | 215 Upvotes 126 Comments
How to get bias into a Wikipedia article
Tilt! How to get bias into a Wikipedia article
To all you budding propagandists in Wikiland: too many of you are working like a bunch of amateurs. Sorry to be so negative, but you have to understand that getting bias into the Wikipedia is a skill; it requires practice, finesse and imagination. It has to be learned; it is not a natural thing, though some have more talent for it than others.
I have been following the Middle East Wikipedia battleground for a few years now, and have been very impressed with the skill of some editors in introducing bias into articles. To ingenuous editors, some of these techniques may seem innocuous enough; in many cases, it is hard to see how proposed edits are biasing an article one way or another. It is, in fact, only in the last few months that I have been able to define what these techniques are and how they work to introduce bias.
The first thing you need to know as a budding propagandist is this: there are two levels at which bias is introduced into the Wikipedia: at the article level, and at the topic level. You need to set your sights high: you don't want to merely bias a single article, you want the entire Wikipedia on your side. Without understanding the importance of topic bias, it is hard to understand many of the article-level techniques, so I will start with the topic level.
Discussed on
- "How to get bias into a Wikipedia article" | 2013-09-30 | 230 Upvotes 111 Comments
Sweden warrantlessly wiretaps all Internet traffic crossing its borders
The National Defence Radio Establishment (Swedish: Försvarets radioanstalt, FRA) is a Swedish government agency organised under the Ministry of Defence. The two main tasks of FRA are signals intelligence (SIGINT) and support to government authorities and state-owned companies regarding computer security.
The FRA is not allowed to initiate any surveillance on its own, and operates purely on assignment from the Government, the Government Offices, the Armed Forces, the Swedish National Police Board and the Swedish Security Service (SÄPO). Decisions and oversight regarding information interception are provided by the Defence Intelligence Court and the Defence Intelligence Commission; additional oversight regarding protection of privacy is provided by the Swedish Data Protection Authority.
Discussed on
- "Sweden warrantlessly wiretaps all Internet traffic crossing its borders" | 2013-06-10 | 279 Upvotes 59 Comments