Topic: Philosophy/Philosophy of mind
You are looking at all articles with the topic "Philosophy/Philosophy of mind". We found 15 matches.
Moravec's Paradox
Moravec's paradox is the observation by artificial intelligence and robotics researchers that, contrary to traditional assumptions, reasoning (which is high-level in humans) requires very little computation, but sensorimotor skills (comparatively low-level in humans) require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility".
Similarly, Minsky emphasized that the most difficult human skills to reverse engineer are those that are unconscious. "In general, we're least aware of what our minds do best", he wrote, and added "we're more aware of simple processes that don't work well than of complex ones that work flawlessly".
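Moravec's point that "high-level" reasoning is computationally cheap can be made concrete with a sketch: perfect play for tic-tac-toe (a small stand-in for the checkers-style game reasoning he mentions) fits in a few lines of brute-force minimax search, while no comparably short program exists for one-year-old-level perception or mobility. The game and code below are illustrative, not from Moravec's own work.

```python
# Exhaustive negamax search over tic-tac-toe: "adult-level" game reasoning
# in a handful of lines. Boards are 9-character strings of 'X', 'O', '.'.
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value with `player` to move: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:                       # the previous mover already won
        return 1 if w == player else -1
    if "." not in board:
        return 0                # board full: draw
    other = "O" if player == "X" else "X"
    # Try every move; the opponent's best value is negated for us.
    return max(-value(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == ".")

print(value("." * 9, "X"))  # perfect play from the empty board is a draw: 0
```

The entire game tree is searched in well under a second on commodity hardware, which is the asymmetry the paradox points at: the search is trivial, while the sensorimotor act of physically placing a piece on a real board is not.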
Discussed on
- "Moravec's Paradox" | 2023-06-10 | 13 Upvotes 4 Comments
- "Moravec's Paradox" | 2019-08-15 | 155 Upvotes 87 Comments
- "Moravec's paradox" | 2018-04-21 | 30 Upvotes 6 Comments
- "Moravec's paradox" | 2016-02-06 | 30 Upvotes 4 Comments
- "Moravec's paradox" | 2012-12-14 | 188 Upvotes 43 Comments
Alan Turing's 100th Birthday - Mathematician, logician, cryptanalyst, scientist
Alan Mathison Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of theoretical computer science and artificial intelligence. Despite these accomplishments, he was not fully recognised in his home country during his lifetime, due to his homosexuality, and because much of his work was covered by the Official Secrets Act.
During the Second World War, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's codebreaking centre that produced Ultra intelligence. For a time he led Hut 8, the section that was responsible for German naval cryptanalysis. Here, he devised a number of techniques for speeding the breaking of German ciphers, including improvements to the pre-war Polish bombe method, an electromechanical machine that could find settings for the Enigma machine.
Turing played a crucial role in cracking intercepted coded messages that enabled the Allies to defeat the Nazis in many crucial engagements, including the Battle of the Atlantic, and in so doing helped win the war. Due to the problems of counterfactual history, it is hard to estimate the precise effect Ultra intelligence had on the war, but at the upper end it has been estimated that this work shortened the war in Europe by more than two years and saved over 14 million lives.
After the war Turing worked at the National Physical Laboratory, where he designed the Automatic Computing Engine, one of the first designs for a stored-program computer. In 1948 Turing joined Max Newman's Computing Machine Laboratory at the Victoria University of Manchester, where he helped develop the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s.
Turing was prosecuted in 1952 for homosexual acts; the Labouchere Amendment of 1885 had mandated that "gross indecency" was a criminal offence in the UK. He accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday, from cyanide poisoning. An inquest determined his death as a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning.
In 2009, following an Internet campaign, British Prime Minister Gordon Brown made an official public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted Turing a posthumous pardon in 2013. The Alan Turing law is now an informal term for a 2017 law in the United Kingdom that retroactively pardoned men cautioned or convicted under historical legislation that outlawed homosexual acts.
Discussed on
- "Alan Turing died 70 years ago" | 2024-06-07 | 103 Upvotes 136 Comments
- "Alan Turing's 100th Birthday - Mathematician, logician, cryptanalyst, scientist" | 2012-06-22 | 146 Upvotes 19 Comments
- "Happy Birthday, Alan Turing" | 2011-06-23 | 78 Upvotes 6 Comments
Stochastic Parrot
In machine learning, "stochastic parrot" is a term coined by Emily M. Bender in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell. The term refers to "large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing."
Discussed on
- "Stochastic Parrot" | 2023-06-13 | 125 Upvotes 161 Comments
Ship of Theseus
In the metaphysics of identity, the ship of Theseus is a thought experiment that raises the question of whether an object that has had all of its components replaced remains fundamentally the same object. The concept is one of the oldest in Western philosophy, having been discussed by the likes of Heraclitus and Plato by c. 500–400 BC.
Discussed on
- "Ship of Theseus" | 2022-12-10 | 28 Upvotes 13 Comments
- "Ship of Theseus" | 2015-08-19 | 46 Upvotes 50 Comments
Bonini's Paradox
Bonini's paradox, named after Stanford business professor Charles Bonini, explains the difficulty in constructing models or simulations that fully capture the workings of complex systems (such as the human brain).
Discussed on
- "Bonini's Paradox" | 2019-05-26 | 88 Upvotes 52 Comments
The knowledge argument
The knowledge argument (also known as Mary's room or Mary the super-scientist) is a philosophical thought experiment proposed by Frank Jackson in his article "Epiphenomenal Qualia" (1982) and extended in "What Mary Didn't Know" (1986). The experiment is intended to argue against physicalism: the view that the universe, including all that is mental, is entirely physical. The debate that emerged following its publication became the subject of an edited volume, There's Something About Mary (2004), which includes replies from such philosophers as Daniel Dennett, David Lewis, and Paul Churchland.
Discussed on
- "The knowledge argument" | 2015-03-19 | 31 Upvotes 54 Comments
Bicameralism (Psychology)
Bicameralism (the condition of being divided into "two chambers") is a hypothesis in psychology that argues that the human mind once operated in a state in which cognitive functions were divided between one part of the brain which appears to be "speaking", and a second part which listens and obeys: a bicameral mind. The term was coined by Julian Jaynes, who presented the idea in his 1976 book The Origin of Consciousness in the Breakdown of the Bicameral Mind, wherein he made the case that a bicameral mentality was the normal and ubiquitous state of the human mind as recently as 3,000 years ago, near the end of the Mediterranean Bronze Age.
Discussed on
- "Bicameralism (Psychology)" | 2019-07-06 | 51 Upvotes 29 Comments
The Hard Problem of Consciousness
The hard problem of consciousness is the problem of explaining why and how sentient organisms have qualia or phenomenal experiences: how and why it is that some internal states are subjective, felt states, such as heat or pain, rather than merely nonsubjective, unfelt states, as in a thermostat or a toaster. The philosopher David Chalmers, who introduced the term "hard problem" of consciousness, contrasts this with the "easy problems" of explaining the ability to discriminate, integrate information, report mental states, focus attention, and so forth. Easy problems are (relatively) easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, regardless of how complex or poorly understood the phenomena of the easy problems may be, they can eventually be understood by relying entirely on standard scientific methodologies. Chalmers claims that the problem of experience is distinct from this set and will "persist even when the performance of all the relevant functions is explained".
The existence of a "hard problem" is controversial. It has been accepted by philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block and cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. However, its existence is disputed by philosophers of mind such as Daniel Dennett, Massimo Pigliucci, and Keith Frankish and cognitive neuroscientists such as Stanislas Dehaene and Bernard Baars.
Discussed on
- "The Hard Problem of Consciousness" | 2010-05-03 | 28 Upvotes 18 Comments
The reason why Blub programmers have such a hard time picking up more powerful languages.
The hypothesis of linguistic relativity, part of relativism, also known as the Sapir–Whorf hypothesis, or Whorfianism, is a principle claiming that the structure of a language affects its speakers' world view or cognition, and thus people's perceptions are relative to their spoken language.
The principle is often defined in one of two versions: the strong hypothesis, which was held by some of the early linguists before World War II, and the weak hypothesis, mostly held by some of the modern linguists.
- The strong version says that language determines thought and that linguistic categories limit and determine cognitive categories.
- The weak version says that linguistic categories and usage only influence thought and decisions.
The principle was accepted and then abandoned by linguists over the early 20th century, as social perceptions of "the other" changed, especially after World War II. The formulation of arguments against linguistic relativity is attributed largely to Noam Chomsky.
Discussed on
- "The reason why Blub programmers have such a hard time picking up more powerful languages." | 2007-09-29 | 7 Upvotes 28 Comments
Chinese room argument
The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave. The argument was first presented by philosopher John Searle in his paper, "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980. It has been widely discussed in the years since. The centerpiece of the argument is a thought experiment known as the Chinese room.
The argument is directed against the philosophical positions of functionalism and computationalism, which hold that the mind may be viewed as an information-processing system operating on formal symbols. Specifically, the argument is intended to refute a position Searle calls strong AI: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
Although it was originally presented in reaction to the statements of artificial intelligence (AI) researchers, it is not an argument against the behavioural goals of AI research, because it does not limit the amount of intelligence a machine can display. The argument applies only to digital computers running programs and does not apply to machines in general.
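The "room" itself can be caricatured in code: a program that maps input symbol strings to output symbol strings by pure lookup, with no semantics attached to either side. The rules and phrases below are invented for illustration; the point is only that a formal symbol manipulator of this kind, however much it were scaled up, is what Searle argues can never amount to understanding.

```python
# A sketch of the room's "rule book": symbol-to-symbol lookup only.
# The operator matches character shapes against the table; nothing in
# the program models the meaning of any string it handles.
RULE_BOOK = {
    "你好": "你好，很高兴见到你",   # greeting -> greeting reply
    "你懂中文吗": "当然懂",         # "do you understand Chinese?" -> "of course"
}

def chinese_room(symbols: str) -> str:
    # Unrecognised input gets a stock reply ("please say that again"),
    # just as the thought experiment's rule book might specify.
    return RULE_BOOK.get(symbols, "请再说一遍")

print(chinese_room("你懂中文吗"))  # replies "当然懂" with no understanding
```

A fluent-seeming exchange emerges from the table alone, which is the intuition pump: from outside the room the behaviour may pass for understanding, while inside there is only rule-following.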
Discussed on
- "Chinese room argument" | 2017-07-04 | 11 Upvotes 9 Comments