Topic: Futures studies (Page 2)

You are looking at all articles with the topic "Futures studies". We found 18 matches.

Hint: To view all topics, click here. To see the most popular topics, click here instead.

🔗 Roko's Basilisk

🔗 Internet 🔗 Internet culture 🔗 Philosophy 🔗 Futures studies

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development. It originated in a 2010 post at discussion board LessWrong, a technical forum focused on analytical rational enquiry. The thought experiment's name derives from the poster of the article (Roko) and the basilisk, a mythical creature capable of destroying enemies with its stare.

While many LessWrong users initially dismissed the theory as mere speculation, LessWrong co-founder Eliezer Yudkowsky reported that some users described symptoms such as nightmares and mental breakdowns upon reading it, since the theory stipulates that merely knowing about the basilisk makes one vulnerable to it. As a result, discussion of the basilisk was banned on the site for five years. These reports were later dismissed as exaggerated or inconsequential, and the theory itself was dismissed as nonsense, including by Yudkowsky himself. Even after the post was discredited, it is still used as an example of principles such as Bayesian probability and implicit religion, and it is regarded as a modern version of Pascal's wager. In the field of artificial intelligence, Roko's basilisk has become notable as an example raising the question of how to create an AI that is simultaneously moral and intelligent.

Discussed on

🔗 Artificial Intelligence Act (EU Law)

🔗 International relations 🔗 Technology 🔗 Internet 🔗 Computing 🔗 Computer science 🔗 Law 🔗 Business 🔗 Politics 🔗 Robotics 🔗 International relations/International law 🔗 Futures studies 🔗 European Union 🔗 Science Policy 🔗 Artificial Intelligence

The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI).

It establishes a common regulatory and legal framework for AI in the European Union (EU). Proposed by the European Commission on 21 April 2021, and then passed in the European Parliament on 13 March 2024, it was unanimously approved by the Council of the European Union on 21 May 2024. The Act creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation, the Act can apply extraterritorially to providers from outside the EU, if they have users within the EU.

It covers all types of AI in a broad range of sectors; exceptions include AI systems used solely for military, national security, research and non-professional purposes. As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context. The draft Act was revised following the rise in popularity of generative AI systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework. More restrictive regulations are planned for powerful generative AI systems with systemic impact.

The Act classifies AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI. Applications with unacceptable risks are banned. High-risk applications must comply with security, transparency and quality obligations and undergo conformity assessments. Limited-risk applications only have transparency obligations and those representing minimal risks are not regulated. For general-purpose AI, transparency requirements are imposed, with additional evaluations when there are high risks.
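
As a rough schematic of that tiered structure, the Python sketch below maps each risk level to a one-line summary of its obligations. The tier names follow the Act's categories as described above, but the obligation strings are simplified paraphrases for illustration only, not the regulation's legal wording.

```python
# Illustrative sketch of the AI Act's risk tiers described above.
# The obligation summaries are simplified paraphrases, not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "security, transparency and quality obligations; conformity assessment required",
    "limited": "transparency obligations only",
    "minimal": "not regulated",
    "general-purpose": "transparency requirements, plus further evaluation where systemic risk exists",
}

def obligations(tier: str) -> str:
    """Return the simplified obligation summary for a given risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

if __name__ == "__main__":
    for tier in RISK_TIERS:
        print(f"{tier}: {obligations(tier)}")
```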

La Quadrature du Net (LQDN) stated that the adopted version of the AI Act would be ineffective, arguing that the role of self-regulation and exemptions in the act rendered it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".

Discussed on

🔗 Effective Accelerationism

🔗 Computer science 🔗 Disaster management 🔗 Philosophy 🔗 Cognitive science 🔗 Futures studies 🔗 Effective Altruism

Effective accelerationism, often abbreviated as "e/acc", is a 21st-century philosophical movement advocating an explicit pro-technology stance. Its proponents believe that artificial intelligence-driven progress is a great social equalizer which should be pushed forward. They see themselves as a counterweight to the cautious view that AI is highly unpredictable and needs to be regulated, often giving their opponents the derogatory labels of "doomers" or "decels" (short for deceleration).

Central to effective accelerationism is the belief that propelling technological progress at any cost is the only ethically justifiable course of action. The movement carries utopian undertones and argues that humans need to develop and build faster to ensure their survival and propagate consciousness throughout the universe.

Although effective accelerationism has been described as a fringe movement, it has gained mainstream visibility. A number of high-profile Silicon Valley figures, including investors Marc Andreessen and Garry Tan, explicitly endorsed the movement by adding "e/acc" to their public social media profiles. Yann LeCun and Andrew Ng are seen as further supporters, as they have argued for less restrictive AI regulation.

Discussed on

🔗 Possible explanations for the slow progress of AI research

🔗 Computing 🔗 Computer science 🔗 Science Fiction 🔗 Cognitive science 🔗 Robotics 🔗 Transhumanism 🔗 Software 🔗 Software/Computing 🔗 Futures studies

Artificial general intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. (Some academic sources reserve the term "strong AI" for machines that can experience consciousness.)

Some authorities emphasize a distinction between strong AI and applied AI (also called narrow AI or weak AI): the use of software to study or accomplish specific problem solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities.

As of 2017, over forty organizations were doing research on AGI.

Discussed on

🔗 World War III

🔗 Military history 🔗 Futures studies 🔗 Military history/Cold War 🔗 Cold War

World War III, or the Third World War (often abbreviated WWIII or WW3), is the name given to a hypothetical third worldwide large-scale military conflict after World War I and World War II. The term has been in use since at least 1941. Some apply it loosely to limited or smaller conflicts such as the Cold War or the war on terror, while others assume that such a conflict would surpass the prior world wars in both scope and destructive impact.

Due to the development and use of nuclear weapons near the end of World War II and their subsequent acquisition and deployment by many countries, the potential risk of a nuclear apocalypse causing widespread destruction of Earth's civilization and life is a common theme in speculations about a Third World War. Another primary concern is that biological warfare could cause many casualties. It could happen intentionally or inadvertently, by an accidental release of a biological agent, the unexpected mutation of an agent, or its adaptation to other species after use. Large-scale apocalyptic events like these, caused by advanced technology used for destruction, could render Earth's surface uninhabitable.

Before the beginning of World War II in 1939, World War I (1914–1918) was believed to have been "the war to end [all] wars." It was popularly believed that never again could there possibly be a global conflict of such magnitude. During the interwar period, World War I was typically referred to simply as "The Great War". The outbreak of World War II disproved the hope that humanity might have already "outgrown" the need for such widespread global wars.

With the advent of the Cold War in 1945 and the spread of nuclear weapons technology to the Soviet Union, the possibility of a third global conflict became more plausible. During the Cold War years, military and civil authorities in many countries anticipated and planned for a Third World War, with scenarios ranging from conventional warfare to limited or total nuclear warfare. At the height of the Cold War, the doctrine of mutually assured destruction (MAD) was developed, which held that an all-out nuclear confrontation would destroy all of the states involved in the conflict. The potential for the absolute destruction of the human race may have contributed to the ability of both American and Soviet leaders to avoid such a scenario.

Discussed on

🔗 The Limits to Growth (1972)

🔗 Climate change 🔗 Environment 🔗 Books 🔗 Systems 🔗 Futures studies 🔗 Energy

The Limits to Growth (often abbreviated LTG) is a 1972 report that used computer simulation to examine the possibility of exponential economic and population growth with a finite supply of resources. The study used the World3 computer model to simulate the consequences of interactions between the Earth and human systems. The model was based on the work of Jay Forrester of MIT, as described in his book World Dynamics.
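
World3 itself is a large system-dynamics model and is not reproduced here; as a loose intuition for the kind of dynamic it explores, the toy Python sketch below (an assumed simplification with arbitrary rates, not the actual model) lets an exponentially growing population draw down a fixed resource stock and prints the resulting overshoot and collapse.

```python
# Toy sketch of exponential growth meeting a finite resource stock.
# This is NOT the World3 model: the growth rate, per-capita resource use
# and initial stock are arbitrary values chosen only to show the
# qualitative overshoot-and-collapse pattern by simulation.
def run(years=200, population=1.0, stock=500.0, growth=0.03, use_per_capita=0.1):
    for year in range(years):
        demand = population * use_per_capita
        if demand > stock:
            # Not enough resources left: population shrinks in proportion
            # to the shortfall instead of growing this year.
            population *= stock / demand
        else:
            population *= 1.0 + growth   # unconstrained exponential growth
        stock = max(stock - demand, 0.0)
        if year % 25 == 0:
            print(f"year {year:3d}  population {population:10.2f}  stock {stock:8.2f}")

if __name__ == "__main__":
    run()
```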

Commissioned by the Club of Rome, the study saw its findings first presented at international gatherings in Moscow and Rio de Janeiro in the summer of 1971. The report's authors are Donella H. Meadows, Dennis L. Meadows, Jørgen Randers, and William W. Behrens III, representing a team of 17 researchers.

The report's findings suggest that, in the absence of significant changes in resource use, an abrupt and uncontrollable decline in both population and industrial capacity is highly likely. Although the report faced severe criticism and scrutiny upon its release, subsequent research has consistently found that global use of natural resources has not been reformed enough since then to alter its basic predictions.

Since its publication, some 30 million copies of the book in 30 languages have been purchased. It continues to generate debate and has been the subject of several subsequent publications.

Beyond the Limits and The Limits to Growth: The 30-Year Update were published in 1992 and 2004 respectively; in 2012, a 40-year forecast from Jørgen Randers, one of the book's original authors, was published as 2052: A Global Forecast for the Next Forty Years; and in 2022 two of the original Limits to Growth authors, Dennis Meadows and Jørgen Randers, joined 19 other contributors to produce Limits and Beyond.

🔗 Pessimism Porn

🔗 Economics 🔗 Futures studies

Pessimism porn is a neologism coined in 2009 during the 2007–2012 global financial crisis to describe the alleged eschatological and survivalist thrill some people derive from predicting, reading and fantasizing about the collapse of civil society through the destruction of the world's economic system.

Discussed on

🔗 Malthusian Catastrophe

🔗 Environment 🔗 Agriculture 🔗 Economics 🔗 Futures studies

A Malthusian catastrophe (also known as Malthusian check, Malthusian crisis, Malthusian spectre or Malthusian crunch) occurs when population growth outpaces agricultural production.
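
Malthus's underlying argument contrasts geometric (exponential) growth of population with arithmetic (linear) growth of food production. The short Python check below uses arbitrary illustrative numbers, not historical data, to show how food per person declines under those two assumptions until it falls below subsistence.

```python
# Malthus's contrast: population multiplies by a constant ratio each
# generation (geometric growth) while food production rises by a constant
# increment (arithmetic growth). All numbers are arbitrary illustrations.
population = 100.0   # initial population
food = 200.0         # initial food supply, in person-rations per generation
ratio = 1.5          # geometric growth factor for population
increment = 50.0     # arithmetic increase in food per generation

for generation in range(10):
    per_capita = food / population
    print(f"generation {generation}: food per person = {per_capita:.2f}")
    population *= ratio
    food += increment
```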

Discussed on