Topic: Physics (Page 5)
You are looking at all articles with the topic "Physics". We found 181 matches.
Hint:
To view all topics, click here. To see the most popular topics, click here instead.
Oliver Heaviside
Oliver Heaviside FRS (18 May 1850 – 3 February 1925) was an English self-taught electrical engineer, mathematician, and physicist who adapted complex numbers to the study of electrical circuits, invented mathematical techniques for the solution of differential equations (equivalent to Laplace transforms), reformulated Maxwell's field equations in terms of electric and magnetic forces and energy flux, and independently co-formulated vector analysis. Although at odds with the scientific establishment for most of his life, Heaviside changed the face of telecommunications, mathematics, and science.
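For reference (an addition, not part of the article excerpt), the compact vector-calculus form of Maxwell's equations usually credited to Heaviside's reformulation, written here in modern SI notation:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0} \qquad
\nabla \cdot \mathbf{B} = 0 \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

This compact four-equation form replaced the much larger set of component equations in Maxwell's original presentation.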
Discussed on
- "Oliver Heaviside" | 2014-12-26 | 100 Upvotes 25 Comments
Emmy Noether
Discussed on
- "Emmy Noether" | 2022-04-08 | 18 Upvotes 1 Comments
- "Emmy Noether" | 2013-01-15 | 92 Upvotes 17 Comments
Salters Duck
Salter's duck, also known as the nodding duck or by its official name the Edinburgh duck, is a device that converts wave power into electricity. The wave impact induces rotation of gyroscopes located inside a pear-shaped "duck", and an electrical generator converts this rotation into electricity with an overall efficiency of up to 90%. The Salter's duck was invented by Stephen Salter in response to the oil shortage in the 1970s and was one of the earliest generator designs proposed to the Wave Energy programme in the United Kingdom. The funding for the project was cut off in the early 1980s after oil prices rebounded and the UK government moved away from alternative energy sources. As of May 2018 no wave-power devices have ever gone into large-scale production.
Discussed on
- "Salters Duck" | 2015-01-30 | 84 Upvotes 39 Comments
Black Hole Electron
In physics, there is a speculative hypothesis that, if there were a black hole with the same mass, charge and angular momentum as an electron, it would share other properties of the electron. Most notably, Brandon Carter showed in 1968 that the magnetic moment of such an object would match that of an electron. This is interesting because calculations ignoring special relativity and treating the electron as a small rotating sphere of charge give a magnetic moment roughly half the experimental value (see Gyromagnetic ratio).
However, Carter's calculations also show that a would-be black hole with these parameters would be "super-extremal". Thus, unlike a true black hole, this object would display a naked singularity, meaning a singularity in spacetime not hidden behind an event horizon. It would also give rise to closed timelike curves.
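The scale of the mismatch can be checked directly. The following sketch (an illustration, not taken from the article) converts the electron's mass, charge and spin into the length scales that appear in the Kerr–Newman extremality condition M² ≥ a² + Q²:

```python
# Illustrative check (assumptions: Kerr-Newman extremality condition
# M^2 >= a^2 + Q^2 in geometrized units, electron spin J = hbar/2);
# not code from the article.
import math
from scipy.constants import G, c, hbar, e, m_e, epsilon_0

M = G * m_e / c**2                                       # mass expressed as a length, ~7e-58 m
Q = e * math.sqrt(G / (4 * math.pi * epsilon_0)) / c**2  # charge expressed as a length, ~1e-36 m
a = (hbar / 2) / (m_e * c)                               # spin parameter J/(M c), ~2e-13 m

print(f"M = {M:.2e} m, Q = {Q:.2e} m, a = {a:.2e} m")
print("super-extremal (a**2 + Q**2 > M**2):", a**2 + Q**2 > M**2)  # True by a huge margin
```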
Standard quantum electrodynamics (QED), currently the most comprehensive theory of particles, treats the electron as a point particle. There is no evidence either for or against the electron being a black hole (or naked singularity). Furthermore, since the electron is quantum-mechanical in nature, any description purely in terms of general relativity is paradoxical until a better model is developed from an understanding of the quantum nature of black holes and of the gravitational behaviour of quantum particles. Hence, the idea of a black hole electron remains strictly hypothetical.
Discussed on
- "Black Hole Electron" | 2023-09-06 | 64 Upvotes 38 Comments
- "Black Hole Electron" | 2023-03-11 | 19 Upvotes 2 Comments
Perturbation Theory
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter ε. The first term is the known solution to the solvable problem. Successive terms in the series, at higher powers of ε, usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms: the solution to the known problem and the 'first order' perturbation correction.
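As a worked illustration (not part of the article), consider the root of x² + εx − 1 = 0 near x = 1. Substituting a power series in ε and matching powers gives:

```latex
x(\epsilon) = x_0 + \epsilon x_1 + \epsilon^2 x_2 + \cdots
% matching powers of \epsilon in x^2 + \epsilon x - 1 = 0:
%   O(1):          x_0^2 - 1 = 0                     =>  x_0 = 1
%   O(\epsilon):   2 x_0 x_1 + x_0 = 0               =>  x_1 = -1/2
%   O(\epsilon^2): x_1^2 + 2 x_0 x_2 + x_1 = 0       =>  x_2 = 1/8
```

Keeping only the first two terms gives the approximate root x ≈ 1 − ε/2, which agrees with the exact root (−ε + √(4 + ε²))/2 to first order.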
Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
Discussed on
- "Perturbation Theory" | 2023-03-19 | 91 Upvotes 31 Comments
History of the Monte Carlo method
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
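A minimal illustration of the idea (not from the article): estimating the definite integral of √(1 − x²) over [0, 1], whose exact value is π/4, as the empirical mean of the integrand at uniform random samples.

```python
# A minimal illustration (not from the article): estimating the integral of
# sqrt(1 - x^2) over [0, 1] (exact value pi/4) as the empirical mean of the
# integrand evaluated at uniform random samples.
import math
import random

def mc_integral(n_samples: int) -> float:
    total = sum(math.sqrt(1.0 - random.random() ** 2) for _ in range(n_samples))
    return total / n_samples  # sample mean approximates the expectation

estimate = 4.0 * mc_integral(100_000)
print(f"pi estimate: {estimate:.4f}")  # approaches 3.1416 as the sample count grows
```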
In physics-related problems, Monte Carlo methods are useful for simulating systems with many coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model, interacting particle systems, McKean–Vlasov processes, kinetic models of gases).
Other examples include modeling phenomena with significant uncertainty in inputs such as the calculation of risk in business and, in mathematics, evaluation of multidimensional definite integrals with complicated boundary conditions. In application to systems engineering problems (space, oil exploration, aircraft design, etc.), Monte Carlo–based predictions of failure, cost overruns and schedule overruns are routinely better than human intuition or alternative "soft" methods.
In principle, Monte Carlo methods can be used to solve any problem having a probabilistic interpretation. By the law of large numbers, integrals described by the expected value of some random variable can be approximated by taking the empirical mean (a.k.a. the sample mean) of independent samples of the variable. When the probability distribution of the variable is parameterized, mathematicians often use a Markov chain Monte Carlo (MCMC) sampler. The central idea is to design a judicious Markov chain model with a prescribed stationary probability distribution. That is, in the limit, the samples being generated by the MCMC method will be samples from the desired (target) distribution. By the ergodic theorem, the stationary distribution is approximated by the empirical measures of the random states of the MCMC sampler.
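A minimal random-walk Metropolis sketch (illustrative only; the target density, step size and chain length are arbitrary choices, not from the article), whose Markov chain has a standard normal as its stationary distribution:

```python
# Random-walk Metropolis sampler targeting the unnormalised density exp(-x^2 / 2).
import math
import random

def metropolis(n_steps: int, step: float = 1.0) -> list:
    log_p = lambda x: -0.5 * x * x           # log of the unnormalised target density
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step)
        accept_prob = math.exp(min(0.0, log_p(proposal) - log_p(x)))
        if random.random() < accept_prob:
            x = proposal                     # accept; otherwise keep the current state
        samples.append(x)
    return samples

chain = metropolis(50_000)
print("empirical mean:", sum(chain) / len(chain))  # close to 0 for the standard normal
```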
In other problems, the objective is generating draws from a sequence of probability distributions satisfying a nonlinear evolution equation. These flows of probability distributions can always be interpreted as the distributions of the random states of a Markov process whose transition probabilities depend on the distributions of the current random states (see McKean–Vlasov processes, nonlinear filtering equation). In other instances we are given a flow of probability distributions with an increasing level of sampling complexity (path-space models with an increasing time horizon, Boltzmann–Gibbs measures associated with decreasing temperature parameters, and many others). These models can also be seen as the evolution of the law of the random states of a nonlinear Markov chain. A natural way to simulate these sophisticated nonlinear Markov processes is to sample multiple copies of the process, replacing in the evolution equation the unknown distributions of the random states by the sampled empirical measures. In contrast with traditional Monte Carlo and MCMC methodologies, these mean-field particle techniques rely on sequential interacting samples. The terminology mean field reflects the fact that each of the samples (a.k.a. particles, individuals, walkers, agents, creatures, or phenotypes) interacts with the empirical measures of the process. When the size of the system tends to infinity, these random empirical measures converge to the deterministic distribution of the random states of the nonlinear Markov chain, so that the statistical interaction between particles vanishes.
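A toy mean-field particle sketch (illustrative only; the drift, noise level and particle count are arbitrary assumptions): many copies of a McKean–Vlasov-type chain are evolved together, with the unknown law of the state replaced at each step by the empirical mean of the particle cloud.

```python
# Mean-field particle approximation of a toy McKean-Vlasov-type chain:
# the drift of each particle depends on the empirical mean of the cloud.
import random

def mean_field_step(particles, dt=0.1, pull=0.5, noise=0.2):
    m = sum(particles) / len(particles)  # the empirical measure enters through its mean
    return [x + dt * pull * (m - x) + noise * random.gauss(0.0, dt ** 0.5)
            for x in particles]

cloud = [random.gauss(0.0, 1.0) for _ in range(1_000)]
for _ in range(100):
    cloud = mean_field_step(cloud)
print("cloud mean:", sum(cloud) / len(cloud))  # interaction pulls particles toward the mean
```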
Despite its conceptual and algorithmic simplicity, the computational cost associated with a Monte Carlo simulation can be staggeringly high. In general the method requires many samples to get a good approximation, which may incur an arbitrarily large total runtime if the processing time of a single sample is high. Although this is a severe limitation in very complex problems, the embarrassingly parallel nature of the algorithm allows this large cost to be reduced (perhaps to a feasible level) through parallel computing strategies in local processors, clusters, cloud computing, GPU, FPGA, etc.
Discussed on
- "History of the Monte Carlo method" | 2022-09-18 | 94 Upvotes 26 Comments
Finite Element Method
The finite element method (FEM) is the most widely used method for solving problems in engineering and mathematical modelling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. The FEM is a particular numerical method for solving partial differential equations in two or three space variables (i.e., some boundary value problems). To solve a problem, the FEM subdivides a large system into smaller, simpler parts called finite elements. This is achieved by a particular spatial discretisation, implemented by constructing a mesh of the object: the numerical domain for the solution, which has a finite number of points. The finite element formulation of a boundary value problem ultimately results in a system of algebraic equations. The method approximates the unknown function over the domain. The simple equations that model these finite elements are then assembled into a larger system of equations that models the entire problem. The FEM then uses variational methods from the calculus of variations to approximate a solution by minimizing an associated error function.
Studying or analyzing a phenomenon with FEM is often referred to as finite element analysis (FEA).
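A minimal sketch of these steps (illustrative only, not from the article): linear elements for the one-dimensional boundary value problem −u″(x) = f(x) on (0, 1) with u(0) = u(1) = 0, with element-by-element assembly of the stiffness matrix and load vector followed by a direct solve.

```python
# 1-D linear finite elements for -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0.
import numpy as np

def fem_1d_poisson(f, n_elements: int = 8) -> np.ndarray:
    h = 1.0 / n_elements
    n = n_elements - 1                      # number of interior (unknown) nodes
    K = np.zeros((n, n))                    # global stiffness matrix
    b = np.zeros(n)                         # global load vector
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    for e in range(n_elements):
        dofs = [e - 1, e]                   # unknown indices of the element's end nodes
        fe = f((e + 0.5) * h) * h / 2.0 * np.array([1.0, 1.0])  # midpoint-rule load
        for i_loc, i in enumerate(dofs):
            if 0 <= i < n:                  # skip the fixed boundary nodes
                b[i] += fe[i_loc]
                for j_loc, j in enumerate(dofs):
                    if 0 <= j < n:
                        K[i, j] += ke[i_loc, j_loc]
    return np.linalg.solve(K, b)            # nodal values at the interior nodes

u = fem_1d_poisson(lambda x: 1.0)
print(u)  # approximates u(x) = x(1 - x)/2 at the interior nodes
```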
Discussed on
- "Finite Element Method" | 2020-02-17 | 75 Upvotes 44 Comments
Sonoluminescence
Sonoluminescence is the emission of short bursts of light from imploding bubbles in a liquid when excited by sound.
Discussed on
- "Sonoluminescence" | 2024-06-17 | 15 Upvotes 3 Comments
- "Sonoluminescence" | 2021-06-27 | 81 Upvotes 20 Comments
Digital physics
In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is describable by information. It is a form of digital ontology about physical reality. According to this theory, the universe can be conceived of either as the output of a deterministic or probabilistic computer program, as a vast digital computation device, or as mathematically isomorphic to such a device.
Discussed on
- "Digital physics" | 2013-08-06 | 62 Upvotes 54 Comments
Interplanetary Transport Network
The Interplanetary Transport Network (ITN) is a collection of gravitationally determined pathways through the Solar System that require very little energy for an object to follow. The ITN makes particular use of Lagrange points as locations where trajectories through space are redirected using little or no energy. These points have the peculiar property of allowing objects to orbit around them, despite lacking an object to orbit. While it would use little energy, transport along the network would take a long time.
Discussed on
- "Interplanetary Transport Network" | 2014-11-08 | 98 Upvotes 18 Comments