source,text https://en.wikipedia.org/wiki/Sherman%20trap,"The Sherman trap is a box-style animal trap designed for the live capture of small mammals. It was invented by Dr. H. B. Sherman in the 1920s and became commercially available in 1955. Since that time, the Sherman trap has been used extensively by researchers in the biological sciences for capturing animals such as mice, voles, shrews, and chipmunks. The Sherman trap consists of eight hinged pieces of sheet metal (either galvanized steel or aluminum) that allow the trap to be collapsed for storage or transport. Sherman traps are often set in grids and may be baited with grains and seed. Description The hinged design allows the trap to fold up flat into something only the width of one side panel. This makes it compact for storage and easy to transport to field locations (e.g. in a back pack). Both ends are hinged, but in normal operation the rear end is closed and the front folds inwards and latches the treadle, trigger plate, in place. When an animal enters far enough to be clear of the front door, their weight releases the latch and the door closes behind them. The lure or bait is placed at the far end and can be dropped in place through the rear hinged door. Variants Later, other variants that built upon the basic design, appeared - such as the Elliott trap used in Europe and Australasia. The Elliott trap has simplified the design slightly and is made from just 7 hinged panels." https://en.wikipedia.org/wiki/Symbolic%20language%20%28mathematics%29,"In mathematics, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations, expressions, and statements, and the entities or operands on which the operations are performed. See also Formal language Language of mathematics List of mathematical symbols Mathematical Alphanumeric Symbols Mathematical notation Notation (general) Symbolic language (other)" https://en.wikipedia.org/wiki/List%20of%20circle%20topics,"This list of circle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or concretely in physical space. It does not include metaphors like ""inner circle"" or ""circular reasoning"" in which the word does not refer literally to the geometric shape. 
Geometry and other areas of mathematics Circle Circle anatomy Annulus (mathematics) Area of a disk Bipolar coordinates Central angle Circular sector Circular segment Circumference Concentric Concyclic Degree (angle) Diameter Disk (mathematics) Horn angle Measurement of a Circle List of topics related to Pole and polar Power of a point Radical axis Radius Radius of convergence Radius of curvature Sphere Tangent lines to circles Versor Specific circles Apollonian circles Circles of Apollonius Archimedean circle Archimedes' circles – the twin circles doubtfully attributed to Archimedes Archimedes' quadruplets Circle of antisimilitude Bankoff circle Brocard circle Carlyle circle Circumscribed circle (circumcircle) Midpoint-stretching polygon Coaxal circles Director circle Fermat–Apollonius circle Ford circle Fuhrmann circle Generalised circle GEOS circle Great circle Great-circle distance Circle of a sphere Horocycle Incircle and excircles of a triangle Inscribed circle Johnson circles Magic circle (mathematics) Malfatti circles Nine-point circle Orthocentroidal circle Osculating circle Riemannian circle Schinzel circle Schoch circles Spieker circle Tangent circles Twin circles Unit circle Van Lamoen circle Villarceau circles Woo circles Circle-derived entities Apollonian gasket Arbelos Bicentric polygon Bicentric quadrilateral Coxeter's loxodromic sequence of tangent circles Cyclic quadrilateral Cycloid Ex-tangential quadrilateral Hawaiian earring Inscribed angle Inscribed angle theorem Inversive distance Inversive geometry Irrational rotation Lens (geometry) Lune Lune of " https://en.wikipedia.org/wiki/Sagrada%20Fam%C3%ADlia,"The Basílica i Temple Expiatori de la Sagrada Família, shortened as the Sagrada Família, is an under construction church in the Eixample district of Barcelona, Catalonia, Spain. It is the largest unfinished Catholic church in the world. Designed by architect Antoni Gaudí (1852–1926), his work on Sagrada Família is part of a UNESCO World Heritage Site. On 7 November 2010, Pope Benedict XVI consecrated the church and proclaimed it a minor basilica. On 19 March 1882, construction of the Sagrada Família began under architect Francisco de Paula del Villar. In 1883, when Villar resigned, Gaudí took over as chief architect, transforming the project with his architectural and engineering style, combining Gothic and curvilinear Art Nouveau forms. Gaudí devoted the remainder of his life to the project, and he is buried in the church's crypt. At the time of his death in 1926, less than a quarter of the project was complete. Relying solely on private donations, the Sagrada Família's construction progressed slowly and was interrupted by the Spanish Civil War. In July 1936, anarchists from the FAI set fire to the crypt and broke their way into the workshop, partially destroying Gaudí's original plans. In 1939, Francesc de Paula Quintana took over site management, which was able to go on due to the material that was saved from Gaudí's workshop and that was reconstructed from published plans and photographs. Construction resumed to intermittent progress in the 1950s. Advancements in technologies such as computer-aided design and computerised numerical control (CNC) have since enabled faster progress and construction passed the midpoint in 2010. However, some of the project's greatest challenges remain, including the construction of ten more spires, each symbolising an important Biblical figure in the New Testament. 
It was anticipated that the building would be completed by 2026, the centenary of Gaudí's death, but this has now been delayed due to the COVID-19 pandemic. Some aspec" https://en.wikipedia.org/wiki/Continuous%20availability,"Continuous availability is an approach to computer system and application design that protects users against downtime, whatever the cause and ensures that users remain connected to their documents, data files and business applications. Continuous availability describes the information technology methods to ensure business continuity. In early days of computing, availability was not considered business critical. With the increasing use of mobile computing, global access to online business transactions and business-to-business communication, continuous availability is increasingly important based on the need to support customer access to information systems. Solutions to continuous availability exists in different forms and implementations depending on the software and hardware manufacturer. The goal of the discipline is to reduce the user or business application downtime, which can have a severe impact on business operations. Inevitably, such downtime can lead to loss of productivity, loss of revenue, customer dissatisfaction and ultimately can damage a company's reputation. Degrees of availability The terms high availability, continuous operation, and continuous availability are generally used to express how available a system is. The following is a definition of each of these terms. High availability refers to the ability to avoid unplanned outages by eliminating single points of failure. This is a measure of the reliability of the hardware, operating system, middleware, and database manager software. Another measure of high availability is the ability to minimize the effect of an unplanned outage by masking the outage from the end users. This can be accomplished by providing redundancy or quickly restarting failed components. Availability is usually expressed as a percentage of uptime in a given year: When defining such a percentage it needs to be specified if it applies to the hardware, the IT infrastructure or the business application on top. Continuou" https://en.wikipedia.org/wiki/PBASIC,"PBASIC is a microcontroller-based version of BASIC created by Parallax, Inc. in 1992. PBASIC was created to bring ease of use to the microcontroller and embedded processor world. It is used for writing code for the BASIC Stamp microcontrollers. After the code is written, it is tokenized and loaded into an EEPROM on the microcontroller. These tokens are fetched by the microcontroller and used to generate instructions for the processor. Syntax When starting a PBASIC file, the programmer defines the version of the BASIC Stamp and the version of PBASIC that will be used. Variables and constants are usually declared first thing in a program. The DO LOOP, FOR NEXT loop, IF and ENDIF, and some standard BASIC commands are part of the language, but many commands like PULSOUT, HIGH, LOW, DEBUG, and FREQOUT are native to PBASIC and are used for special purposes that are not available in traditional BASIC (such as having the Basic Stamp ring a piezoelectric speaker, for example). 
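The continuous-availability passage above notes that availability is usually expressed as a percentage of uptime in a given year. A minimal Python sketch converting such a percentage into the maximum downtime it allows per year; the availability levels shown are illustrative choices, not taken from the excerpt:

```python
# Illustrative sketch for the "Continuous availability" passage above:
# availability expressed as a percentage of uptime in a given year,
# and the maximum yearly downtime that percentage allows.

MINUTES_PER_YEAR = 365 * 24 * 60  # non-leap year

def allowed_downtime_minutes(availability_percent: float) -> float:
    """Maximum yearly downtime consistent with the given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for pct in (99.0, 99.9, 99.99, 99.999):  # example levels, not from the source
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):8.1f} min/year")
```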
Programming In the Stamp Editor, the PBASIC integrated development environment (IDE) running on a (Windows) PC, the programmer has to select 1 of 7 different basic stamps, BS1, BS2, BS2E, BS2SX, BS2P, BS2PE, and BS2PX, which is done by using one of these commands: ' {$STAMP BS1} ' {$STAMP BS2} ' {$STAMP BS2e} ' {$STAMP BS2sx} ' {$STAMP BS2p} ' {$STAMP BS2pe} ' {$STAMP BS2px} The programmer must also select which PBASIC version to use, which he or she may express with commands such as these: ' {$PBASIC 1.0} ' use version 1.0 syntax (BS1 only) ' {$PBASIC 2.0} ' use version 2.0 syntax ' {$PBASIC 2.5} ' use version 2.5 syntax An example of a program using HIGH and LOW to make an LED blink, along with a DO...LOOP would be: DO HIGH 1 'turn LED on I/O pin 1 on PAUSE 1000 'keep it on for 1 second LOW 1 'turn it off PAUSE 500 'keep it off for 500 msec LOOP 'repeat forever An example of a pr" https://en.wikipedia.org/wiki/Monounsaturated%20fat,"In biochemistry and nutrition, a monounsaturated fat is a fat that contains a monounsaturated fatty acid (MUFA), a subclass of fatty acid characterized by having a double bond in the fatty acid chain with all of the remaining carbon atoms being single-bonded. By contrast, polyunsaturated fatty acids (PUFAs) have more than one double bond. Molecular description Monounsaturated fats are triglycerides containing one unsaturated fatty acid. Almost invariably that fatty acid is oleic acid (18:1 n−9). Palmitoleic acid (16:1 n−7) and cis-vaccenic acid (18:1 n−7) occur in small amounts in fats. Health Studies have shown that substituting dietary monounsaturated fat for saturated fat is associated with increased daily physical activity and resting energy expenditure. More physical activity was associated with a higher-oleic acid diet than one of a palmitic acid diet. From the study, it is shown that more monounsaturated fats lead to less anger and irritability. Foods containing monounsaturated fats may affect low-density lipoprotein (LDL) cholesterol and high-density lipoprotein (HDL) cholesterol. Levels of oleic acid along with other monounsaturated fatty acids in red blood cell membranes were positively associated with breast cancer risk. The saturation index (SI) of the same membranes was inversely associated with breast cancer risk. Monounsaturated fats and low SI in erythrocyte membranes are predictors of postmenopausal breast cancer. Both of these variables depend on the activity of the enzyme delta-9 desaturase (Δ9-d). In children, consumption of monounsaturated oils is associated with healthier serum lipid profiles. The Mediterranean diet is one heavily influenced by monounsaturated fats. People in Mediterranean countries consume more total fat than Northern European countries, but most of the fat is in the form of monounsaturated fatty acids from olive oil and omega-3 fatty acids from fish, vegetables, and certain meats like lamb, while consumption of satur" https://en.wikipedia.org/wiki/Metaproteomics,"Metaproteomics (also Community Proteomics, Environmental Proteomics, or Community Proteogenomics) is an umbrella term for experimental approaches to study all proteins in microbial communities and microbiomes from environmental sources. Metaproteomics is used to classify experiments that deal with all proteins identified and quantified from complex microbial communities. Metaproteomics approaches are comparable to gene-centric environmental genomics, or metagenomics. 
Origin of the term The term ""metaproteomics"" was proposed by Francisco Rodríguez-Valera to describe the genes and/or proteins most abundantly expressed in environmental samples. The term was derived from ""metagenome"". Wilmes and Bond proposed the term ""metaproteomics"" for the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time. At the same time, the terms ""microbial community proteomics"" and ""microbial community proteogenomics"" are sometimes used interchangeably for different types of experiments and results. Questions Addressed by Metaproteomics Metaproteomics allows for scientists to better understand organisms' gene functions, as genes in DNA are transcribed to mRNA which is then translated to protein. Gene expression changes can therefore be monitored through this method. Furthermore, proteins represent cellular activity and structure, so using metaproteomics in research can lead to functional information at the molecular level. Metaproteomics can also be used as a tool to assess the composition of a microbial community in terms of biomass contributions of individual members species in the community and can thus complement approaches that assess community composition based on gene copy counts such as 16S rRNA gene amplicon or metagenome sequencing. Proteomics of microbial communities The first proteomics experiment was conducted with the invention of two-dimensional polyacrylamide gel electrophoresis (2D-PAGE). The 1980s and 1990" https://en.wikipedia.org/wiki/Latch-up,"In electronics, a latch-up is a type of short circuit which can occur in an integrated circuit (IC). More specifically, it is the inadvertent creation of a low-impedance path between the power supply rails of a MOSFET circuit, triggering a parasitic structure which disrupts proper functioning of the part, possibly even leading to its destruction due to overcurrent. A power cycle is required to correct this situation. The parasitic structure is usually equivalent to a thyristor (or SCR), a PNPN structure which acts as a PNP and an NPN transistor stacked next to each other. During a latch-up when one of the transistors is conducting, the other one begins conducting too. They both keep each other in saturation for as long as the structure is forward-biased and some current flows through it - which usually means until a power-down. The SCR parasitic structure is formed as a part of the totem-pole PMOS and NMOS transistor pair on the output drivers of the gates. The latch-up does not have to happen between the power rails - it can happen at any place where the required parasitic structure exists. A common cause of latch-up is a positive or negative voltage spike on an input or output pin of a digital chip that exceeds the rail voltage by more than a diode drop. Another cause is the supply voltage exceeding the absolute maximum rating, often from a transient spike in the power supply. It leads to a breakdown of an internal junction. This frequently happens in circuits which use multiple supply voltages that do not come up in the required sequence on power-up, leading to voltages on data lines exceeding the input rating of parts that have not yet reached a nominal supply voltage. Latch-ups can also be caused by an electrostatic discharge event. Another common cause of latch-ups is ionizing radiation which makes this a significant issue in electronic products designed for space (or very high-altitude) applications. 
A single event latch-up is a latch-up caused by a si" https://en.wikipedia.org/wiki/Circuit%20design,"The process of circuit design can cover systems ranging from complex electronic systems down to the individual transistors within an integrated circuit. One person can often do the design process without needing a planned or structured design process for simple circuits. Still, teams of designers following a systematic approach with intelligently guided computer simulation are becoming increasingly common for more complex designs. In integrated circuit design automation, the term ""circuit design"" often refers to the step of the design cycle which outputs the schematics of the integrated circuit. Typically this is the step between logic design and physical design. Process Traditional circuit design usually involves several stages. Sometimes, a design specification is written after liaising with the customer. A technical proposal may be written to meet the requirements of the customer specification. The next stage involves synthesising on paper a schematic circuit diagram, an abstract electrical or electronic circuit that will meet the specifications. A calculation of the component values to meet the operating specifications under specified conditions should be made. Simulations may be performed to verify the correctness of the design. A breadboard or other prototype version of the design for testing against specification may be built. It may involve making any alterations to the circuit to achieve compliance. A choice as to a method of construction and all the parts and materials to be used must be made. There is a presentation of component and layout information to draughtspersons and layout and mechanical engineers for prototype production. This is followed by the testing or type-testing several prototypes to ensure compliance with customer requirements. Usually, there is a signing and approval of the final manufacturing drawings, and there may be post-design services (obsolescence of components, etc.). Specification The process of circuit design begins" https://en.wikipedia.org/wiki/List%20of%20Runge%E2%80%93Kutta%20methods,"Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation Explicit Runge–Kutta methods take the form Stages for implicit methods of s stages take the more general form, with the solution to be found over all s Each method listed on this page is defined by its Butcher tableau, which puts the coefficients of the method in a table as follows: For adaptive and implicit methods, the Butcher tableau is extended to give values of , and the estimated error is then . Explicit methods The explicit methods are those where the matrix is lower triangular. Forward Euler The Euler method is first order. The lack of stability and accuracy limits its popularity mainly to use as a simple introductory example of a numeric solution method. Explicit midpoint method The (explicit) midpoint method is a second-order method with two stages (see also the implicit midpoint method below): Heun's method Heun's method is a second-order method with two stages. It is also known as the explicit trapezoid rule, improved Euler's method, or modified Euler's method. 
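As a concrete illustration of the explicit methods listed here, the following Python sketch applies Heun's method (the two-stage, second-order explicit trapezoid rule named above) to the test problem y′ = y. The Butcher tableau itself is not reproduced in this excerpt, so the code assumes the standard form of the scheme; the step size and test equation are illustrative choices.

```python
import math

# Heun's method (explicit trapezoid rule), the two-stage second-order
# scheme named in the excerpt above, assumed here in its standard form:
#   k1 = f(t, y),  k2 = f(t + h, y + h*k1),  y_next = y + h/2 * (k1 + k2)

def heun_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = heun_step(f, t, y, h)
        t += h
    return y

# Test problem y' = y, y(0) = 1, exact solution e^t.
f = lambda t, y: y
approx = integrate(f, 0.0, 1.0, h=0.01, steps=100)
print(approx, math.exp(1.0))  # second-order accurate: error shrinks ~ h^2
```

Halving the step size h should reduce the error by roughly a factor of four, consistent with second-order accuracy.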
(Note: The ""eu"" is pronounced the same way as in ""Euler"", so ""Heun"" rhymes with ""coin""): Ralston's method Ralston's method is a second-order method with two stages and a minimum local error bound: Generic second-order method Kutta's third-order method Generic third-order method See Sanderse and Veldman (2019). for α ≠ 0, , 1: Heun's third-order method Van der Houwen's/Wray third-order method Ralston's third-order method Ralston's third-order method is used in the embedded Bogacki–Shampine method. Third-order Strong Stability Preserving Runge-Kutta (SSPRK3) Classic fourth-order method The ""original"" Runge–Kutta method. 3/8-rule fourth-order method This method doesn't have as much notoriety as the ""classic"" method, but is just as classic because it was proposed in the same paper (Kutta, 1901). Ralston's fourth-order method This fourth order method has minimum truncation er" https://en.wikipedia.org/wiki/Jeremy%20Burroughes,"Jeremy Henley Burroughes (born August 1960) is a British physicist and engineer, known for his contributions to the development of organic electronics through his work on the science of semiconducting polymers and molecules and their application. He is the Chief Technology Officer of Cambridge Display Technology, a company specialising in the development of technologies based on polymer light-emitting diodes. Education Burroughes earned his PhD from the University of Cambridge in 1989. His thesis was entitled The physical processes in organic semiconducting polymer devices. Work Early in his career, Burroughes discovered that certain conjugated polymers were capable of emitting light when an electric current passed through them. The discovery of this previously unknown form of electroluminescence led to the foundation of Cambridge Display Technology where Burroughes has been responsible for a number of technology innovations, including the direct printing of full-colour OLED displays. Awards and honours Burroughes was elected a Fellow of the Royal Society (FRS) in 2012. His certificate of election reads:" https://en.wikipedia.org/wiki/Transcellular%20transport,"Transcellular transport involves the transportation of solutes by a cell through a cell. Transcellular transport can occur in three different ways active transport, passive transport, and transcytosis. Active Transport Main article: Active transport Active transport is the process of moving molecules from an area of low concentrations to an area of high concentration. There are two types of active transport, primary active transport and secondary active transport. Primary active transport uses adenosine triphosphate (ATP) to move specific molecules and solutes against its concentration gradient. Examples of molecules that follow this process are potassium K+, sodium Na+, and calcium Ca2+. A place in the human body where this occurs is in the intestines with the uptake of glucose. Secondary active transport is when one solute moves down the electrochemical gradient to produce enough energy to force the transport of another solute from low concentration to high concentration.  An example of where this occurs is in the movement of glucose within the proximal convoluted tubule (PCT). Passive Transport Main article: Passive transport Passive transport is the process of moving molecules from an area of high concentration to an area of low concentration without expelling any energy. There are two types of passive transport, passive diffusion and facilitated diffusion. 
Passive diffusion is the unassisted movement of molecules from high concentration to low concentration across a permeable membrane. One example of passive diffusion is the gas exchange that occurs between the oxygen in the blood and the carbon dioxide present in the lungs. Facilitated diffusion is the movement of polar molecules down the concentration gradient with the assistance of membrane proteins. Since the molecules associated with facilitated diffusion are polar, they are repelled by the hydrophobic sections of permeable membrane, therefore they need to be assisted by the membrane proteins. Both t" https://en.wikipedia.org/wiki/Structural%20induction,"Structural induction is a proof method that is used in mathematical logic (e.g., in the proof of Łoś' theorem), computer science, graph theory, and some other mathematical fields. It is a generalization of mathematical induction over natural numbers and can be further generalized to arbitrary Noetherian induction. Structural recursion is a recursion method bearing the same relationship to structural induction as ordinary recursion bears to ordinary mathematical induction. Structural induction is used to prove that some proposition holds for all of some sort of recursively defined structure, such as formulas, lists, or trees. A well-founded partial order is defined on the structures (""subformula"" for formulas, ""sublist"" for lists, and ""subtree"" for trees). The structural induction proof is a proof that the proposition holds for all the minimal structures and that if it holds for the immediate substructures of a certain structure , then it must hold for also. (Formally speaking, this then satisfies the premises of an axiom of well-founded induction, which asserts that these two conditions are sufficient for the proposition to hold for all .) A structurally recursive function uses the same idea to define a recursive function: ""base cases"" handle each minimal structure and a rule for recursion. Structural recursion is usually proved correct by structural induction; in particularly easy cases, the inductive step is often left out. The length and ++ functions in the example below are structurally recursive. For example, if the structures are lists, one usually introduces the partial order ""<"", in which whenever list is the tail of list . Under this ordering, the empty list is the unique minimal element. A structural induction proof of some proposition then consists of two parts: A proof that is true and a proof that if is true for some list , and if is the tail of list , then must also be true. Eventually, there may exist more than one base case " https://en.wikipedia.org/wiki/Prony%27s%20method,"Prony analysis (Prony's method) was developed by Gaspard Riche de Prony in 1795. However, practical use of the method awaited the digital computer. Similar to the Fourier transform, Prony's method extracts valuable information from a uniformly sampled signal and builds a series of damped complex exponentials or damped sinusoids. This allows the estimation of frequency, amplitude, phase and damping components of a signal. The method Let be a signal consisting of evenly spaced samples. Prony's method fits a function to the observed . 
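The structural induction passage above refers to length and ++ (append) as structurally recursive functions on lists. The sketch below is a Python rendering of that idea; the Cons/None list representation and the names are illustrative choices, not taken from the source. The final assertion is the kind of property one would prove by structural induction on the first list.

```python
from dataclasses import dataclass
from typing import Optional

# A recursively defined list: either the empty list (None) or a Cons cell
# whose tail is itself such a list. "length" and "append" recurse only on
# the tail, i.e. on an immediate substructure, so they are structurally
# recursive in the sense of the passage above.

@dataclass
class Cons:
    head: int
    tail: Optional["Cons"]  # None plays the role of the empty list

def length(xs: Optional[Cons]) -> int:
    if xs is None:                    # base case: the minimal structure
        return 0
    return 1 + length(xs.tail)        # recursive case: immediate substructure

def append(xs: Optional[Cons], ys: Optional[Cons]) -> Optional[Cons]:
    if xs is None:
        return ys
    return Cons(xs.head, append(xs.tail, ys))

# Structural induction would prove length(append(xs, ys)) ==
# length(xs) + length(ys): it holds for xs = None, and if it holds
# for xs.tail then it holds for xs.
a = Cons(1, Cons(2, None))
b = Cons(3, None)
assert length(append(a, b)) == length(a) + length(b) == 3
```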
After some manipulation utilizing Euler's formula, the following result is obtained, which allows more direct computation of terms: \hat{f}(t) = \sum_{i=1}^{M} A_i e^{\sigma_i t} \cos(\omega_i t + \phi_i) = \sum_{i=1}^{2M} B_i e^{\lambda_i t}, where the \lambda_i = \sigma_i \pm j\omega_i are the eigenvalues of the system, the \sigma_i are the damping components, the \omega_i are the angular-frequency components, the \phi_i are the phase components, the A_i are the amplitude components of the series, and j is the imaginary unit (j^2 = -1). Representations Prony's method is essentially a decomposition of a signal with 2M complex exponentials via the following process: Regularly sample \hat{f}(t) so that the n-th of N samples may be written as \hat{f}(\Delta_t n) = \sum_{m=1}^{2M} B_m e^{\lambda_m \Delta_t n}, for n = 0, 1, \dots, N-1. If f(t) happens to consist of damped sinusoids, then there will be M pairs of complex exponentials such that, for each sinusoid i, B_a = \tfrac{1}{2} A_i e^{j\phi_i}, B_b = \tfrac{1}{2} A_i e^{-j\phi_i}, \lambda_a = \sigma_i + j\omega_i, \lambda_b = \sigma_i - j\omega_i, where a and b index the conjugate pair. Because the summation of complex exponentials is the homogeneous solution to a linear difference equation, the following difference equation will exist: \hat{f}(\Delta_t n) = -\sum_{m=1}^{2M} P_m \hat{f}(\Delta_t (n-m)). The key to Prony's Method is that the coefficients P_m in the difference equation are related to the following polynomial: z^{2M} + P_1 z^{2M-1} + \cdots + P_{2M-1} z + P_{2M} = \prod_{m=1}^{2M} \left(z - e^{\lambda_m \Delta_t}\right). These facts lead to the following three steps within Prony's method: 1) Construct and solve the matrix equation for the P_m values, formed by writing the difference equation at each available sample: \sum_{m=1}^{2M} P_m \hat{f}(\Delta_t (n-m)) = -\hat{f}(\Delta_t n), for n = 2M, \dots, N-1. Note that if the number of equations does not match the number of unknowns, a generalized matrix inverse may be needed to find the values P_m. 2) After finding the P_m values, find the roots (numerically if necessary) of the polynomial z^{2M} + P_1 z^{2M-1} + \cdots + P_{2M}. The m-th root of this polynomial will be equal to e^{\lambda_m \Delta_t}. 3) With the e^{\lambda_m \Delta_t} values, the \hat{f}(\Delta_t n) values are part of a system of linear equations that may be used to solve for the B_m values: \hat{f}(\Delta_t n_k) = \sum_{m=1}^{2M} B_m \left(e^{\lambda_m \Delta_t}\right)^{n_k}, for k = 1, \dots, 2M, where 2M unique values n_k are used. It is possible to " https://en.wikipedia.org/wiki/Table%20of%20divisors,"The tables below list all of the divisors of the numbers 1 to 1000. A divisor of an integer n is an integer m, for which n/m is again an integer (which is necessarily also a divisor of n). For example, 3 is a divisor of 21, since 21/3 = 7 (and therefore 7 is also a divisor of 21). If m is a divisor of n then so is −m. The tables below only list positive divisors. Key to the tables d(n) is the number of positive divisors of n, including 1 and n itself σ(n) is the sum of the positive divisors of n, including 1 and n itself s(n) is the sum of the proper divisors of n, including 1, but not n itself; that is, s(n) = σ(n) − n a deficient number is greater than the sum of its proper divisors; that is, s(n) < n a perfect number equals the sum of its proper divisors; that is, s(n) = n an abundant number is less than the sum of its proper divisors; that is, s(n) > n a highly abundant number has a sum of positive divisors greater than any lesser number's sum of positive divisors; that is, σ(n) > σ(m) for every positive integer m < n. Counterintuitively, the first seven highly abundant numbers are not abundant numbers. a prime number has only 1 and itself as divisors; that is, d(n) = 2. Prime numbers are always deficient as s(n)=1. a composite number has more than just 1 and itself as divisors; that is, d(n) > 2 a highly composite number has more divisors than any lesser number; that is, d(n) > d(m) for every positive integer m < n. Counterintuitively, the first two highly composite numbers are not composite numbers. a superior highly composite number has more divisors than any other number scaled relative to some positive power of the number itself; that is, there exists some ε > 0 such that d(n)/n^ε ≥ d(m)/m^ε for every other positive integer m. Superior highly composite numbers are always highly composite numbers. 
a weird number is an abundant number that is not semiperfect; that is, no subset of the proper divisors of n sum to n 1 to 100 101 to 200 201 to 300 301 to 400 401 to 50" https://en.wikipedia.org/wiki/Comparison%20of%20streaming%20media%20software,"This is a comparison of streaming media systems. A more complete list of streaming media systems is also available. General The following tables compare general and technical information for a number of streaming media systems both audio and video. Please see the individual systems' linked articles for further information. Operating system support Container format support Information about what digital container formats are supported. Protocol support Information about which internet protocols are supported for broadcasting streaming media content. Features See also Community radio Comparison of video services Content delivery network Digital television Electronic commerce Internet radio Internet radio device Internet television IPTV List of Internet radio stations List of music streaming services Multicast P2PTV Protection of Broadcasts and Broadcasting Organizations Treaty Push technology Streaming media Ustream Webcast Web television" https://en.wikipedia.org/wiki/Spiral%20plater,"A spiral plater is an instrument used to dispense a liquid sample onto a Petri dish in a spiral pattern. Commonly used as part of a CFU count procedure for the purpose of determining the number of microbes in the sample. In this setting, after spiral plating, the Petri dish is incubated for several hours after which the number of colony forming microbes (CFU) is determined. Spiral platers are also used for research, clinical diagnostics and as a method for covering a Petri dish with bacteria before placing antibiotic discs for AST. Mode of action The spiral plater rotates the dish while simultaneously dispensing the liquid and either linearly moving the dish or the dispensing tip. This creates the common spiral pattern. If all movements are done in constant speed, the spiral created would have a lower concentration on the outside of the plate than on the inside. More advanced spiral platers provide different options for spiral patterns such as constant concentration (by slowing down the spinning and / or the lateral movements) or exponential concentration (by speeding up the spinning and / or the lateral movements). In food and cosmetic testing Spiral plating is used extensively for microbiological testing of food, milk and milk products and cosmetics. It is an approved method by the FDA. The advantage of spiral plating is less plates used versus plating manually because different concentrations are present on each plate. This also makes it harder to count the colonies and requires special techniques and equipment. Stand-alone vs. Add-on Spiral platers are either available as stand-alone instruments that are fed manually with plates and samples or fed automatically using dedicated stackers. Alternatively spiral platers are available as integrated devices as part of larger automated platforms. In this case a larger workflow is often automated, e.g. plating, incubation and counting." https://en.wikipedia.org/wiki/Chip%20art,"Chip art, also known as silicon art, chip graffiti or silicon doodling, refers to microscopic artwork built into integrated circuits, also called chips or ICs. Since ICs are printed by photolithography, not constructed a component at a time, there is no additional cost to include features in otherwise unused space on the chip. 
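The divisor functions named in the key above (d(n), σ(n), and s(n) = σ(n) − n) and the deficient/perfect/abundant classification can be checked directly by brute force. A minimal Python sketch, with illustrative sample values:

```python
# A small sketch of the divisor functions defined in the "Key to the
# tables" above: d(n), sigma(n), s(n) = sigma(n) - n, and the
# deficient / perfect / abundant classification.

def divisors(n: int) -> list[int]:
    return [m for m in range(1, n + 1) if n % m == 0]

def d(n: int) -> int:          # number of positive divisors
    return len(divisors(n))

def sigma(n: int) -> int:      # sum of positive divisors
    return sum(divisors(n))

def s(n: int) -> int:          # sum of proper divisors
    return sigma(n) - n

def classify(n: int) -> str:
    if s(n) < n:
        return "deficient"
    if s(n) == n:
        return "perfect"
    return "abundant"

for n in (6, 12, 21, 28):      # 6 and 28 are perfect, 12 is abundant
    print(n, divisors(n), d(n), sigma(n), s(n), classify(n))
```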
Designers have used this freedom to put all sorts of artwork on the chips themselves, from designers' simple initials to rather complex drawings. Given the small size of chips, these figures cannot be seen without a microscope. Chip graffiti is sometimes called the hardware version of software easter eggs. Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived. A 1984 revision of the US copyright law (the Semiconductor Chip Protection Act of 1984) made all chip masks automatically copyrighted, with exclusive rights to the creator, and similar rules apply in most other countries that manufacture ICs. Since an exact copy is now automatically a copyright violation, the doodles serve no useful purpose. Creating chip art Integrated Circuits are constructed from multiple layers of material, typically silicon, silicon dioxide (glass), and aluminum. The composition and thickness of these layers give them their distinctive color and appearance. These elements created an irresistible palette for IC design and layout engineers. The creative process involved in the design of these chips, a strong sense of pride in their work, and an artistic temperament combined compels people to want to mark their work as their own. It is very common to find initials, or groups of initials on chips. This is the design engineer's way of ""signing"" his or her work. Often this creative artist's instinct extends to the inclusion of small pictures or icons" https://en.wikipedia.org/wiki/Canonical%20form,"In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between ""canonical"" and ""normal"" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero. More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example: Jordan normal form is a canonical form for matrix similarity. The row echelon form is a canonical form, when one considers as equivalent a matrix and its left product by an invertible matrix. In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object. In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which introduce difficulties for testing the equality of two objects resulting on independent computations. 
Therefore, in computer algebra, normal form is a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form. " https://en.wikipedia.org/wiki/Bioactive%20terrarium,"A bioactive terrarium (or vivarium) is a terrarium for housing one or more terrestrial animal species that includes live plants and populations of small invertebrates and microorganisms to consume and break down the waste products of the primary species. In a functional bioactive terrarium, the waste products will be broken down by these detritivores, reducing or eliminating the need for cage cleaning. Bioactive vivariums are used by zoos and hobbyists to house reptiles and amphibians in an aesthetically pleasing and enriched environment. Enclosure Any terrarium can be made bioactive by addition of the appropriate substrate, plants, and detritivores. Bioactive enclosures are often maintained as display terraria constructed of PVC, wood, glass and/or acrylic. Bioactive enclosures in laboratory ""rack"" style caging are uncommon. Cleanup crew Waste products of the primary species are consumed by a variety of detritivores, referred to as the ""cleanup crew"" by hobbyists. These can include woodlice, springtails, earthworms, millipedes, and various beetles, with different species being preferred in different habitats - the cleanup crew for a tropical rainforest bioactive terrarium may rely primarily on springtails, isopods, and earthworms, while a desert habitat might use beetles. If the primary species is insectivorous, they may consume the cleanup crew, and thus the cleanup crew must have sufficient retreats to avoid being completely depopulated. Additionally, bioactive terraria typically have a flourishing population of bacteria and other microorganisms which break down the wastes of the cleanup crew and primary species. Fungi may occur as part of the terrarium cycle and will be consumed by the cleanup crew. Substrate Bioactive enclosures require some form of substrate to grow plants and to provide habitat for the cleanup crew. The choice of substrate is typically determined by the habitat of the primary species (e.g. jungle vs desert), and created by mixing a v" https://en.wikipedia.org/wiki/Undersampling,"In signal processing, undersampling or bandpass sampling is a technique where one samples a bandpass-filtered signal at a sample rate below its Nyquist rate (twice the upper cutoff frequency), but is still able to reconstruct the signal. When one undersamples a bandpass signal, the samples are indistinguishable from the samples of a low-frequency alias of the high-frequency signal. Such sampling is also known as bandpass sampling, harmonic sampling, IF sampling, and direct IF-to-digital conversion. Description The Fourier transforms of real-valued functions are symmetrical around the 0 Hz axis. After sampling, only a periodic summation of the Fourier transform (called discrete-time Fourier transform) is still available. The individual frequency-shifted copies of the original transform are called aliases. The frequency offset between adjacent aliases is the sampling-rate, denoted by fs. When the aliases are mutually exclusive (spectrally), the original transform and the original continuous function, or a frequency-shifted version of it (if desired), can be recovered from the samples. 
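The undersampling description above requires that the shifted aliases of the band (A, A+B) and its mirror image do not overlap. A small Python sketch of alias-free rate selection, assuming the standard bandpass-sampling condition 2(A+B)/n ≤ fs ≤ 2A/(n−1) for integers n ≥ 1 (the excerpt's own statement of the general criteria is cut off); the band edges used are illustrative:

```python
# The undersampling passage above describes sampling a band (A, A+B) so
# that spectral aliases do not overlap. This sketch uses the standard
# bandpass-sampling condition (an assumption; the excerpt's general
# criteria are truncated): for some integer n >= 1,
#     2*(A+B)/n  <=  fs  <=  2*A/(n-1)     (right side unbounded for n = 1)
# The band edges below are illustrative values, not from the source.

def valid_rate_ranges(A: float, B: float):
    """Yield (fs_min, fs_max) ranges of alias-free sample rates."""
    fH = A + B
    n = 1
    while True:
        fs_min = 2 * fH / n
        fs_max = float("inf") if n == 1 else 2 * A / (n - 1)
        if fs_min > fs_max:      # no further non-empty ranges
            break
        yield fs_min, fs_max
        n += 1

for lo, hi in valid_rate_ranges(A=100e6, B=5e6):
    print(f"{lo/1e6:8.2f} MHz  to  {hi/1e6:8.2f} MHz")
```

The lowest alias-free rate produced this way approaches 2B, consistent with the baseband Nyquist criterion fs > 2B quoted in the passage.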
The first and third graphs of Figure 1 depict a baseband spectrum before and after being sampled at a rate that completely separates the aliases. The second graph of Figure 1 depicts the frequency profile of a bandpass function occupying the band (A, A+B) (shaded blue) and its mirror image (shaded beige). The condition for a non-destructive sample rate is that the aliases of both bands do not overlap when shifted by all integer multiples of fs. The fourth graph depicts the spectral result of sampling at the same rate as the baseband function. The rate was chosen by finding the lowest rate that is an integer sub-multiple of A and also satisfies the baseband Nyquist criterion: fs > 2B.  Consequently, the bandpass function has effectively been converted to baseband. All the other rates that avoid overlap are given by these more general criteria, where A and A+B are replaced" https://en.wikipedia.org/wiki/Mean-field%20theory,"In physics and probability theory, Mean-field theory (MFT) or Self-consistent field theory studies the behavior of high-dimensional random (stochastic) models by studying a simpler model that approximates the original by averaging over degrees of freedom (the number of values in the final calculation of a statistic that are free to vary). Such models consider many individual components that interact with each other. The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field. This reduces any many-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a lower computational cost. MFT has since been applied to a wide range of fields outside of physics, including statistical inference, graphical models, neuroscience, artificial intelligence, epidemic models, queueing theory, computer-network performance and game theory, as in the quantal response equilibrium. Origins The idea first appeared in physics (statistical mechanics) in the work of Pierre Curie and Pierre Weiss to describe phase transitions. MFT has been used in the Bragg–Williams approximation, models on Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory. Systems with many (sometimes infinite) degrees of freedom are generally hard to solve exactly or compute in closed, analytic form, except for some simple cases (e.g. certain Gaussian random-field theories, the 1D Ising model). Often combinatorial problems arise that make things like computing the partition function of a system difficult. MFT is an approximation method that often makes the original solvable and open to calculation, and in some cases MFT may give very accurate approximations. In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the fi" https://en.wikipedia.org/wiki/Counterexample,"A counterexample is any exception to a generalization. In logic a counterexample disproves the generalization, and does so rigorously in the fields of mathematics and philosophy. For example, the fact that ""student John Smith is not lazy"" is a counterexample to the generalization ""students are lazy"", and both a counterexample to, and disproof of, the universal quantification ""all students are lazy."" In mathematics, the term ""counterexample"" is also used (by a slight abuse) to refer to examples which illustrate the necessity of the full hypothesis of a theorem. 
This is most often done by considering a case where a part of the hypothesis is not satisfied and the conclusion of the theorem does not hold. In mathematics In mathematics, counterexamples are often used to prove the boundaries of possible theorems. By using counterexamples to show that certain conjectures are false, mathematical researchers can then avoid going down blind alleys and learn to modify conjectures to produce provable theorems. It is sometimes said that mathematical development consists primarily in finding (and proving) theorems and counterexamples. Rectangle example Suppose that a mathematician is studying geometry and shapes, and she wishes to prove certain theorems about them. She conjectures that ""All rectangles are squares"", and she is interested in knowing whether this statement is true or false. In this case, she can either attempt to prove the truth of the statement using deductive reasoning, or she can attempt to find a counterexample of the statement if she suspects it to be false. In the latter case, a counterexample would be a rectangle that is not a square, such as a rectangle with two sides of length 5 and two sides of length 7. However, despite having found rectangles that were not squares, all the rectangles she did find had four sides. She then makes the new conjecture ""All rectangles have four sides"". This is logically weaker than her original conjecture, since every squa" https://en.wikipedia.org/wiki/Popularity,"In sociology, popularity is how much a person, idea, place, item or other concept is either liked or accorded status by other people. Liking can be due to reciprocal liking, interpersonal attraction, and similar factors. Social status can be due to dominance, superiority, and similar factors. For example, a kind person may be considered likable and therefore more popular than another person, and a wealthy person may be considered superior and therefore more popular than another person. There are two primary types of interpersonal popularity: perceived and sociometric. Perceived popularity is measured by asking people who the most popular or socially important people in their social group are. Sociometric popularity is measured by objectively measuring the number of connections a person has to others in the group. A person can have high perceived popularity without having high sociometric popularity, and vice versa. According to psychologist Tessa Lansu at the Radboud University Nijmegen, ""Popularity [has] to do with being the middle point of a group and having influence on it."" Introduction The term popularity is borrowed from the Latin term popularis, which originally meant ""common."" The current definition of the word popular, the ""fact or condition of being well liked by the people"", was first seen in 1601. While popularity is a trait often ascribed to an individual, it is an inherently social phenomenon and thus can only be understood in the context of groups of people. Popularity is a collective perception, and individuals report the consensus of a group's feelings towards an individual or object when rating popularity. It takes a group of people to like something, so the more that people advocate for something or claim that someone is best liked, the more attention it will get, and the more popular it will be deemed. 
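The counterexample discussion above (the rectangle conjectures) can be phrased as a small search: exhibiting a single rectangle that is not a square refutes the universal claim. A minimal Python sketch with an illustrative search space, not taken from the source:

```python
# A minimal computational rendering of the counterexample discussion
# above: to disprove "all rectangles are squares", it suffices to exhibit
# one rectangle that is not a square. The search space (integer side
# lengths up to 10) is an illustrative choice.

from itertools import combinations_with_replacement

def is_square(width: int, height: int) -> bool:
    return width == height

counterexamples = [(w, h)
                   for w, h in combinations_with_replacement(range(1, 11), 2)
                   if not is_square(w, h)]

# One counterexample is enough to refute the universal claim.
print("claim 'all rectangles are squares' holds:", len(counterexamples) == 0)
print("a counterexample:", counterexamples[0])   # e.g. a 1 x 2 rectangle
# By contrast, every rectangle has four sides by definition, so the weaker
# conjecture "all rectangles have four sides" admits no counterexample here.
```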
Notwithstanding the above, popularity as a concept can be applied, assigned, or directed towards objects such as songs, movies, websites, a" https://en.wikipedia.org/wiki/Mutualism%20Parasitism%20Continuum,"The hypothesis or paradigm of Mutualism Parasitism Continuum postulates that compatible host-symbiont associations can occupy a broad continuum of interactions with different fitness outcomes for each member. At one end of the continuum lies obligate mutualism where both host and symbiont benefit from the interaction and are dependent on it for survival. At the other end of the continuum highly parasitic interactions can occur, where one member gains a fitness benefit at the expense of the others survival. Between these extremes many different types of interaction are possible. The degree of change between mutualism or parasitism varies depending on the availability of resources, where there is environmental stress generated by few resources, symbiotic relationships are formed while in environments where there is an excess of resources, biological interactions turn to competition and parasitism. Classically the transmission mode of the symbiont can also be important in predicting where on the mutualism-parasitism-continuum an interaction will sit. Symbionts that are vertically transmitted (inherited symbionts) frequently occupy mutualism space on the continuum, this is due to the aligned reproductive interests between host and symbiont that are generated under vertical transmission. In some systems increases in the relative contribution of horizontal transmission can drive selection for parasitism. Studies of this hypothesis have focused on host-symbiont models of plants and fungi, and also of animals and microbes. See also Red King Hypothesis Red Queen Hypothesis Black Queen Hypothesis Biological interaction" https://en.wikipedia.org/wiki/List%20of%20inequalities,"This article lists Wikipedia articles about named mathematical inequalities. 
Inequalities in pure mathematics Analysis Agmon's inequality Askey–Gasper inequality Babenko–Beckner inequality Bernoulli's inequality Bernstein's inequality (mathematical analysis) Bessel's inequality Bihari–LaSalle inequality Bohnenblust–Hille inequality Borell–Brascamp–Lieb inequality Brezis–Gallouet inequality Carleman's inequality Chebyshev–Markov–Stieltjes inequalities Chebyshev's sum inequality Clarkson's inequalities Eilenberg's inequality Fekete–Szegő inequality Fenchel's inequality Friedrichs's inequality Gagliardo–Nirenberg interpolation inequality Gårding's inequality Grothendieck inequality Grunsky's inequalities Hanner's inequalities Hardy's inequality Hardy–Littlewood inequality Hardy–Littlewood–Sobolev inequality Harnack's inequality Hausdorff–Young inequality Hermite–Hadamard inequality Hilbert's inequality Hölder's inequality Jackson's inequality Jensen's inequality Khabibullin's conjecture on integral inequalities Kantorovich inequality Karamata's inequality Korn's inequality Ladyzhenskaya's inequality Landau–Kolmogorov inequality Lebedev–Milin inequality Lieb–Thirring inequality Littlewood's 4/3 inequality Markov brothers' inequality Mashreghi–Ransford inequality Max–min inequality Minkowski's inequality Poincaré inequality Popoviciu's inequality Prékopa–Leindler inequality Rayleigh–Faber–Krahn inequality Remez inequality Riesz rearrangement inequality Schur test Shapiro inequality Sobolev inequality Steffensen's inequality Szegő inequality Three spheres inequality Trace inequalities Trudinger's theorem Turán's inequalities Von Neumann's inequality Wirtinger's inequality for functions Young's convolution inequality Young's inequality for products Inequalities relating to means Hardy–Littlewood maximal inequality Inequality of arithmetic and geometric means Ky Fan inequality Levinson's inequality Mac" https://en.wikipedia.org/wiki/Multimedia%20over%20Coax%20Alliance,"The Multimedia over Coax Alliance (MoCA) is an international standards consortium that publishes specifications for networking over coaxial cable. The technology was originally developed to distribute IP television in homes using existing cabling, but is now used as a general-purpose Ethernet link where it is inconvenient or undesirable to replace existing coaxial cable with optical fiber or twisted pair cabling. MoCA 1.0 was approved in 2006, MoCA 1.1 in April 2010, MoCA 2.0 in June 2010, and MoCA 2.5 in April 2016. The most recently released version of the standard, MoCA 3.0, supports speeds of up to . Membership The Alliance currently has 45 members including pay TV operators, OEMs, CE manufacturers and IC vendors. MoCA's board of directors consists of Arris, Comcast, Cox Communications, DirecTV, Echostar, Intel, InCoax, MaxLinear and Verizon. Technology Within the scope of the Internet protocol suite, MoCA is a protocol that provides the link layer. In the 7-layer OSI model, it provides definitions within the data link layer (layer 2) and the physical layer (layer 1). DLNA approved of MoCA as a layer 2 protocol. A MoCA network can contain up to 16 nodes for MoCA 1.1 and higher, with a maximum of 8 for MoCA 1.0. The network provides a shared-medium, half-duplex link between all nodes using time-division multiplexing; within each timeslot, any pair of nodes communicates directly with each other using the highest mutually-supported version of the standard. Versions MoCA 1.0 The first version of the standard, MoCA 1.0, was ratified in 2006 and supports transmission speeds of up to 135 Mb/s. 
MoCA 1.1 MoCA 1.1 provides 175 Mbit/s net throughputs (275 Mbit/s PHY rate) and operates in the 500 to 1500 MHz frequency range. MoCA 2.0 MoCA 2.0 offers actual throughputs (MAC rate) up to 1 Gbit/s. Operating frequency range is 500 to 1650 MHz. Packet error rate is 1 packet error in 100 million. MoCA 2.0 also offers lower power modes of sleep and standby and is backw" https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Borwein%20constant,"The Erdős–Borwein constant is the sum of the reciprocals of the Mersenne numbers. It is named after Paul Erdős and Peter Borwein. By definition it is: Equivalent forms It can be proven that the following forms all sum to the same constant: where σ0(n) = d(n) is the divisor function, a multiplicative function that equals the number of positive divisors of the number n. To prove the equivalence of these sums, note that they all take the form of Lambert series and can thus be resummed as such. Irrationality Erdős in 1948 showed that the constant E is an irrational number. Later, Borwein provided an alternative proof. Despite its irrationality, the binary representation of the Erdős–Borwein constant may be calculated efficiently. Applications The Erdős–Borwein constant comes up in the average case analysis of the heapsort algorithm, where it controls the constant factor in the running time for converting an unsorted array of items into a heap." https://en.wikipedia.org/wiki/Autonomous%20decentralized%20system,"An autonomous decentralized system (or ADS) is a decentralized system composed of modules or components that are designed to operate independently but are capable of interacting with each other to meet the overall goal of the system. This design paradigm enables the system to continue to function in the event of component failures. It also enables maintenance and repair to be carried out while the system remains operational. Autonomous decentralized systems have a number of applications including industrial production lines, railway signalling and robotics. The ADS has been recently expanded from control applications to service application and embedded systems, thus autonomous decentralized service systems and autonomous decentralized device systems. History Autonomous decentralized systems were first proposed in 1977. ADS received significant attention as such systems have been deployed in Japanese railway systems for many years safely with over 7 billion trips, proving the value of this concept. Japan railway with ADS is considered as a smart train as it also learns. To recognizing this outstanding contribution, Dr. Kinji Mori has received numerous awards including 2013 IEEE Life Fellow, 2012 Distinguished Service Award, Tokyo Metropolitan Government, 2012 Distinguished Specialist among 1000 in the world, Chinese Government, 2008 IEICE Fellow, 1995 IEEE Fellow 1994 Research and Development Award of Excellence Achievers, Science and Technology Agency, 1994 Ichimura Industrial Prize, 1992 Technology Achievement Award, Society of Instrument and Control Engineers, 1988 National Patent Award, Science and Technology Agency, and 1988 Mainichi Technology Prize of Excellence. Dr. Mori donated the cash from Ichimura Industrial Price to IEEE to fund the IEEE Kanai Award. Since 1977, ADS has been a subject of research by many researchers in the world including US, Japan, EU particularly Germany, and China. 
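The Erdős–Borwein passage above defines the constant as the sum of the reciprocals of the Mersenne numbers and notes an equivalent form in terms of the divisor function σ0(n) = d(n), via Lambert series resummation. A short Python sketch comparing partial sums of the two forms; the cutoff is an arbitrary illustrative choice:

```python
# The Erdos-Borwein passage above defines the constant as the sum of the
# reciprocals of the Mersenne numbers 2^n - 1, and notes an equivalent
# Lambert-series form involving the divisor function sigma_0(n) = d(n).
# This sketch compares partial sums of the two forms numerically.

def d(n: int) -> int:
    return sum(1 for m in range(1, n + 1) if n % m == 0)

N = 60  # number of terms; an arbitrary illustrative cutoff

mersenne_form = sum(1 / (2**n - 1) for n in range(1, N + 1))
lambert_form = sum(d(n) / 2**n for n in range(1, N + 1))

print(mersenne_form)   # ~1.6066951...
print(lambert_form)    # agrees to many decimal places
```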
ADS architecture An ADS is a decoupled architecture where each " https://en.wikipedia.org/wiki/Systems%20management,"Systems management refers to enterprise-wide administration of distributed systems including (and commonly in practice) computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. The application performance management (APM) technologies are now a subset of Systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation and predictive analysis which is now all part of APM. Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used: For a small business startup with ten computers, automated centralized processes may take more time to learn how to use and implement than just doing the management work manually on each computer. A very large business with thousands of similar employee computers may clearly be able to save time and money, by having IT staff learn to do systems management automation. A small branch office of a large corporation may have access to a central IT staff, with the experience to set up automated management of the systems in the branch office, without need for local staff in the branch office to do the work. Systems management may involve one or more of the following tasks: Hardware inventories. Server availability monitoring and metrics. Software inventory and installation. Anti-virus and anti-malware. User's activities monitoring. Capacity monitoring. Security management. Storage management. Network capacity and utilization monitoring. Anti-manipulation management Functions Functional groups are provided according to International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common management information protocol (X.700) standard. This framework is also known as Fault, Configuration, Accounting, Performance, Security (FCAPS). Fault management Troubleshooting, error logging an" https://en.wikipedia.org/wiki/Turbulence,"In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to a laminar flow, which occurs when a fluid flows in parallel layers, with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices appear of many sizes which interact with each other, consequently drag due to friction effects increases. This increases the energy needed to pump fluid through a pipe. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Richard Feynman described turbulence as the most important unsolved problem in classical physics. 
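The turbulence passage above notes that the onset of turbulence can be predicted by the dimensionless Reynolds number. A minimal Python sketch using the standard definition Re = uL/ν; the fluid properties and the pipe-flow transition value quoted in the comment are illustrative assumptions, not taken from the excerpt:

```python
# The turbulence passage above notes that the onset of turbulence can be
# predicted by the dimensionless Reynolds number, the ratio of inertial
# effects to viscous damping. Standard definition assumed: Re = u * L / nu.

def reynolds_number(velocity_m_s: float, length_m: float,
                    kinematic_viscosity_m2_s: float) -> float:
    return velocity_m_s * length_m / kinematic_viscosity_m2_s

water_nu = 1.0e-6   # m^2/s, approximate kinematic viscosity of water near 20 C
re = reynolds_number(velocity_m_s=1.0, length_m=0.05,
                     kinematic_viscosity_m2_s=water_nu)
print(f"Re = {re:.0f}")   # ~50,000: well above the ~2300 often quoted for
                          # the laminar-turbulent transition in pipe flow
```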
The turbulence intensity affects many fields, for example, fish ecology, air pollution, precipitation, and climate change. Examples of turbulence Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale. Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient s" https://en.wikipedia.org/wiki/IEC%2061108,"IEC 61108 is a collection of IEC standards for ""Maritime navigation and radiocommunication equipment and systems - Global navigation satellite systems (GNSS)"". The 61108 standards are developed in Working Group 4 (WG 4A) of Technical Committee 80 (TC80) of the IEC. Sections of IEC 61108 Standard IEC 61108 is divided into four parts: Part 1: Global positioning system (GPS) - Receiver equipment - Performance standards, methods of testing and required test results Part 2: Global navigation satellite system (GLONASS) - Receiver equipment - Performance standards, methods of testing and required test results Part 3: Galileo receiver equipment - Performance requirements, methods of testing and required test results Part 4: Shipborne DGPS and DGLONASS maritime radio beacon receiver equipment - Performance requirements, methods of testing and required test results History On 1 December 2000, the International Maritime Organization (IMO) adopted three resolutions on the characteristics of shipborne GNSS receivers; a fourth, covering Galileo, followed in 2006. IMO Resolutions The resolutions setting performance standards for shipborne GNSS receivers are: IMO RESOLUTION MSC.112(73) GLOBAL POSITIONING SYSTEM (GPS) RECEIVER EQUIPMENT IMO RESOLUTION MSC.113(73) GLONASS RECEIVER EQUIPMENT IMO RESOLUTION MSC.114(73) DGPS AND DGLONASS MARITIME RADIO BEACON RECEIVER EQUIPMENT IMO RESOLUTION MSC.233(82) GALILEO RECEIVER EQUIPMENT (adopted on 5 December 2006)" https://en.wikipedia.org/wiki/Tomahawk%20%28geometry%29,"The tomahawk is a tool in geometry for angle trisection, the problem of splitting an angle into three equal parts. The boundaries of its shape include a semicircle and two line segments, arranged in a way that resembles a tomahawk, a Native American axe. The same tool has also been called the shoemaker's knife, but that name is more commonly used in geometry to refer to a different shape, the arbelos (a curvilinear triangle bounded by three mutually tangent semicircles). Description The basic shape of a tomahawk consists of a semicircle (the ""blade"" of the tomahawk), with a line segment the length of the radius extending along the same line as the diameter of the semicircle (the tip of which is the ""spike"" of the tomahawk), and with another line segment of arbitrary length (the ""handle"" of the tomahawk) perpendicular to the diameter. In order to make it into a physical tool, its handle and spike may be thickened, as long as the line segment along the handle continues to be part of the boundary of the shape. Unlike a related trisection using a carpenter's square, the other side of the thickened handle does not need to be made parallel to this line segment. 
In some sources a full circle rather than a semicircle is used, or the tomahawk is also thickened along the diameter of its semicircle, but these modifications make no difference to the action of the tomahawk as a trisector. Trisection To use the tomahawk to trisect an angle, it is placed with its handle line touching the apex of the angle, with the blade inside the angle, tangent to one of the two rays forming the angle, and with the spike touching the other ray of the angle. One of the two trisecting lines then lies on the handle segment, and the other passes through the center point of the semicircle. If the angle to be trisected is too sharp relative to the length of the tomahawk's handle, it may not be possible to fit the tomahawk into the angle in this way, but this difficulty may be worked around by re" https://en.wikipedia.org/wiki/Steered-response%20power,"Steered-response power (SRP) is a family of acoustic source localization algorithms that can be interpreted as a beamforming-based approach that searches for the candidate position or direction that maximizes the output of a steered delay-and-sum beamformer. Steered-response power with phase transform (SRP-PHAT) is a variant using a ""phase transform"" to make it more robust in adverse acoustic environments. Algorithm Steered-response power Consider a system of microphones, where each microphone is denoted by a subindex . The discrete-time output signal from a microphone is . The (unweighted) steered-response power (SRP) at a spatial point can be expressed as where denotes the set of integer numbers and would be the time-lag due to the propagation from a source located at to the -th microphone. The (weighted) SRP can be rewritten as where denotes complex conjugation, represents the discrete-time Fourier transform of and is a weighting function in the frequency domain (later discussed). The term is the discrete time-difference of arrival (TDOA) of a signal emitted at position to microphones and , given by where is the sampling frequency of the system, is the sound propagation speed, is the position of the -th microphone, is the 2-norm and denotes the rounding operator. Generalized cross-correlation The above SRP objective function can be expressed as a sum of generalized cross-correlations (GCCs) for the different microphone pairs at the time-lag corresponding to their TDOA where the GCC for a microphone pair is defined as The phase transform (PHAT) is an effective GCC weighting for time delay estimation in reverberant environments, that forces the GCC to consider only the phase information of the involved signals: Estimation of source location The SRP-PHAT algorithm consists in a grid-search procedure that evaluates the objective function on a grid of candidate source locations to estimate the spatial location of the sound source," https://en.wikipedia.org/wiki/List%20of%20Euclidean%20uniform%20tilings,"This table shows the 11 convex uniform tilings (regular and semiregular) of the Euclidean plane, and their dual tilings. There are three regular and eight semiregular tilings in the plane. The semiregular tilings form new tilings from their duals, each made from one type of irregular face. John Conway called these uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra. Uniform tilings are listed by their vertex configuration, the sequence of faces that exist on each vertex. For example 4.8.8 means one square and two octagons on a vertex. These 11 uniform tilings have 32 different uniform colorings. 
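Returning to the steered-response power entry above: the grid search it describes is easy to sketch. The following NumPy fragment is a minimal illustration only; the function name, the geometry handling, and the candidate grid are assumptions for the example, not notation from the article.

```python
import numpy as np

def srp_phat(signals, mic_positions, candidates, fs, c=343.0):
    """signals: (M, L) microphone signals; mic_positions: (M, 3) and
    candidates: (K, 3) coordinates in metres. Returns the best candidate."""
    M, L = signals.shape
    nfft = 2 * L
    X = np.fft.rfft(signals, n=nfft)               # per-channel spectra
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    scores = np.zeros(len(candidates))
    for k, q in enumerate(candidates):
        dists = np.linalg.norm(mic_positions - q, axis=1)
        score = 0.0
        for i in range(M):
            for j in range(i + 1, M):
                tau = (dists[i] - dists[j]) / c     # candidate TDOA (seconds)
                cross = X[i] * np.conj(X[j])
                cross /= np.abs(cross) + 1e-12      # PHAT: keep phase only
                # GCC-PHAT evaluated at the lag implied by this candidate
                score += np.real(np.sum(cross * np.exp(2j * np.pi * freqs * tau)))
        scores[k] = score
    return candidates[int(np.argmax(scores))], scores
```

A denser grid, or coarse-to-fine refinement, is the usual way to trade localization accuracy against the O(K·M²) cost of this search.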
A uniform coloring allows identical sided polygons at a vertex to be colored differently, while still maintaining vertex-uniformity and transformational congruence between vertices. (Note: Some of the tiling images shown below are not color-uniform) In addition to the 11 convex uniform tilings, there are also 14 known nonconvex tilings, using star polygons, and reverse orientation vertex configurations. A further 28 uniform tilings are known using apeirogons. If zigzags are also allowed, there are 23 more known uniform tilings and 10 more known families depending on a parameter: in 8 cases the parameter is continuous, and in the other 2 it is discrete. The set is not known to be complete. Laves tilings In the 1987 book, Tilings and Patterns, Branko Grünbaum calls the vertex-uniform tilings Archimedean, in parallel to the Archimedean solids. Their dual tilings are called Laves tilings in honor of crystallographer Fritz Laves. They're also called Shubnikov–Laves tilings after Aleksei Shubnikov. John Conway called the uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra. The Laves tilings have vertices at the centers of the regular polygons, and edges connecting centers of regular polygons that share an edge. The tiles of the Laves tilings are called planigons. This includes the 3 regular tiles (triangle, square and hexagon) and" https://en.wikipedia.org/wiki/Thermoduric%20bacterium,"Thermoduric bacteria are bacteria which can survive, to varying extents, the pasteurisation process. Species of bacteria which are thermoduric include Bacillus, Clostridium and Enterococci." https://en.wikipedia.org/wiki/Virtual%20Interface%20Architecture,"The Virtual Interface Architecture (VIA) is an abstract model of a user-level zero-copy network, and is the basis for InfiniBand, iWARP and RoCE. Created by Microsoft, Intel, and Compaq, the original VIA sought to standardize the interface for high-performance network technologies known as System Area Networks (SANs; not to be confused with Storage Area Networks). Networks are a shared resource. With traditional network APIs such as the Berkeley socket API, the kernel is involved in every network communication. This presents a tremendous performance bottleneck when latency is an issue. One of the classic developments in computing systems is virtual memory, a combination of hardware and software that creates the illusion of private memory for each process. In the same school of thought, a virtual network interface protected across process boundaries could be accessed at the user level. With this technology, the ""consumer"" manages its own buffers and communication schedule while the ""provider"" handles the protection. Thus, the network interface card (NIC) provides a ""private network"" for a process, and a process is usually allowed to have multiple such networks. The virtual interface (VI) of VIA refers to this network and is merely the destination of the user's communication requests. Communication takes place over a pair of VIs, one on each of the processing nodes involved in the transmission. In ""kernel-bypass"" communication, the user manages its own buffers. Another facet of traditional networks is that arriving data is placed in a pre-allocated buffer and then copied to the user-specified final destination. Copying large messages can take a long time, and so eliminating this step is beneficial. 
Another classic development in computing systems is direct memory access (DMA), in which a device can access main memory directly while the CPU is free to perform other tasks. In a network with ""remote direct memory access"" (RDMA), the sending NIC uses DMA to read data" https://en.wikipedia.org/wiki/Prehensility,"Prehensility is the quality of an appendage or organ that has adapted for grasping or holding. The word is derived from the Latin term prehendere, meaning ""to grasp"". The ability to grasp is likely derived from a number of different origins. The most common are tree-climbing and the need to manipulate food. Examples Appendages that can become prehensile include: Uses Prehensility affords animals a great natural advantage in manipulating their environment for feeding, climbing, digging, and defense. It enables many animals, such as primates, to use tools to complete tasks that would otherwise be impossible without highly specialized anatomy. For example, chimpanzees have the ability to use sticks to obtain termites and grubs in a manner similar to human fishing. However, not all prehensile organs are applied to tool use; the giraffe tongue, for instance, is instead used in feeding and self-cleaning." https://en.wikipedia.org/wiki/Vyatta,"Vyatta is a software-based virtual router, virtual firewall and VPN product for Internet Protocol networks (IPv4 and IPv6). A free download of Vyatta has been available since March 2006. The system is a specialized Debian-based Linux distribution with networking applications such as Quagga, OpenVPN, and many others. A standardized management console, similar to Juniper JUNOS or Cisco IOS, in addition to a web-based GUI and traditional Linux system commands, provides configuration of the system and applications. In recent versions of Vyatta, web-based management interface is supplied only in the subscription edition. However, all functionality is available through KVM, serial console or SSH/telnet protocols. The software runs on standard x86-64 servers. Vyatta is also delivered as a virtual machine file and can provide (, , VPN) functionality for Xen, VMware, KVM, Rackspace, SoftLayer, and Amazon EC2 virtual and cloud computing environments. As of October, 2012, Vyatta has also been available through Amazon Marketplace and can be purchased as a service to provide VPN, cloud bridging and other network functions to users of Amazon's AWS services. Vyatta sells a subscription edition that includes all the functionality of the open source version as well as a graphical user interface, access to Vyatta's RESTful API's, Serial Support, TACACS+, Config Sync, System Image Cloning, software updates, 24x7 phone and email technical support, and training. Certification as a Vyatta Professional is now available. Vyatta also offers professional services and consulting engagements. The Vyatta system is intended as a replacement for Cisco IOS 1800 through ASR 1000 series Integrated Services Routers (ISR) and ASA 5500 security appliances, with a strong emphasis on the cost and flexibility inherent in an open source, Linux-based system running on commodity x86 hardware or in VMware ESXi, Microsoft Hyper-V, Citrix XenServer, Open Source Xen and KVM virtual environments. In 2012, Bro" https://en.wikipedia.org/wiki/Mathematics%20and%20God,"Connections between mathematics and God include the use of mathematics in arguments about the existence of God and about whether belief in God is beneficial. 
Mathematical arguments for God's existence In the 1070s, Anselm of Canterbury, an Italian medieval philosopher and theologian, created an ontological argument which sought to use logic to prove the existence of God. A more elaborate version was given by Gottfried Leibniz in the early eighteenth century. Kurt Gödel created a formalization of Leibniz' version, known as Gödel's ontological proof. A more recent argument was made by Stephen D. Unwin in 2003, who suggested the use of Bayesian probability to estimate the probability of God's existence. Mathematical arguments for belief A common application of decision theory to the belief in God is Pascal's wager, published by Blaise Pascal in his 1669 work Pensées. The application was a defense of Christianity stating that ""If God does not exist, the Atheist loses little by believing in him and gains little by not believing. If God does exist, the Atheist gains eternal life by believing and loses an infinite good by not believing"". The atheist's wager has been proposed as a counterargument to Pascal's Wager. See also Existence of God Further reading Cohen, Daniel J., Equations from God: Pure Mathematics and Victorian Faith, Johns Hopkins University Press, 2007 . Livio, Mario, Is God a Mathematician?, Simon & Schuster, 2011 . Ransford, H. Chris, God and the Mathematics of Infinity: What Irreducible Mathematics Says about Godhood, Columbia University Press, 2017 ." https://en.wikipedia.org/wiki/Geometry%20From%20Africa,"Geometry From Africa: Mathematical and Educational Explorations is a book in ethnomathematics by . It analyzes the mathematics behind geometric designs and patterns from multiple African cultures, and suggests ways of connecting this analysis with the mathematics curriculum. It was published in 1999 by the Mathematical Association of America, in their Classroom Resource Materials book series. Background The book's author, Paulus Gerdes (1952–2014), was a mathematician from the Netherlands who became a professor of mathematics at the Eduardo Mondlane University in Mozambique, rector of Maputo University, and chair of the African Mathematical Union Commission on the History of Mathematics in Africa. He was a prolific author, especially of works on the ethnomathematics of Africa. However, as many of his publications were written in Portuguese, German, and French, or published only in Mozambique, this book makes his work in ethnomathematics more accessible to English-speaking mathematicians. Topics The book is heavily illustrated, and describes geometric patterns in the carvings, textiles, drawings and paintings of multiple African cultures. Although these are primarily decorative rather than mathematical, Gerdes adds his own mathematical analysis of the patterns, and suggests ways of incorporating this analysis into the mathematical curriculum. It is divided into four chapters. The first of these provides an overview of geometric patterns in many African cultures, including examples of textiles, knotwork, architecture, basketry, metalwork, ceramics, petroglyphs, facial tattoos, body painting, and hair styles. The second chapter presents examples of designs in which squares and right triangles can be formed from elements of the patterns, and suggests educational activities connecting these materials to the Pythagorean theorem and to the theory of Latin squares. 
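As a purely illustrative sketch of the kind of calculation the Bayesian approach mentioned above (Unwin's) relies on, the snippet below updates a prior probability through a list of Bayes factors. The prior and the factor values are arbitrary placeholders for the example, not Unwin's published figures.

```python
def posterior(prior, bayes_factors):
    """Update a prior probability by a sequence of Bayes factors (odds form)."""
    odds = prior / (1.0 - prior)
    for bf in bayes_factors:        # each piece of evidence scales the odds
        odds *= bf
    return odds / (1.0 + odds)

print(posterior(0.5, [2.0, 0.5, 1.5]))   # 0.6 with these placeholder factors
```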
For instance, basket-weavers in Mozambique form square knotted buttons out of folded ribbons, and the resul" https://en.wikipedia.org/wiki/Analog%20signal%20processing,"Analog signal processing is a type of signal processing conducted on continuous analog signals by some analog means (as opposed to the discrete digital signal processing where the signal processing is carried out by a digital process). ""Analog"" indicates something that is mathematically represented as a set of continuous values. This differs from ""digital"" which uses a series of discrete quantities to represent signal. Analog values are typically represented as a voltage, electric current, or electric charge around components in the electronic devices. An error or noise affecting such physical quantities will result in a corresponding error in the signals represented by such physical quantities. Examples of analog signal processing include crossover filters in loudspeakers, ""bass"", ""treble"" and ""volume"" controls on stereos, and ""tint"" controls on TVs. Common analog processing elements include capacitors, resistors and inductors (as the passive elements) and transistors or opamps (as the active elements). Tools used in analog signal processing A system's behavior can be mathematically modeled and is represented in the time domain as h(t) and in the frequency domain as H(s), where s is a complex number in the form of s=a+ib, or s=a+jb in electrical engineering terms (electrical engineers use ""j"" instead of ""i"" because current is represented by the variable i). Input signals are usually called x(t) or X(s) and output signals are usually called y(t) or Y(s). Convolution Convolution is the basic concept in signal processing that states an input signal can be combined with the system's function to find the output signal. It is the integral of the product of two waveforms after one has reversed and shifted; the symbol for convolution is *. That is the convolution integral and is used to find the convolution of a signal and a system; typically a = -∞ and b = +∞. Consider two waveforms f and g. By calculating the convolution, we determine how much a reversed functio" https://en.wikipedia.org/wiki/Cepstrum,"In Fourier analysis, the cepstrum (; plural cepstra, adjective cepstral) is the result of computing the inverse Fourier transform (IFT) of the logarithm of the estimated signal spectrum. The method is a tool for investigating periodic structures in frequency spectra. The power cepstrum has applications in the analysis of human speech. The term cepstrum was derived by reversing the first four letters of spectrum. Operations on cepstra are labelled quefrency analysis (or quefrency alanysis), liftering, or cepstral analysis. It may be pronounced in the two ways given, the second having the advantage of avoiding confusion with kepstrum. Origin The concept of the cepstrum was introduced in 1963 by B. P. Bogert, M. J. Healy, and J. W. Tukey. It serves as a tool to investigate periodic structures in frequency spectra. Such effects are related to noticeable echos or reflections in the signal, or to the occurrence of harmonic frequencies (partials, overtones). Mathematically it deals with the problem of deconvolution of signals in the frequency space." 
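The cepstrum entry above describes the computation only in words. A minimal NumPy version of the real cepstrum (inverse FFT of the log magnitude spectrum) is shown below; the synthetic echo example is an assumption added for illustration.

```python
import numpy as np

def real_cepstrum(x):
    spectrum = np.fft.fft(x)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)   # avoid log(0)
    return np.fft.ifft(log_magnitude).real             # quefrency-domain signal

# An echo at a 25-sample delay shows up as a cepstral peak near quefrency 25.
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
x = s.copy()
x[25:] += 0.6 * s[:-25]
c = real_cepstrum(x)
print(int(np.argmax(c[10:100])) + 10)    # expected to print 25
```

Liftering, mentioned in the entry, corresponds to windowing this quefrency-domain signal before transforming back.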
https://en.wikipedia.org/wiki/Institution%20of%20Electronics%20and%20Telecommunication%20Engineers,"The Institution of Electronics and Telecommunication Engineers (IETE) is India's leading recognized professional society devoted to the advancement of science, technology, electronics, telecommunication and information technology. Founded in 1953, it serves more than 70,000 members through more than 60 centres and sub-centres, primarily located in India (3 abroad). The Institution provides leadership in scientific and technical areas of direct importance to national development and the economy. The Association of Indian Universities (AIU) and the Union Public Service Commission (UPSC) have recognized AMIETE and ALCCS (Advanced Level Course in Computer Science). The Government of India has recognized IETE as a Scientific and Industrial Research Organization (SIRO) and has also notified it as an educational institution of national eminence. The IETE focuses on the advancement of electronics and telecommunication technology. The IETE conducts and sponsors technical meetings, conferences, symposia, and exhibitions all over India, publishes technical and research journals, and provides continuing education as well as career advancement opportunities to its members. IETE is today one of the prominent technical institutions providing education to working professionals in India and is expanding rapidly across the country through its more than 60 centres. Since 1953, IETE has expanded its educational activities in the areas of electronics, telecommunications, computer science and information technology. IETE conducts programs by examination, leading to DipIETE (equivalent to a Diploma in Engineering), AMIETE (equivalent to a B Tech), and ALCCS (equivalent to an M Tech). IETE started Dual Degree, Dual Diploma and Integrated programs in December 2011. DipIETE is a three-year, six-semester course, whereas AMIETE is a four-year, eight-semester course. IETE conducts examinations for these courses twice a year, in June and in December. Courses are divided into two sections, Section A and Section B. Courses of IETE are recognized " https://en.wikipedia.org/wiki/Ohm%27s%20law,"Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across the two points. Introducing the constant of proportionality, the resistance, one arrives at the three mathematical equations used to describe this relationship: where I is the current through the conductor, V is the voltage measured across the conductor and R is the resistance of the conductor. More specifically, Ohm's law states that the R in this relation is constant, independent of the current. If the resistance is not constant, the previous equation cannot be called Ohm's law, but it can still be used as a definition of static/DC resistance. Ohm's law is an empirical relation which accurately describes the conductivity of the vast majority of electrically conductive materials over many orders of magnitude of current. However, some materials do not obey Ohm's law; these are called non-ohmic. The law was named after the German physicist Georg Ohm, who, in a treatise published in 1827, described measurements of applied voltage and current through simple electrical circuits containing various lengths of wire. Ohm explained his experimental results by a slightly more complex equation than the modern form above (see below). 
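The three equivalent forms referred to in the Ohm's law lead above are the standard ones, restated here because the displayed equations did not survive extraction:

```latex
I = \frac{V}{R}, \qquad V = IR, \qquad R = \frac{V}{I}
```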
In physics, the term Ohm's law is also used to refer to various generalizations of the law; for example the vector form of the law used in electromagnetics and material science: where J is the current density at a given location in a resistive material, E is the electric field at that location, and σ (sigma) is a material-dependent parameter called the conductivity. This reformulation of Ohm's law is due to Gustav Kirchhoff. History In January 1781, before Georg Ohm's work, Henry Cavendish experimented with Leyden jars and glass tubes of varying diameter and length filled with salt solution. He measured the current by noting how strong a shock he felt as he completed the circuit with his body. Cavendish wrote that the " https://en.wikipedia.org/wiki/Physical%20theories%20modified%20by%20general%20relativity,"This article will use the Einstein summation convention. The theory of general relativity required the adaptation of existing theories of physical, electromagnetic, and quantum effects to account for non-Euclidean geometries. These physical theories modified by general relativity are described below. Classical mechanics and special relativity Classical mechanics and special relativity are lumped together here because special relativity is in many ways intermediate between general relativity and classical mechanics, and shares many attributes with classical mechanics. In the following discussion, the mathematics of general relativity is used heavily. Also, under the principle of minimal coupling, the physical equations of special relativity can be turned into their general relativity counterparts by replacing the Minkowski metric (ηab) with the relevant metric of spacetime (gab) and by replacing any partial derivatives with covariant derivatives. In the discussions that follow, the change of metrics is implied. Inertia Inertial motion is motion free of all forces. In Newtonian mechanics, the force F acting on a particle with mass m is given by Newton's second law, , where the acceleration is given by the second derivative of position r with respect to time t . Zero force means that inertial motion is just motion with zero acceleration: The idea is the same in special relativity. Using Cartesian coordinates, inertial motion is described mathematically as: where is the position coordinate and τ is proper time. (In Newtonian mechanics, τ ≡ t, the coordinate time). In both Newtonian mechanics and special relativity, space and then spacetime are assumed to be flat, and we can construct a global Cartesian coordinate system. In general relativity, these restrictions on the shape of spacetime and on the coordinate system to be used are lost. Therefore, a different definition of inertial motion is required. In relativity, inertial motion occurs along timelike or null " https://en.wikipedia.org/wiki/Unification%20of%20theories%20in%20physics,"Unification of theories about observable fundamental phenomena of nature is one of the primary goals of physics. The two great unifications to date are Isaac Newton’s unification of gravity and astronomy, and James Clerk Maxwell’s unification of electromagnetism; the latter has been further unified with the concept of electroweak interaction. This process of ""unifying"" forces continues today, with the ultimate goal of finding a theory of everything. 
Unification of gravity and astronomy The ""first great unification"" was Isaac Newton's 17th century unification of gravity, which brought together the understandings of the observable phenomena of gravity on Earth with the observable behaviour of celestial bodies in space. Unification of magnetism, electricity, light and related radiation The ancient Chinese observed that certain rocks (lodestone and magnetite) were attracted to one another by an invisible force. This effect was later called magnetism, which was first rigorously studied in the 17th century. But even before the Chinese discovered magnetism, the ancient Greeks knew of other objects such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force—electromagnetism. The ""second great unification"" was James Clerk Maxwell's 19th century unification of electromagnetism. It brought together the understandings of the observable phenomena of magnetism, electricity and light (and more broadly, the spectrum of electromagnetic radiation). This was followed in the 20th century by Albert Einstein's unification of space and time, and of mass and energy. Later, quantum field theory unified quantum mechanics" https://en.wikipedia.org/wiki/Luca%20Turin,"Luca Turin (born 20 November 1953) is a biophysicist and writer with a long-standing interest in bioelectronics, the sense of smell, perfumery, and the fragrance industry. Early life and education Turin was born in Beirut, Lebanon on 20 November 1953 into an Italian-Argentinian family, and raised in France, Italy and Switzerland. His father, Duccio Turin, was a UN diplomat and chief architect of the Palestinian refugee camps, and his mother, Adela Turin (born Mandelli), is an art historian, designer, and award-winning children's author. Turin studied Physiology and Biophysics at University College London and earned his PhD in 1978. He worked at the CNRS from 1982-1992, and served as lecturer in Biophysics at University College London from 1992-2000. Career After leaving the CNRS, Turin first held a visiting research position at the National Institutes of Health in North Carolina before moving back to London, where he became a lecturer in biophysics at University College London. In 2001 Turin was hired as CTO of start-up company Flexitral, based in Chantilly, Virginia, to pursue rational odorant design based on his theories. In April 2010 he described this role in the past tense, and the company's domain name appears to have been surrendered. In 2010, Turin was based at MIT, working on a project to develop an electronic nose using natural receptors, financed by DARPA. In 2014 he moved to the Institute of Theoretical Physics at the University of Ulm where he was a Visiting Professor. He is a Stavros Niarchos Researcher in the neurobiology division at the Biomedical Sciences Research Center Alexander Fleming in Greece. In 2021 he moved to the University of Buckingham, UK as Professor of Physiology in the Medical School. 
Vibration theory of olfaction A major prediction of Turin's vibration theory of olfaction is the isotope effect: that the normal and deuterated versions of a compound should smell different due to unique vibration frequencies, despite having the" https://en.wikipedia.org/wiki/Periodic%20summation,"In mathematics, any integrable function can be made into a periodic function with period P by summing the translations of the function by integer multiples of P. This is called periodic summation: When is alternatively represented as a Fourier series, the Fourier coefficients are equal to the values of the continuous Fourier transform, at intervals of . That identity is a form of the Poisson summation formula. Similarly, a Fourier series whose coefficients are samples of at constant intervals (T) is equivalent to a periodic summation of which is known as a discrete-time Fourier transform. The periodic summation of a Dirac delta function is the Dirac comb. Likewise, the periodic summation of an integrable function is its convolution with the Dirac comb. Quotient space as domain If a periodic function is instead represented using the quotient space domain then one can write: The arguments of are equivalence classes of real numbers that share the same fractional part when divided by . Citations See also Dirac comb Circular convolution Discrete-time Fourier transform Functions and mappings Signal processing" https://en.wikipedia.org/wiki/Universal%20gateway,"A universal gateway is a device that transacts data between two or more data sources using communication protocols specific to each. Sometimes called a universal protocol gateway, this class of product is designed as a computer appliance, and is used to connect data from one automation system to another. Typical applications Typical applications include: M2M Communications – machine to machine communications between machines from different vendors, typically using different communication protocols. This is often a requirement to optimize the performance of a production line, by effectively communicating machine states upstream and downstream of a piece of equipment. Machine idle times can trigger lower power operation. Inventory Levels can be more effectively managed on a per station basis, by knowing the upstream and downstream demands. M2E Communications – machine to enterprise communications is typically managed through database interactions. In this case, EATM technology is typically leveraged for data interoperability. However, many enterprise systems have real-time data interfaces. When real-time interfaces are involved, a universal gateway, with its ability to support many protocols simultaneously becomes the best choice. In all cases, communications can fall over many different transports, RS-232, RS-485, Ethernet, etc. Universal Gateways have the ability to communicate between protocols and over different transports simultaneously. Design Hardware platform – Industrial Computer, Embedded Computer, Computer Appliance Communications software – Software (Drivers) to support one or more Industrial Protocols. Communications is typically polled or change based. Great care is typically taken to leverage communication protocols for the most efficient transactions of data (Optimized message sizes, communications speeds, and data update rates). Typical protocols; Rockwell Automation CIP, Ethernet/IP, Siemens Industrial Ethernet, Modbus TCP. 
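For the periodic summation entry above, whose displayed formulas were lost, the standard definition and the Fourier-series relation it alludes to can be written as follows (using the ordinary-frequency transform convention; restated from standard results rather than recovered from the source):

```latex
s_P(t) = \sum_{n=-\infty}^{\infty} s(t + nP), \qquad
s_P(t) = \sum_{k=-\infty}^{\infty} \frac{1}{P}\,\hat{s}\!\left(\frac{k}{P}\right) e^{i 2\pi k t/P},
```

where \hat{s} is the continuous Fourier transform of s; the second identity is the form of the Poisson summation formula the entry mentions.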
There " https://en.wikipedia.org/wiki/Differentiation%20rules,"This is a summary of differentiation rules, that is, rules for computing the derivative of a function in calculus. Elementary rules of differentiation Unless otherwise stated, all functions are functions of real numbers (R) that return real values; although more generally, the formulae below apply wherever they are well defined — including the case of complex numbers (C). Constant term rule For any value of , where , if is the constant function given by , then . Proof Let and . By the definition of the derivative, This shows that the derivative of any constant function is 0. Intuitive (geometric) explanation The derivative of the function at a point is the slope of the line tangent to the curve at the point. Slope of the constant function is zero, because the tangent line to the constant function is horizontal and it's angle is zero. In other words, the value of the constant function, y, will not change as the value of x increases or decreases. Differentiation is linear For any functions and and any real numbers and , the derivative of the function with respect to is: In Leibniz's notation this is written as: Special cases include: The constant factor rule The sum rule The difference rule The product rule For the functions f and g, the derivative of the function h(x) = f(x) g(x) with respect to x is In Leibniz's notation this is written The chain rule The derivative of the function is In Leibniz's notation, this is written as: often abridged to Focusing on the notion of maps, and the differential being a map , this is written in a more concise way as: The inverse function rule If the function has an inverse function , meaning that and then In Leibniz notation, this is written as Power laws, polynomials, quotients, and reciprocals The polynomial or elementary power rule If , for any real number then When this becomes the special case that if then Combining the power rule with the sum and constant multiple rules permit" https://en.wikipedia.org/wiki/ITIL%20security%20management,"ITIL security management describes the structured fitting of security into an organization. ITIL security management is based on the ISO 27001 standard. ""ISO/IEC 27001:2005 covers all types of organizations (e.g. commercial enterprises, government agencies, not-for profit organizations). ISO/IEC 27001:2005 specifies the requirements for establishing, implementing, operating, monitoring, reviewing, maintaining and improving a documented Information Security Management System within the context of the organization's overall business risks. It specifies requirements for the implementation of security controls customized to the needs of individual organizations or parts thereof. ISO/IEC 27001:2005 is designed to ensure the selection of adequate and proportionate security controls that protect information assets and give confidence to interested parties."" A basic concept of security management is information security. The primary goal of information security is to control access to information. The value of the information is what must be protected. These values include confidentiality, integrity and availability. Inferred aspects are privacy, anonymity and verifiability. The goal of security management comes in two parts: Security requirements defined in service level agreements (SLA) and other external requirements that are specified in underpinning contracts, legislation and possible internal or external imposed policies. 
Basic security that guarantees management continuity. This is necessary to achieve simplified service-level management for information security. SLAs define security requirements, along with legislation (if applicable) and other contracts. These requirements can act as key performance indicators (KPIs) that can be used for process management and for interpreting the results of the security management process. The security management process relates to other ITIL-processes. However, in this particular section the most obvious relations are the " https://en.wikipedia.org/wiki/Index%20of%20logarithm%20articles,"This is a list of logarithm topics, by Wikipedia page. See also the list of exponential topics. Acoustic power Antilogarithm Apparent magnitude Baker's theorem Bel Benford's law Binary logarithm Bode plot Henry Briggs Bygrave slide rule Cologarithm Common logarithm Complex logarithm Discrete logarithm Discrete logarithm records e Representations of e El Gamal discrete log cryptosystem Harmonic series History of logarithms Hyperbolic sector Iterated logarithm Otis King Law of the iterated logarithm Linear form in logarithms Linearithmic List of integrals of logarithmic functions Logarithmic growth Logarithmic timeline Log-likelihood ratio Log-log graph Log-normal distribution Log-periodic antenna Log-Weibull distribution Logarithmic algorithm Logarithmic convolution Logarithmic decrement Logarithmic derivative Logarithmic differential Logarithmic differentiation Logarithmic distribution Logarithmic form Logarithmic graph paper Logarithmic growth Logarithmic identities Logarithmic number system Logarithmic scale Logarithmic spiral Logarithmic timeline Logit LogSumExp Mantissa is a disambiguation page; see common logarithm for the traditional concept of mantissa; see significand for the modern concept used in computing. Matrix logarithm Mel scale Mercator projection Mercator series Moment magnitude scale John Napier Napierian logarithm Natural logarithm Natural logarithm of 2 Neper Offset logarithmic integral pH Pollard's kangaroo algorithm Pollard's rho algorithm for logarithms Polylogarithm Polylogarithmic function Prime number theorem Richter magnitude scale Grégoire de Saint-Vincent Alphonse Antonio de Sarasa Schnorr signature Semi-log graph Significand Slide rule Smearing retransformation Sound intensity level Super-logarithm Table of logarithms Weber-Fechner law Exponentials Logarithm topics" https://en.wikipedia.org/wiki/System%20analysis,"System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing, communication systems and control systems. Characterization of systems A system is characterized by how it responds to input signals. In general, a system has one or more input signals and one or more output signals. Therefore, one natural characterization of systems is by how many inputs and outputs they have: SISO (Single Input, Single Output) SIMO (Single Input, Multiple Outputs) MISO (Multiple Inputs, Single Output) MIMO (Multiple Inputs, Multiple Outputs) It is often useful (or necessary) to break up a system into smaller pieces for analysis. Therefore, we can regard a SIMO system as multiple SISO systems (one for each output), and similarly for a MIMO system. 
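Returning to the differentiation rules summarized a little earlier, whose displayed formulas did not survive extraction: the elementary rules referred to there are the standard ones, restated here for reference rather than recovered from the source:

```latex
(c)' = 0, \qquad (af + bg)' = af' + bg', \qquad (fg)' = f'g + fg', \qquad
(f \circ g)'(x) = f'(g(x))\,g'(x), \qquad
\left(f^{-1}\right)'(y) = \frac{1}{f'\!\left(f^{-1}(y)\right)}, \qquad (x^{n})' = n x^{n-1}
```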
By far, the greatest amount of work in system analysis has been with SISO systems, although many parts inside SISO systems have multiple inputs (such as adders). Signals can be continuous or discrete in time, as well as continuous or discrete in the values they take at any given time: Signals that are continuous in time and continuous in value are known as analog signals. Signals that are discrete in time and discrete in value are known as digital signals. Signals that are discrete in time and continuous in value are called discrete-time signals. Switched capacitor systems, for instance, are often used in integrated circuits. The methods developed for analyzing discrete time signals and systems are usually applied to digital and analog signals and systems. Signals that are continuous in time and discrete in value are sometimes seen in the timing analysis of logic circuits or PWM amplifiers, but have little to no use in system analysis. With this categ" https://en.wikipedia.org/wiki/LwIP,"lwIP (lightweight IP) is a widely used open-source TCP/IP stack designed for embedded systems. lwIP was originally developed by Adam Dunkels at the Swedish Institute of Computer Science and is now developed and maintained by a worldwide network of developers. lwIP is used by many manufacturers of embedded systems, including Intel/Altera, Analog Devices, Xilinx, TI, ST and Freescale. lwIP network stack The focus of the lwIP network stack implementation is to reduce resource usage while still having a full-scale TCP stack. This makes lwIP suitable for use in embedded systems with tens of kilobytes of free RAM and room for around 40 kilobytes of code ROM. lwIP protocol implementations Aside from the TCP/IP stack, lwIP has several other important parts, such as a network interface, an operating system emulation layer, buffers and a memory management section. The operating system emulation layer and the network interface allow the network stack to be transplanted into an operating system, as it provides a common interface between lwIP code and the operating system kernel. The network stack of lwIP includes an IP (Internet Protocol) implementation at the Internet layer that can handle packet forwarding over multiple network interfaces. Both IPv4 and IPv6 are supported dual stack since lwIP v2.0.0 . For network maintenance and debugging, lwIP implements ICMP (Internet Control Message Protocol). IGMP (Internet Group Management Protocol) is supported for multicast traffic management. While ICMPv6 (including MLD) is implemented to support the use of IPv6. lwIP includes an implementation of IPv4 ARP (Address Resolution Protocol) and IPv6 Neighbor Discovery Protocol to support Ethernet at the data link layer. lwIP may also be operated on top of a PPP (Point-to-Point Protocol) implementation at the data link layer. At the transport layer lwIP implements TCP (Transmission Control Protocol) with congestion control, RTT estimation and fast recovery/fast retransmit. UDP (U" https://en.wikipedia.org/wiki/Spark%20%28mathematics%29,"In mathematics, more specifically in linear algebra, the spark of a matrix is the smallest integer such that there exists a set of columns in which are linearly dependent. If all the columns are linearly independent, is usually defined to be 1 more than the number of rows. The concept of matrix spark finds applications in error-correction codes, compressive sensing, and matroid theory, and provides a simple criterion for maximal sparsity of solutions to a system of linear equations. 
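The spark, as defined in this entry (the definition continues in the following lines), admits a direct brute-force computation for small matrices. The sketch below is illustrative only, since computing the spark is NP-hard in general, and the example matrix is an assumption rather than the article's own example.

```python
import numpy as np
from itertools import combinations

def spark(A, tol=1e-10):
    """Smallest number of linearly dependent columns of A (brute force)."""
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            # A column subset is linearly dependent iff its rank is below its size.
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    return m + 1    # convention when all columns are linearly independent

A = np.array([[1, 0, 1, 2],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
print(spark(A))     # 3: the first three columns satisfy col1 + col2 - col3 = 0
```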
The spark of a matrix is NP-hard to compute. Definition Formally, the spark of a matrix is defined as follows: where is a nonzero vector and denotes its number of nonzero coefficients ( is also referred to as the size of the support of a vector). Equivalently, the spark of a matrix is the size of its smallest circuit (a subset of column indices such that has a nonzero solution, but every subset of it does not). If all the columns are linearly independent, is usually defined to be (if has m rows). By contrast, the rank of a matrix is the largest number such that some set of columns of is linearly independent. Example Consider the following matrix . The spark of this matrix equals 3 because: There is no set of 1 column of which are linearly dependent. There is no set of 2 columns of which are linearly dependent. But there is a set of 3 columns of which are linearly dependent. The first three columns are linearly dependent because . Properties If , the following simple properties hold for the spark of a matrix : (If the spark equals , then the matrix has full rank.) if and only if the matrix has a zero column. . Criterion for uniqueness of sparse solutions The spark yields a simple criterion for uniqueness of sparse solutions of linear equation systems. Given a linear equation system . If this system has a solution that satisfies , then this solution is the sparsest possible solution. Here denotes the number of nonzero entries of the vector . Lower bo" https://en.wikipedia.org/wiki/Thermal%20design%20power,"The thermal design power (TDP), sometimes called thermal design point, is the maximum amount of heat generated by a computer chip or component (often a CPU, GPU or system on a chip) that the cooling system in a computer is designed to dissipate under any workload. Some sources state that the peak power rating for a microprocessor is usually 1.5 times the TDP rating. Intel has introduced a new metric called scenario design power (SDP) for some Ivy Bridge Y-series processors. Calculation The average CPU power (ACP) is the power consumption of central processing units, especially server processors, under ""average"" daily usage as defined by Advanced Micro Devices (AMD) for use in its line of processors based on the K10 microarchitecture (Opteron 8300 and 2300 series processors). Intel's thermal design power (TDP), used for Pentium and Core 2 processors, measures the energy consumption under high workload; it is numerically somewhat higher than the ""average"" ACP rating of the same processor. According to AMD the ACP rating includes the power consumption when running several benchmarks, including TPC-C, SPECcpu2006, SPECjbb2005 and STREAM Benchmark (memory bandwidth), which AMD said is an appropriate method of power consumption measurement for data centers and server-intensive workload environments. AMD said that the ACP and TDP values of the processors will both be stated and do not replace one another. Barcelona and later server processors have the two power figures. The TDP of a CPU has been underestimated in some cases, leading to certain real applications (typically strenuous, such as video encoding or games) causing the CPU to exceed its specified TDP and resulting in overloading the computer's cooling system. In this case, CPUs either cause a system failure (a ""therm-trip"") or throttle their speed down. 
Most modern processors will cause a therm-trip only upon a catastrophic cooling failure, such as a no longer operational fan or an incorrectly mounted hea" https://en.wikipedia.org/wiki/Convergence%20research,"Convergence research aims to solve complex problems employing transdisciplinarity. While academic disciplines are useful for identifying and conveying coherent bodies of knowledge, some problems require collaboration among disciplines, including both enhanced understanding of scientific phenomena as well as resolving social issues. The two defining characteristics of convergence research include: 1) the nature of the problem, and 2) the collaboration among disciplines. Definition In 2016, convergence research was identified by the National Science Foundation as one of 10 Big Idea's for future investments. As defined by NSF, convergence research has two primary characteristics, namely: ""Research driven by a specific and compelling problem. Convergence research is generally inspired by the need to address a specific challenge or opportunity, whether it arises from deep scientific questions or pressing societal needs. Deep integration across disciplines. As experts from different disciplines pursue common research challenges, their knowledge, theories, methods, data, research communities and languages become increasingly intermingled or integrated. New frameworks, paradigms or even disciplines can form sustained interactions across multiple communities."" Examples of convergence research Biomedicine Advancing healthcare and promoting wellness to the point of providing personalized medicine will increase health and reduce costs for everyone. While recognizing the potential benefits of personalized medicine, critics cite the importance of maintaining investments in public health as highlighted by the approaches to combat the COVID-19 pandemic. Cyber-physical systems The internet of things allows all people, machines, and infrastructure to be monitored, maintained, and operated in real-time, everywhere. Because the United States Government is one of the largest user of ""things"", cybersecurity is critical to any effective system. STEMpathy Jobs that utilize skil" https://en.wikipedia.org/wiki/Directional%20symmetry%20%28time%20series%29,"In statistical analysis of time series and in signal processing, directional symmetry is a statistical measure of a model's performance in predicting the direction of change, positive or negative, of a time series from one time period to the next. Definition Given a time series with values at times and a model that makes predictions for those values , then the directional symmetry (DS) statistic is defined as Interpretation The DS statistic gives the percentage of occurrences in which the sign of the change in value from one time period to the next is the same for both the actual and predicted time series. The DS statistic is a measure of the performance of a model in predicting the direction of value changes. The case would indicate that a model perfectly predicts the direction of change of a time series from one time period to the next. See also Statistical finance Notes and references Drossu, Radu, and Zoran Obradovic. ""INFFC data analysis: lower bounds and testbed design recommendations."" Computational Intelligence for Financial Engineering (CIFEr), 1997., Proceedings of the IEEE/IAFE 1997. IEEE, 1997. Lawrance, A. J., ""Directionality and Reversibility in Time Series"", International Statistical Review, 59 (1991), 67–79. 
Tay, Francis EH, and Lijuan Cao. ""Application of support vector machines in financial time series forecasting."" Omega 29.4 (2001): 309–317. Xiong, Tao, Yukun Bao, and Zhongyi Hu. ""Beyond one-step-ahead forecasting: Evaluation of alternative multi-step-ahead forecasting models for crude oil prices."" Energy Economics 40 (2013): 405–415. Symmetry Signal processing" https://en.wikipedia.org/wiki/TNet,"TNet is a secure top-secret-level intranet system in the White House, notably used to record information about telephone and video calls between the President of the United States and other world leaders. TNet is connected to Joint Worldwide Intelligence Communications System (JWICS), which is used more widely across different offices in the White House. Contained within TNet is an even more secure system known as NSC Intelligence Collaboration Environment (NICE). NSC Intelligence Collaboration Environment The NSC Intelligence Collaboration Environment (NICE) is a computer system operated by the United States National Security Council's Directorate for Intelligence Programs. A subdomain of TNet, it was created to enable staff to produce and store documents, such as presidential findings or decision memos, on top secret codeword activities. Due to the extreme sensitivity of the material held on it, only about 20 percent of NSC staff can reportedly access the system. The documents held on the system are tightly controlled and only specific named staff are able to access files. The system became the subject of controversy during the Trump–Ukraine scandal, when a whistleblower complaint to the Inspector General of the Intelligence Community revealed that NICE had been used to store transcripts of calls between President Donald Trump, and foreign leaders, apparently to restrict access to them. The system was reportedly used for this purpose from 2017 after leaks of conversations with foreign leaders. It was said to have been upgraded in the spring of 2018 to log, who had accessed particular files, as a deterrent against possible leaks. See also Classified website Intellipedia Joint Worldwide Intelligence Communications System (JWICS) NIPRNet RIPR SIPRNet" https://en.wikipedia.org/wiki/Invention%20of%20the%20integrated%20circuit,"The first planar monolithic integrated circuit (IC) chip was demonstrated in 1960. The idea of integrating electronic circuits into a single device was born when the German physicist and engineer Werner Jacobi developed and patented the first known integrated transistor amplifier in 1949 and the British radio engineer Geoffrey Dummer proposed to integrate a variety of standard electronic components in a monolithic semiconductor crystal in 1952. A year later, Harwick Johnson filed a patent for a prototype IC. Between 1953 and 1957, Sidney Darlington and Yasuo Tarui (Electrotechnical Laboratory) proposed similar chip designs where several transistors could share a common active area, but there was no electrical isolation to separate them from each other. These ideas could not be implemented by the industry, until a breakthrough came in late 1958. Three people from three U.S. companies solved three fundamental problems that hindered the production of integrated circuits. Jack Kilby of Texas Instruments patented the principle of integration, created the first prototype ICs and commercialized them. Kilby's invention was a hybrid integrated circuit (hybrid IC), rather than a monolithic integrated circuit (monolithic IC) chip. 
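Returning to the directional symmetry statistic defined a few entries above, whose displayed formula was lost: under the usual convention it is the percentage of time steps on which the actual and predicted changes share a sign, which is a one-liner to compute. Treating zero changes as disagreement, as done here, is a convention of this sketch.

```python
import numpy as np

def directional_symmetry(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    da = np.diff(actual)                   # actual period-to-period changes
    dp = np.diff(predicted)                # predicted changes
    return 100.0 * np.mean(da * dp > 0)    # same sign -> positive product

print(directional_symmetry([1, 2, 1, 3, 4], [1, 3, 2, 2, 5]))   # 75.0
```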
Between late 1958 and early 1959, Kurt Lehovec of Sprague Electric Company developed a way to electrically isolate components on a semiconductor crystal, using p–n junction isolation. The first monolithic IC chip was invented by Robert Noyce of Fairchild Semiconductor. He invented a way to connect the IC components (aluminium metallization) and proposed an improved version of insulation based on the planar process technology developed by Jean Hoerni. On September 27, 1960, using the ideas of Noyce and Hoerni, a group of Jay Last's at Fairchild Semiconductor created the first operational semiconductor IC. Texas Instruments, which held the patent for Kilby's invention, started a patent war, which was settled in 1966 by the agreement on cross-licensin" https://en.wikipedia.org/wiki/Biological%20tests%20of%20necessity%20and%20sufficiency,"Biological tests of necessity and sufficiency refer to experimental methods and techniques that seek to test or provide evidence for specific kinds of causal relationships in biological systems. A necessary cause is one without which it would be impossible for an effect to occur, while a sufficient cause is one whose presence guarantees the occurrence of an effect. These concepts are largely based on but distinct from ideas of necessity and sufficiency in logic. Tests of necessity, among which are methods of lesioning or gene knockout, and tests of sufficiency, among which are methods of isolation or discrete stimulation of factors, have become important in current-day experimental designs, and application of these tests have led to a number of notable discoveries and findings in the biological sciences. Definitions In biological research, experiments or tests are often used to study predicted causal relationships between two phenomena. These causal relationships may be described in terms of the logical concepts of necessity and sufficiency. Consider the statement that a phenomenon x causes a phenomenon y. X would be a necessary cause of y when the occurrence of y implies that x needed to have occurred. However, only the occurrence of the necessary condition x may not always result in y also occurring. In other words, when some factor is necessary to cause an effect, it is impossible to have the effect without the cause. X would instead be a sufficient cause of y when the occurrence of x implies that y must then occur. in other words, when some factor is sufficient to cause an effect, the presence of the cause guarantees the occurrence of the effect. However, a different cause z may also cause y, meaning that y may occur without x occurring. For a concrete example, consider the conditional statement ""if an object is a square, then it has four sides"". It is a necessary condition that an object has four sides if it is true that it is a square; conversely, the obj" https://en.wikipedia.org/wiki/Outline%20of%20probability,"Probability is a measure of the likeliness that an event will occur. Probability is used to quantify an attitude of mind towards some proposition whose truth is not certain. The proposition of interest is usually of the form ""A specific event will occur."" The attitude of mind is of the form ""How certain is it that the event will occur?"" The certainty that is adopted can be described in terms of a numerical measure, and this number, between 0 and 1 (where 0 indicates impossibility and 1 indicates certainty) is called the probability. 
Probability theory is used extensively in statistics, mathematics, science and philosophy to draw conclusions about the likelihood of potential events and the underlying mechanics of complex systems. Introduction Probability and randomness. Basic probability (Related topics: set theory, simple theorems in the algebra of sets) Events Events in probability theory Elementary events, sample spaces, Venn diagrams Mutual exclusivity Elementary probability The axioms of probability Boole's inequality Meaning of probability Probability interpretations Bayesian probability Frequency probability Calculating with probabilities Conditional probability The law of total probability Bayes' theorem Independence Independence (probability theory) Probability theory (Related topics: measure theory) Measure-theoretic probability Sample spaces, σ-algebras and probability measures Probability space Sample space Standard probability space Random element Random compact set Dynkin system Probability axioms Event (probability theory) Complementary event Elementary event ""Almost surely"" Independence Independence (probability theory) The Borel–Cantelli lemmas and Kolmogorov's zero–one law Conditional probability Conditional probability Conditioning (probability) Conditional expectation Conditional probability distribution Regular conditional probability Disintegration theorem Bayes' theorem Rule of succession Condition" https://en.wikipedia.org/wiki/Formulario%20mathematico,"Formulario Mathematico (Latino sine flexione: Formulary for Mathematics) is a book by Giuseppe Peano which expresses fundamental theorems of mathematics in a symbolic language developed by Peano. The author was assisted by Giovanni Vailati, Mario Pieri, Alessandro Padoa, Giovanni Vacca, Vincenzo Vivanti, Gino Fano and Cesare Burali-Forti. The Formulario was first published in 1894. The fifth and last edition was published in 1908. Hubert Kennedy wrote ""the development and use of mathematical logic is the guiding motif of the project"". He also explains the variety of Peano's publication under the title: the five editions of the Formulario [are not] editions in the usual sense of the word. Each is essentially a new elaboration, although much material is repeated. Moreover, the title and language varied: the first three, titled Formulaire de Mathématiques, and the fourth, titled, Formulaire Mathématique, were written in French, while Latino sine flexione, Peano's own invention, was used for the fifth edition, titled Formulario Mathematico. ... Ugo Cassina lists no less than twenty separately published items as being parts of the 'complete' Formulario! Peano believed that students needed only precise statement of their lessons. He wrote: Each professor will be able to adopt this Formulario as a textbook, for it ought to contain all theorems and all methods. His teaching will be reduced to showing how to read the formulas, and to indicating to the students the theorems that he wishes to explain in his course. Such a dismissal of the oral tradition in lectures at universities was the undoing of Peano's own teaching career. Notes" https://en.wikipedia.org/wiki/Signal%20chain,"Signal chain, or signal-processing chain is a term used in signal processing and mixed-signal system design to describe a series of signal-conditioning electronic components that receive input (data acquired from sampling either real-time phenomena or from stored data) sequentially, with the output of one portion of the chain supplying input to the next. 
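As a rough illustration of the signal chain just defined (the output of one stage supplying the input of the next), the following Python sketch composes a few stages in sequence. The stage names and parameter values are invented for illustration and are not taken from the article:

import math  # not strictly needed; kept to show the block is self-contained

def amplify(samples, gain=2.0):
    # First stage: scale every sample by a fixed gain.
    return [gain * s for s in samples]

def low_pass(samples, alpha=0.5):
    # Second stage: simple one-pole smoothing filter.
    out, prev = [], 0.0
    for s in samples:
        prev = alpha * s + (1 - alpha) * prev
        out.append(prev)
    return out

def quantize(samples, step=0.1):
    # Third stage: round each sample to a fixed step size.
    return [round(s / step) * step for s in samples]

chain = [amplify, low_pass, quantize]
signal = [0.0, 0.3, 0.9, 0.4, 0.1]
for stage in chain:          # each stage consumes the previous stage's output
    signal = stage(signal)
print(signal)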
Signal chains are often used in signal processing applications to gather and process data or to apply system controls based on analysis of real-time phenomena. Definition This definition comes from common usage in the electronics industry and can be derived from definitions of its parts: Signal: ""The event, phenomenon, or electrical quantity, that conveys information from one point to another"". Chain: ""1. Any series of items linked together. 2. Pertaining to a routine consisting of segments which are run through the computer in tandem, only one segment being within the computer at any one time and each segment using the output from the previous program as its input"". The concept of a signal chain is familiar to electrical engineers, but the term has many synonyms such as circuit topology. The goal of any signal chain is to process a variety of signals to monitor or control an analog-, digital-, or analog-digital system. See also Audio signal flow Daisy chain (electrical engineering) Feedback" https://en.wikipedia.org/wiki/Phoenix%20network%20coordinates,"Phoenix is a decentralized network coordinate system based on the matrix factorization model. Background Network coordinate (NC) systems are an efficient mechanism for internet distance (round-trip latency) prediction with scalable measurements. For a network with N hosts, by performing O(N) measurements, all N*N distances can be predicted. Use cases: Vuze BitTorrent, application layer multicast, PeerWise overlay, multi-player online gaming. Triangle inequality violations (TIVs) exist widely on the Internet due to current sub-optimal internet routing. Model Most of the prior NC systems use the Euclidean distance model, i.e. embed N hosts into a d-dimensional Euclidean space Rd. Due to the wide existence of TIVs on the internet, the prediction accuracy of such systems is limited. Phoenix uses a matrix factorization (MF) model, which does not have the constraint of TIV. The linear dependence among the rows motivates the factorization of the internet distance matrix, i.e. for a system with N internet nodes, the N*N internet distance matrix D can be factorized into the product of two smaller N*d matrices (d << N). This matrix factorization is essentially a problem of linear dimensionality reduction and Phoenix tries to solve it in a distributed way. Design choices in Phoenix Unlike existing MF-based NC systems such as IDES and DMF, Phoenix introduces a weight to each reference NC and trusts the NCs with higher weight values more than the others. The weight-based mechanism can substantially reduce the impact of the error propagation. For node discovery, Phoenix uses a distributed scheme called peer exchange (PEX), which is also used in the BitTorrent protocol. The use of PEX reduces the load on the tracker, while still ensuring the prediction accuracy under node churn. Similar to DMF, to avoid potential drift of the NCs, regularization is introduced into the NC calculation. NCShield is a decentralized, gossip-based trust an" https://en.wikipedia.org/wiki/Biological%20process,"Biological processes are those processes that are vital for an organism to live, and that shape its capacities for interacting with its environment. Biological processes are made of many chemical reactions or other events that are involved in the persistence and transformation of life forms. Metabolism and homeostasis are examples. Biological processes within an organism can also work as bioindicators. 
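The matrix-factorization model behind Phoenix, described in the Phoenix network coordinates entry above, can be illustrated with a small NumPy sketch. This is not Phoenix's actual decentralized, weighted algorithm; it is a centralized alternating-least-squares stand-in on synthetic data, with a regularization term included in the spirit of the drift-avoidance remark:

import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 5                          # hosts and embedding dimension (d << N)
X_true = rng.uniform(0, 1, (N, d))
Y_true = rng.uniform(0, 1, (N, d))
D = X_true @ Y_true.T                 # synthetic N x N "distance" matrix of rank d

X = rng.uniform(0, 1, (N, d))         # factors to learn: D[i, j] ~ X[i] . Y[j]
Y = rng.uniform(0, 1, (N, d))
lam = 0.1                             # regularization weight (illustrative value)
for _ in range(100):                  # alternating least squares
    X = D @ Y @ np.linalg.inv(Y.T @ Y + lam * np.eye(d))
    Y = D.T @ X @ np.linalg.inv(X.T @ X + lam * np.eye(d))

pred = X @ Y.T                        # predicted distances for all N*N pairs
print("median relative error:", np.median(np.abs(pred - D) / (D + 1e-9)))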
Scientists are able to look at an individual's biological processes to monitor the effects of environmental changes. Regulation of biological processes occurs when any process is modulated in its frequency, rate or extent. Biological processes are regulated by many means; examples include the control of gene expression, protein modification or interaction with a protein or substrate molecule. Homeostasis: regulation of the internal environment to maintain a constant state; for example, sweating to reduce temperature Organization: being structurally composed of one or more cells – the basic units of life Metabolism: transformation of energy by converting chemicals and energy into cellular components (anabolism) and decomposing organic matter (catabolism). Living things require energy to maintain internal organization (homeostasis) and to produce the other phenomena associated with life. Growth: maintenance of a higher rate of anabolism than catabolism. A growing organism increases in size in all of its parts, rather than simply accumulating matter. Response to stimuli: a response can take many forms, from the contraction of a unicellular organism to external chemicals, to complex reactions involving all the senses of multicellular organisms. A response is often expressed by motion; for example, the leaves of a plant turning toward the sun (phototropism), and chemotaxis. Reproduction: the ability to produce new individual organisms, either asexually from a single parent organism or sexually from two parent organisms. Interaction between organisms. the processes" https://en.wikipedia.org/wiki/Optogenetics,"Optogenetics is a biological technique to control the activity of neurons or other cell types with light. This is achieved by expression of light-sensitive ion channels, pumps or enzymes specifically in the target cells. On the level of individual cells, light-activated enzymes and transcription factors allow precise control of biochemical signaling pathways. In systems neuroscience, the ability to control the activity of a genetically defined set of neurons has been used to understand their contribution to decision making, learning, fear memory, mating, addiction, feeding, and locomotion. In a first medical application of optogenetic technology, vision was partially restored in a blind patient. Optogenetic techniques have also been introduced to map the functional connectivity of the brain. By altering the activity of genetically labelled neurons with light and using imaging and electrophysiology techniques to record the activity of other cells, researchers can identify the statistical dependencies between cells and brain regions. In a broader sense, optogenetics also includes methods to record cellular activity with genetically encoded indicators. In 2010, optogenetics was chosen as the ""Method of the Year"" across all fields of science and engineering by the interdisciplinary research journal Nature Methods. At the same time, optogenetics was highlighted in the article on ""Breakthroughs of the Decade"" in the academic research journal Science. History In 1979, Francis Crick suggested that controlling all cells of one type in the brain, while leaving the others more or less unaltered, is a real challenge for neuroscience. Francis Crick speculated that a technology using light might be useful to control neuronal activity with temporal and spatial precision but at the time there was no technique to make neurons responsive to light. 
By early 1990s LC Katz and E Callaway had shown that light could uncage glutamate. Heberle and Büldt in 1994 had already shown fun" https://en.wikipedia.org/wiki/Graduate%20Studies%20in%20Mathematics,"Graduate Studies in Mathematics (GSM) is a series of graduate-level textbooks in mathematics published by the American Mathematical Society (AMS). The books in this series are published in hardcover and e-book formats. List of books 1 The General Topology of Dynamical Systems, Ethan Akin (1993, ) 2 Combinatorial Rigidity, Jack Graver, Brigitte Servatius, Herman Servatius (1993, ) 3 An Introduction to Gröbner Bases, William W. Adams, Philippe Loustaunau (1994, ) 4 The Integrals of Lebesgue, Denjoy, Perron, and Henstock, Russell A. Gordon (1994, ) 5 Algebraic Curves and Riemann Surfaces, Rick Miranda (1995, ) 6 Lectures on Quantum Groups, Jens Carsten Jantzen (1996, ) 7 Algebraic Number Fields, Gerald J. Janusz (1996, 2nd ed., ) 8 Discovering Modern Set Theory. I: The Basics, Winfried Just, Martin Weese (1996, ) 9 An Invitation to Arithmetic Geometry, Dino Lorenzini (1996, ) 10 Representations of Finite and Compact Groups, Barry Simon (1996, ) 11 Enveloping Algebras, Jacques Dixmier (1996, ) 12 Lectures on Elliptic and Parabolic Equations in Hölder Spaces, N. V. Krylov (1996, ) 13 The Ergodic Theory of Discrete Sample Paths, Paul C. Shields (1996, ) 14 Analysis, Elliott H. Lieb, Michael Loss (2001, 2nd ed., ) 15 Fundamentals of the Theory of Operator Algebras. Volume I: Elementary Theory, Richard V. Kadison, John R. Ringrose (1997, ) 16 Fundamentals of the Theory of Operator Algebras. Volume II: Advanced Theory, Richard V. Kadison, John R. Ringrose (1997, ) 17 Topics in Classical Automorphic Forms, Henryk Iwaniec (1997, ) 18 Discovering Modern Set Theory. II: Set-Theoretic Tools for Every Mathematician, Winfried Just, Martin Weese (1997, ) 19 Partial Differential Equations, Lawrence C. Evans (2010, 2nd ed., ) 20 4-Manifolds and Kirby Calculus, Robert E. Gompf, András I. Stipsicz (1999, ) 21 A Course in Operator Theory, John B. Conway (2000, ) 22 Growth of Algebras and Gelfand-Kirillov Dimension, Günter R. Krause, Thomas H. Lenagan (2000, Revised ed., ) 23 Foliation" https://en.wikipedia.org/wiki/Compliant%20bonding,"Compliant bonding is used to connect gold wires to electrical components such as integrated circuit ""chips"". It was invented by Alexander Coucoulas in the 1960s. The bond is formed well below the melting point of the mating gold surfaces and is therefore referred to as a solid-state type bond. The compliant bond is formed by transmitting heat and pressure to the bond region through a relatively thick indentable or compliant medium, generally an aluminum tape (Figure 1). Comparison with other solid state bond methods Solid-state or pressure bonds form permanent bonds between a gold wire and a gold metal surface by bringing their mating surfaces in intimate contact at about 300 °C which is well below their respective melting points of 1064 °C, hence the term solid-state bonds. Two commonly used methods of forming this type of bond are thermocompression bonding and thermosonic bonding. Both of these processes form the bonds with a hard faced bonding tool that makes direct contact to deform the gold wires against the gold mating surfaces (Figure 2). 
Since gold is the only metal that does not form an oxide coating which can interfere with making a reliable metal to metal contact, gold wires are widely used to make these important wire connections in the field of microelectronic packaging. During the compliant bonding cycle the bond pressure is uniquely controlled by the inherent flow properties of the aluminum compliant tape (Figure 3). Therefore, if higher bond pressures are needed to increase the final deformation (flatness) of a compliant bonded gold wire, a higher yielding alloy of aluminum could be employed. The use of a compliant medium also overcomes the thickness variations when attempting to bond a multiple number of conductor wires simultaneously to a gold metalized substrate (Figure 4). It also prevents the leads from being excessively deformed since the compliant member deforms around the leads during the bonding cycle thus eliminating mechanical failur" https://en.wikipedia.org/wiki/In%20situ%20hybridization,"In situ hybridization (ISH) is a type of hybridization that uses a labeled complementary DNA, RNA or modified nucleic acids strand (i.e., probe) to localize a specific DNA or RNA sequence in a portion or section of tissue (in situ) or if the tissue is small enough (e.g., plant seeds, Drosophila embryos), in the entire tissue (whole mount ISH), in cells, and in circulating tumor cells (CTCs). This is distinct from immunohistochemistry, which usually localizes proteins in tissue sections. In situ hybridization is used to reveal the location of specific nucleic acid sequences on chromosomes or in tissues, a crucial step for understanding the organization, regulation, and function of genes. The key techniques currently in use include in situ hybridization to mRNA with oligonucleotide and RNA probes (both radio-labeled and hapten-labeled), analysis with light and electron microscopes, whole mount in situ hybridization, double detection of RNAs and RNA plus protein, and fluorescent in situ hybridization to detect chromosomal sequences. DNA ISH can be used to determine the structure of chromosomes. Fluorescent DNA ISH (FISH) can, for example, be used in medical diagnostics to assess chromosomal integrity. RNA ISH (RNA in situ hybridization) is used to measure and localize RNAs (mRNAs, lncRNAs, and miRNAs) within tissue sections, cells, whole mounts, and circulating tumor cells (CTCs). In situ hybridization was invented by American biologists Mary-Lou Pardue and Joseph G. Gall. Challenges of in-situ hybridization In situ hybridization is a powerful technique for identifying specific mRNA species within individual cells in tissue sections, providing insights into physiological processes and disease pathogenesis. However, in situ hybridization requires that many steps be taken with precise optimization for each tissue examined and for each probe used. In order to preserve the target mRNA within tissues, it is often required that crosslinking fixatives (such as formaldehyde" https://en.wikipedia.org/wiki/RAID%20processing%20unit,"A RAID processing unit (RPU) is an integrated circuit that performs specialized calculations in a RAID host adapter. XOR calculations, for example, are necessary for calculating parity data, and for maintaining data integrity when writing to a disk array that uses a parity drive or data striping. An RPU may perform these calculations more efficiently than the computer's central processing unit (CPU)." 
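The XOR parity calculation that an RPU offloads, as described in the RAID processing unit entry above, can be sketched in a few lines of Python. The helper name and the sample stripes are illustrative only; a real controller works on fixed-size disk blocks:

def xor_blocks(blocks):
    # XOR equally sized blocks byte by byte: this is the parity computation.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"striped ", b"data for", b" a RAID "]    # three equally sized stripes
parity = xor_blocks(data)                         # parity stripe written to the parity drive
rebuilt = xor_blocks([data[0], data[2], parity])  # rebuild a lost stripe from the others + parity
assert rebuilt == data[1]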
https://en.wikipedia.org/wiki/Mathematical%20table,"Mathematical tables are lists of numbers showing the results of a calculation with varying arguments. Trigonometric tables were used in ancient Greece and India for applications to astronomy and celestial navigation, and continued to be widely used until electronic calculators became cheap and plentiful, in order to simplify and drastically speed up computation. Tables of logarithms and trigonometric functions were common in math and science textbooks, and specialized tables were published for numerous applications. History and use The first tables of trigonometric functions known to be made were by Hipparchus (c.190 – c.120 BCE) and Menelaus (c.70–140 CE), but both have been lost. Along with the surviving table of Ptolemy (c. 90 – c.168 CE), they were all tables of chords and not of half-chords, that is, the sine function. The table produced by the Indian mathematician Āryabhaṭa (476–550 CE) is considered the first sine table ever constructed. Āryabhaṭa's table remained the standard sine table of ancient India. There were continuous attempts to improve the accuracy of this table, culminating in the discovery of the power series expansions of the sine and cosine functions by Madhava of Sangamagrama (c.1350 – c.1425), and the tabulation of a sine table by Madhava with values accurate to seven or eight decimal places. Tables of common logarithms were used until the invention of computers and electronic calculators to do rapid multiplications, divisions, and exponentiations, including the extraction of nth roots. Mechanical special-purpose computers known as difference engines were proposed in the 19th century to tabulate polynomial approximations of logarithmic functions – that is, to compute large logarithmic tables. This was motivated mainly by errors in logarithmic tables made by the human computers of the time. Early digital computers were developed during World War II in part to produce specialized mathematical tables for aiming artillery. From 1972 onwards," https://en.wikipedia.org/wiki/Qualitative%20property,"Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted to quantitative properties which have numerical characteristics. Some engineering and scientific properties are qualitative. A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement. The data that all share a qualitative property form a nominal category. A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable. In businesses Some important qualitative properties that concern businesses are: Human factors, 'human work capital' is probably one of the most important issues that deals with qualitative properties. Some common aspects are work, motivation, general participation, etc. Although all of these aspects are not measurable in terms of quantitative criteria, the general overview of them could be summarized as a quantitative property. Environmental issues are in some cases quantitatively measurable, but other properties are qualitative e.g.: environmentally friendly manufacturing, responsibility for the entire life of a product (from the raw-material till scrap), attitudes towards safety, efficiency, and minimum waste production. 
Ethical issues are closely related to environmental and human issues, and may be covered in corporate governance. Child labour and illegal dumping of waste are examples of ethical issues. The way a company deals with its stockholders (the 'acting' of a company) is probably the most obvious qualitative aspect of a business. Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel treated. This indicates that qualitative properties are closely related to emotiona" https://en.wikipedia.org/wiki/Microcosm%20%28experimental%20ecosystem%29,"Microcosms are artificial, simplified ecosystems that are used to simulate and predict the behaviour of natural ecosystems under controlled conditions. Open or closed microcosms provide an experimental area for ecologists to study natural ecological processes. Microcosm studies can be very useful to study the effects of disturbance or to determine the ecological role of key species. A Winogradsky column is an example of a microbial microcosm. See also Closed ecological system Ecologist Howard T. Odum was a pioneer in his use of small closed and open ecosystems in classroom teaching. Biosphere 2 - Controversial project with a 1.27 ha artificial closed ecological system in Oracle, Arizona (USA)." https://en.wikipedia.org/wiki/Autodyne,"The autodyne circuit was an improvement to radio signal amplification using the De Forest Audion vacuum tube amplifier. By allowing the tube to oscillate at a frequency slightly different from the desired signal, the sensitivity over other receivers was greatly improved. The autodyne circuit was invented by Edwin Howard Armstrong of Columbia University, New York, NY. He inserted a tuned circuit in the output circuit of the Audion vacuum tube amplifier. By adjusting the tuning of this tuned circuit, Armstrong was able to dramatically increase the gain of the Audion amplifier. Further increase in tuning resulted in the Audion amplifier reaching self-oscillation. This oscillating receiver circuit meant that the then latest technology continuous wave (CW) transmissions could be demodulated. Previously only spark, interrupted continuous wave (ICW, signals which were produced by a motor chopping or turning the signal on and off at an audio rate), or modulated continuous wave (MCW), could produce intelligible output from a receiver. When the autodyne oscillator was advanced to self-oscillation, continuous wave Morse code dots and dashes would be clearly heard from the headphones as short or long periods of sound of a particular tone, instead of an all but impossible to decode series of thumps. Spark and chopped CW (ICW) were amplitude modulated signals which didn't require an oscillating detector. Such a regenerative circuit is capable of receiving weak signals, if carefully coupled to an antenna. Antenna coupling interacts with tuning, making optimum adjustments difficult. Heterodyne detection Damped wave transmission Early transmitters emitted damped waves, which were radio frequency sine wave bursts of a number of cycles duration, of decreasing amplitude with each cycle. These bursts recurred at an audio frequency rate, producing an amplitude modulated transmission. The damped waves were a result of the available technologies to generate radio frequencies. 
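The autodyne entry above turns on mixing an incoming continuous wave with a local oscillation at a slightly different frequency, which yields an audible difference (beat) tone. The NumPy sketch below illustrates that effect; the frequencies and sample rate are chosen only for illustration:

import numpy as np

fs = 1_000_000                                 # sample rate in Hz (illustrative)
t = np.arange(fs) / fs                         # one second of samples
carrier = np.cos(2 * np.pi * 100_000 * t)      # CW signal at 100 kHz
local = np.cos(2 * np.pi * 101_000 * t)        # detector oscillating 1 kHz away
mixed = carrier * local                        # product contains 1 kHz and 201 kHz terms
spectrum = np.abs(np.fft.rfft(mixed))          # 1 Hz bins for a 1 s record
print("audio-band peak (Hz):", spectrum[:20_000].argmax())   # prints 1000: the audible beat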
See " https://en.wikipedia.org/wiki/UIP%20%28software%29,"The uIP is an open-source implementation of the TCP/IP network protocol stack intended for use with tiny 8- and 16-bit microcontrollers. It was initially developed by Adam Dunkels of the Networked Embedded Systems group at the Swedish Institute of Computer Science, licensed under a BSD style license, and further developed by a wide group of developers. uIP can be very useful in embedded systems because it requires very small amounts of code and RAM. It has been ported to several platforms, including DSP platforms. In October 2008, Cisco, Atmel, and SICS announced a fully compliant IPv6 extension to uIP, called uIPv6. Implementation uIP makes many unusual design choices in order to reduce the resources it requires. uIP's native software interface is designed for small computer systems with no operating system. It can be called in a timed loop, and the call manages all the retries and other network behavior. The hardware driver is called after uIP is called. uIP builds the packet, and then the driver sends it, and optionally receives a response. It is normal for IP protocol stack software to keep many copies of different IP packets, for transmission, reception and to keep copies in case they need to be resent. uIP is economical in its use of memory because it uses only one packet buffer. First, it uses the packet buffer in a half-duplex way, using it in turn for transmission and reception. Also, when uIP needs to retransmit a packet, it calls the application code in a way that requests for the previous data to be reproduced. Another oddity is how uIP manages connections. Most IP implementations have one task per connection, and the task communicates with a task in a distant computer on the other end of the connection. In uIP, no multitasking operating system is assumed. Connections are held in an array. On each call, uIP tries to serve a connection, making a subroutine call to application code that responds to, or sends data. The size of the connection ar" https://en.wikipedia.org/wiki/Summation,"In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted ""+"" is defined. Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article. The summation of an explicit sequence is denoted as a succession of additions. For example, summation of is denoted , and results in 9, that is, . Because addition is associative and commutative, there is no need of parentheses, and the result is the same irrespective of the order of the summands. Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0. Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence. For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as . Otherwise, summation is denoted by using Σ notation, where is an enlarged capital Greek letter sigma. 
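The displayed formulas referred to in the Summation entry above were lost in extraction; in standard sigma notation, the sum of the first 100 (or, in general, n) natural numbers and its well-known closed form would read:

1 + 2 + 3 + \cdots + 100 = \sum_{i=1}^{100} i, \qquad \sum_{i=1}^{n} i = \frac{n(n+1)}{2}.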
For example, the sum of the first natural numbers can be denoted as For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result. For example, Although such formulas do not always exist, many summation formulas have been discovered—with some of the most common and elementary ones being listed in the remainder of this article. Notation Capital-sigma notation Mathematical notation uses a symbol that compactly represents summation of many similar terms: the summation symbol, , an enlarged form of the upright capital Greek l" https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20gamma%20function,"The gamma function is an important special function in mathematics. Its particular values can be expressed in closed form for integer and half-integer arguments, but no simple expressions are known for the values at rational points in general. Other fractional arguments can be approximated through efficient infinite products, infinite series, and recurrence relations. Integers and half-integers For positive integer arguments, the gamma function coincides with the factorial. That is, and hence and so on. For non-positive integers, the gamma function is not defined. For positive half-integers, the function values are given exactly by or equivalently, for non-negative integer values of : where denotes the double factorial. In particular, and by means of the reflection formula, General rational argument In analogy with the half-integer formula, where denotes the th multifactorial of . Numerically, . As tends to infinity, where is the Euler–Mascheroni constant and denotes asymptotic equivalence. It is unknown whether these constants are transcendental in general, but and were shown to be transcendental by G. V. Chudnovsky. has also long been known to be transcendental, and Yuri Nesterenko proved in 1996 that , , and are algebraically independent. The number is related to the lemniscate constant by and it has been conjectured by Gramain that where is the Masser–Gramain constant , although numerical work by Melquiond et al. indicates that this conjecture is false. Borwein and Zucker have found that can be expressed algebraically in terms of , , , , and where is a complete elliptic integral of the first kind. This permits efficiently approximating the gamma function of rational arguments to high precision using quadratically convergent arithmetic–geometric mean iterations. For example: No similar relations are known for or other denominator" https://en.wikipedia.org/wiki/Creation%20and%20evolution%20in%20public%20education,"The status of creation and evolution in public education has been the subject of substantial debate and conflict in legal, political, and religious circles. Globally, there are a wide variety of views on the topic. Most western countries have legislation that mandates only evolutionary biology is to be taught in the appropriate scientific syllabuses. Overview While many Christian denominations do not raise theological objections to the modern evolutionary synthesis as an explanation for the present forms of life on planet Earth, various socially conservative, traditionalist, and fundamentalist religious sects and political groups within Christianity and Islam have objected vehemently to the study and teaching of biological evolution. 
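The integer and half-integer gamma values referred to in the Particular values of the gamma function entry above (the displayed formulas and small tables were dropped from the extract) take the standard form below; these are well-known identities rather than text recovered from the article:

\Gamma(n) = (n-1)!, \qquad \Gamma\!\left(\tfrac{1}{2} + n\right) = \frac{(2n-1)!!}{2^{n}}\sqrt{\pi}, \qquad \Gamma\!\left(\tfrac{1}{2}\right) = \sqrt{\pi}, \quad \Gamma\!\left(\tfrac{3}{2}\right) = \frac{\sqrt{\pi}}{2}, \quad \Gamma\!\left(\tfrac{5}{2}\right) = \frac{3\sqrt{\pi}}{4}.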
Some adherents of these Christian and Islamic religious sects or political groups are passionately opposed to the consensus view of the scientific community. Literal interpretations of religious texts are the greatest cause of conflict with evolutionary and cosmological investigations and conclusions. Internationally, biological evolution is taught in science courses with limited controversy, with the exception of a few areas of the United States and several Muslim-majority countries, primarily Turkey. In the United States, the Supreme Court has ruled the teaching of creationism as science in public schools to be unconstitutional, irrespective of how it may be purveyed in theological or religious instruction. In the United States, intelligent design (ID) has been represented as an alternative explanation to evolution in recent decades, but its ""demonstrably religious, cultural, and legal missions"" have been ruled unconstitutional by a lower court. By country Australia Although creationist views are popular among religious education teachers and creationist teaching materials have been distributed by volunteers in some schools, many Australian scientists take an aggressive stance supporting the right of teachers to teach the theory " https://en.wikipedia.org/wiki/Live%20crown,"The live crown is the top part of a tree, the part that has green leaves (as opposed to the bare trunk, bare branches, and dead leaves). The ratio of the size of a tree's live crown to its total height is used in estimating its health and its level of competition with neighboring trees. Trees Biology terminology Sustainable forest management" https://en.wikipedia.org/wiki/UniPro%20protocol%20stack,"In mobile-telephone technology, the UniPro protocol stack follows the architecture of the classical OSI Reference Model. In UniPro, the OSI Physical Layer is split into two sublayers: Layer 1 (the actual physical layer) and Layer 1.5 (the PHY Adapter layer) which abstracts from differences between alternative Layer 1 technologies. The actual physical layer is a separate specification as the various PHY options are reused in other MIPI Alliance specifications. The UniPro specification itself covers Layers 1.5, 2, 3, 4 and the DME (Device Management Entity). The Application Layer (LA) is out of scope because different uses of UniPro will require different LA protocols. The Physical Layer (L1) is covered in separate MIPI specifications in order to allow the PHY to be reused by other (less generic) protocols if needed. OSI Layers 5 (Session) and 6 (Presentation) are, where applicable, counted as part of the Application Layer. Physical Layer (L1) D-PHY Versions 1.0 and 1.1 of UniPro use MIPI's D-PHY technology for the off-chip Physical Layer. This PHY allows inter-chip communication. Data rates of the D-PHY are variable, but are in the range of 500-1000 Mbit/s (lower speeds are supported, but at decreased power efficiency). The D-PHY was named after the Roman number for 500 (""D""). The D-PHY uses differential signaling to convey PHY symbols over micro-stripline wiring. A second differential signal pair is used to transmit the associated clock signal from the source to the destination. The D-PHY technology thus uses a total of 2 clock wires per direction plus 2 signal wires per lane and per direction. For example, a D-PHY might use 2 wires for the clock and 4 wires (2 lanes) for the data in the forward direction, but 2 wires for the clock and 6 wires (3 lanes) for the data in the reverse direction. 
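The D-PHY wire counts in the UniPro example above follow a simple rule: one differential clock pair per direction plus one differential pair per data lane per direction. The tiny helper below is illustrative and is not part of any MIPI specification:

def dphy_wires(lanes_per_direction):
    # 2 wires for the differential clock + 2 wires per differential data lane.
    return 2 + 2 * lanes_per_direction

print(dphy_wires(2))   # forward example from the text: 2 clock + 4 data = 6 wires
print(dphy_wires(3))   # reverse example from the text: 2 clock + 6 data = 8 wires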
Data traffic in the forward and reverse directions are totally independent at this level of the protocol stack. In UniPro, the D-PHY is used in a mode (called ""8b9b"" encod" https://en.wikipedia.org/wiki/Mariam%20Nabatanzi,"Mariam Nabatanzi Babirye (born ) also known as Maama Uganda or Mother Uganda, is a Ugandan woman known for birthing 44 children. As of April 2023, her eldest children were twenty-eight years old, and the youngest were six years old. She is a single mother, who was abandoned by her husband in 2015. He reportedly feared the responsibility of supporting so many children. Born around 1980, Babirye first gave birth when she was 13 years old, having been forced into marriage the year prior. By the age of 36, she had given birth to a total of 44 children, including three sets of quadruplets, four sets of triplets, and six sets of twins, for a total of fifteen births. The number of multiple births was caused by a rare genetic condition causing hyperovulation as a result of enlarged ovaries. In 2019, when Babirye was aged 40, she underwent a medical procedure to prevent any further pregnancies. She lives in the village of Kasawo, located in the Mukono district of Central Uganda. Life and background In 1993, Babirye was sold into child marriage at the age of twelve to a violent 40-year-old man. A year later, she first became a mother in 1994 with a set of twins, followed by triplets in 1996. She then gave birth to a set of quadruplets 19 months later. She never found the rate at which she was procreating unusual due to her family history; she had been quoted as saying: ""My father gave birth to forty-five children with different women, and these all came in quintuplets, quadruples, twins and triplets."" In Uganda, there are some communities that practice early child marriages, where a young girl is given off to an older man in exchange for a dowry that most frequently consists of cows. Babirye's marriage was an example of this. At the age of twenty-three, she had given birth to twenty-five children, but was advised to continue giving birth, as it would help reduce further fertility. Those affected with Babirye's condition are often advised that abstinence from pregnancy ca" https://en.wikipedia.org/wiki/Tally%20stick,"A tally stick (or simply tally) was an ancient memory aid device used to record and document numbers, quantities and messages. Tally sticks first appear as animal bones carved with notches during the Upper Palaeolithic; a notable example is the Ishango Bone. Historical reference is made by Pliny the Elder (AD 23–79) about the best wood to use for tallies, and by Marco Polo (1254–1324) who mentions the use of the tally in China. Tallies have been used for numerous purposes such as messaging and scheduling, and especially in financial and legal transactions, to the point of being currency. Kinds of tallies Principally, there are two different kinds of tally sticks: the single tally and the split tally. A common form of the same kind of primitive counting device is seen in various kinds of prayer beads. Possible palaeolithic tally sticks A number of anthropological artefacts have been conjectured to be tally sticks: The Lebombo bone, dated between 44,200 and 43,000 years old, is a baboon's fibula with 29 distinct notches, discovered within the Border Cave in the Lebombo Mountains of Eswatini. The so-called Wolf bone (cs) is a prehistoric artefact discovered in 1937 in Czechoslovakia during excavations at Dolní Věstonice, Moravia, led by Karl Absolon. 
Dated to the Aurignacian, approximately 30,000 years ago, the bone is marked with 55 marks which some believe to be tally marks. The head of an ivory Venus figurine was excavated close to the bone. The Ishango bone is a bone tool, dated to the Upper Palaeolithic era, around 18,000 to 20,000 BC. It is a dark brown length of bone. It has a series of possible tally marks carved in three columns running the length of the tool. It was found in 1950 in Ishango (east Belgian Congo). Single tally The single tally stick was an elongated piece of bone, ivory, wood, or stone marked with a system of notches (see: Tally marks). The single tally stick serves predominantly mnemonic purposes. Related to the single tally con" https://en.wikipedia.org/wiki/Sca-1,"Sca-1 stands for ""Stem cell antigen-1"" (official gene symbol: Ly6a). It is an 18-kDa mouse glycosyl phosphatidylinositol-anchored cell surface protein (GPI-AP) of the LY6 gene family. It is a common biological marker used, along with other markers, to identify hematopoietic stem cells (HSCs). Application of Sca-1 Sca-1 has a regenerative role in cardiac repair: Host cells with specific Sca-1+CD31− markers arise upon myocardial infarction, with evidence of expression of Sca-1 protein. Sca-1 plays a role in hematopoietic progenitor/stem cell lineage fate and c-kit expression." https://en.wikipedia.org/wiki/Exceptional%20object,"Many branches of mathematics study objects of a given type and prove a classification theorem. A common theme is that the classification results in a number of series of objects and a finite number of exceptions — often with desirable properties — that do not fit into any series. These are known as exceptional objects. In many cases, these exceptional objects play a further and important role in the subject. Furthermore, the exceptional objects in one branch of mathematics often relate to the exceptional objects in others. A related phenomenon is exceptional isomorphism, when two series are in general different, but agree for some small values. For example, spin groups in low dimensions are isomorphic to other classical Lie groups. Regular polytopes The prototypical examples of exceptional objects arise in the classification of regular polytopes: in two dimensions, there is a series of regular n-gons for n ≥ 3. In every dimension above 2, one can find analogues of the cube, tetrahedron and octahedron. In three dimensions, one finds two more regular polyhedra — the dodecahedron (12-hedron) and the icosahedron (20-hedron) — making five Platonic solids. In four dimensions, a total of six regular polytopes exist, including the 120-cell, the 600-cell and the 24-cell. There are no other regular polytopes, as the only regular polytopes in higher dimensions are of the hypercube, simplex, orthoplex series. In all dimensions combined, there are therefore three series and five exceptional polytopes. Moreover, the pattern is similar if non-convex polytopes are included: in two dimensions, there is a regular star polygon for every rational number . In three dimensions, there are four Kepler–Poinsot polyhedra, and in four dimensions, ten Schläfli–Hess polychora; in higher dimensions, there are no non-convex regular figures. 
These can be generalized to tessellations of other spaces, especially uniform tessellations, notably tilings of Euclidean space (honeycombs), which hav" https://en.wikipedia.org/wiki/Fractional%20lambda%20switching,"Fractional lambda switching (FλS) leverages on time-driven switching (TDS) to realize sub-lambda switching in highly scalable dynamic optical networking, which requires minimum (possibly optical) buffers. Fractional lambda switching implies switching fractions of optical channels as opposed to whole lambda switching where whole optical channels are the switching unit. In this context, TDS has the same general objectives as optical burst switching and optical packet switching: realizing all-optical networks with high wavelength utilization. TDS operation is based on time frames (TFs) that can be viewed as virtual containers for multiple IP packets that are switched at every TDS switch based on and coordinated by the UTC (coordinated universal time) signal implementing pipeline forwarding. In the context of optical networks, synchronous virtual pipes SVPs typical of pipeline forwarding are called fractional lambda pipes (FλPs). In FλS, likewise in TDS, all packets in the same time frame are switched in the same way. Consequently, header processing is not required, which results in low complexity (hence high scalability) and enables optical implementation. The TF is the basic SVP capacity allocation unit; hence, the allocation granularity depends on the number of TFs per time cycle. For example, with a 10 Gbit/s optical channel and 1000 TFs in each time cycle, the minimum FλP capacity (obtained by allocating one TF in every time cycle) is 10 Mbit/s. Scheduling through a switching fabric is based on a pre-defined schedule, which enables the implementation of a simple controller. Moreover, low-complexity switching fabric architectures, such as Banyan, can be deployed notwithstanding their blocking features, thus further enhancing scalability. In fact, blocking can be avoided during schedule computation by avoiding conflicting input/output connections during the same TF. Several results show that (especially if multiple wavelength division multiplexing channels are dep" https://en.wikipedia.org/wiki/Wireless%20data%20center,"A Wireless Data center is a type of data center that uses wireless communication technology instead of cables to store, process and retrieve data for enterprises. The development of Wireless Data centers arose as a solution to growing cabling complexity and hotspots. The wireless technology was introduced by Shin et al., who replaced all cables with 60 GHz wireless connections at the Cayley data center. Motivation Most DCs deployed today can be classified as wired DCs because they use copper and optical fiber cables to handle intra- and inter-rack connections in the network. This approach has two problems, cable complexity and hotspots. Hotspots, also known as hot servers, are servers that generate high traffic compared to others in the network and they might become bottlenecks of the system. To address these problems several researchers propose the use of wireless communication into data center networks, to either augment existing wired data centers, or to realize a pure wireless data center Although cable complexity at first seems like an esthetical problem, it can affect a DC in different ways. First, a significant manual effort is necessary to install and manage these cables. Apart from that, cables can additionally affect data center cooling. 
Finally, cables take up space, which could be used to add more servers. The use of wireless technologies could reduce the cable complexity and avoid the problems cited before, moreover, it would allow for automatic configurable link establishment between nodes with minimum effort. Wireless links can be rearranged dynamically which makes it possible to perform adaptive topology adjustment. This means that the network can be rearranged to fulfil the real-time traffic demands of hotspots, thus solving the hot servers problem. Additionally, wireless connections do not rely on switches and therefore are free of problems such as single-point of failure and limited bisection bandwidth. Requirements The Data Center Network (" https://en.wikipedia.org/wiki/List%20of%20mathematics-based%20methods,"This is a list of mathematics-based methods. Adams' method (differential equations) Akra–Bazzi method (asymptotic analysis) Bisection method (root finding) Brent's method (root finding) Condorcet method (voting systems) Coombs' method (voting systems) Copeland's method (voting systems) Crank–Nicolson method (numerical analysis) D'Hondt method (voting systems) D21 – Janeček method (voting system) Discrete element method (numerical analysis) Domain decomposition method (numerical analysis) Epidemiological methods Euler's forward method Explicit and implicit methods (numerical analysis) Finite difference method (numerical analysis) Finite element method (numerical analysis) Finite volume method (numerical analysis) Highest averages method (voting systems) Method of exhaustion Method of infinite descent (number theory) Information bottleneck method Inverse chain rule method (calculus) Inverse transform sampling method (probability) Iterative method (numerical analysis) Jacobi method (linear algebra) Largest remainder method (voting systems) Level-set method Linear combination of atomic orbitals molecular orbital method (molecular orbitals) Method of characteristics Least squares method (optimization, statistics) Maximum likelihood method (statistics) Method of complements (arithmetic) Method of moving frames (differential geometry) Method of successive substitution (number theory) Monte Carlo method (computational physics, simulation) Newton's method (numerical analysis) Pemdas method (order of operation) Perturbation methods (functional analysis, quantum theory) Probabilistic method (combinatorics) Romberg's method (numerical analysis) Runge–Kutta method (numerical analysis) Sainte-Laguë method (voting systems) Schulze method (voting systems) Sequential Monte Carlo method Simplex method Spectral method (numerical analysis) Variational methods (mathematical analysis, differential equations) Welch's method See also Automatic basis function construction List of graphi" https://en.wikipedia.org/wiki/Examples%20of%20vector%20spaces,"This page lists some examples of vector spaces. See vector space for the definitions of terms used on this page. See also: dimension, basis. Notation. Let F denote an arbitrary field such as the real numbers R or the complex numbers C. Trivial or zero vector space The simplest example of a vector space is the trivial one: {0}, which contains only the zero vector (see the third axiom in the Vector space article). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so that {0} is the 0-dimensional vector space over F. Every vector space over F contains a subspace isomorphic to this one. 
The zero vector space is conceptually different from the null space of a linear operator L, which is the kernel of L. (Incidentally, the null space of L is a zero space if and only if L is injective.) Field The next simplest example is the field F itself. Vector addition is just field addition, and scalar multiplication is just field multiplication. This property can be used to prove that a field is a vector space. Any non-zero element of F serves as a basis so F is a 1-dimensional vector space over itself. The field is a rather special vector space; in fact it is the simplest example of a commutative algebra over F. Also, F has just two subspaces: {0} and F itself. Coordinate space A basic example of a vector space is the following. For any positive integer n, the set of all n-tuples of elements of F forms an n-dimensional vector space over F sometimes called coordinate space and denoted Fn. An element of Fn is written where each xi is an element of F. The operations on Fn are defined by Commonly, F is the field of real numbers, in which case we obtain real coordinate space Rn. The field of complex numbers gives complex coordinate space Cn. The a + bi form of a complex number shows that C itself is a two-dimensional real vector space with coordinates (a,b). Similarly, the quaternions and the octonions are respectively" https://en.wikipedia.org/wiki/Sneakernet,"Sneakernet, also called sneaker net, is an informal term for the transfer of electronic information by physically moving media such as magnetic tape, floppy disks, optical discs, USB flash drives or external hard drives between computers, rather than transmitting it over a computer network. The term, a tongue-in-cheek play on net(work) as in Internet or Ethernet, refers to walking in sneakers as the transport mechanism. Alternative terms may be floppy net, train net, or pigeon net. Summary and background Sneakernets are in use throughout the computer universe. A sneakernet may be used when computer networks are prohibitively expensive for the owner to maintain; in high-security environments where manual inspection (for re-classification of information) is necessary; where information needs to be shared between networks with different levels of security clearance; when data transfer is impractical due to bandwidth limitations; when a particular system is simply incompatible with the local network, unable to be connected, or when two systems are not on the same network at the same time. Because sneakernets take advantage of physical media, security measures used for the transfer of sensitive information are respectively physical. This form of data transfer is also used for peer-to-peer (or friend-to-friend) file sharing and has grown in popularity in metropolitan areas and college communities. The ease of this system has been facilitated by the availability of USB external hard drives, USB flash drives and portable music players. The United States Postal Service offers a Media Mail service for compact discs, among other items. This provides a viable mode of transport for long distance sneakernet use. In fact, when mailing media with sufficiently high data density such as high capacity hard drives, the throughput (data transferred per unit of time) as well as the cost per unit of data transferred may compete favorably with networked methods of data transfer. 
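The componentwise operations on Fn referred to in the Examples of vector spaces entry above (the displayed formulas were dropped from the extract) are the standard ones, written here in LaTeX notation:

(x_1, \ldots, x_n) + (y_1, \ldots, y_n) = (x_1 + y_1, \ldots, x_n + y_n), \qquad c\,(x_1, \ldots, x_n) = (c x_1, \ldots, c x_n), \quad c \in F.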
Usage" https://en.wikipedia.org/wiki/Suctorial,"Suctorial pertains to the adaptation for sucking or suction, as possessed by marine parasites such as the Cookiecutter shark, specifically in a specialised lip organ enabling attachment to the host. Suctorial organs of a different form are possessed by the Solifugae arachnids, enabling the climbing of smooth, vertical surfaces. Another variation on the suctorial organ can be found as part of the glossa proboscis of Masarinae (pollen wasps), enabling nectar feeding from the deep and narrow corolla of flowers." https://en.wikipedia.org/wiki/Christoffel%20symbols,"In mathematics and physics, the Christoffel symbols are an array of numbers describing a metric connection. The metric connection is a specialization of the affine connection to surfaces or other manifolds endowed with a metric, allowing distances to be measured on that surface. In differential geometry, an affine connection can be defined without reference to a metric, and many additional concepts follow: parallel transport, covariant derivatives, geodesics, etc. also do not require the concept of a metric. However, when a metric is available, these concepts can be directly tied to the ""shape"" of the manifold itself; that shape is determined by how the tangent space is attached to the cotangent space by the metric tensor. Abstractly, one would say that the manifold has an associated (orthonormal) frame bundle, with each ""frame"" being a possible choice of a coordinate frame. An invariant metric implies that the structure group of the frame bundle is the orthogonal group . As a result, such a manifold is necessarily a (pseudo-)Riemannian manifold. The Christoffel symbols provide a concrete representation of the connection of (pseudo-)Riemannian geometry in terms of coordinates on the manifold. Additional concepts, such as parallel transport, geodesics, etc. can then be expressed in terms of Christoffel symbols. In general, there are an infinite number of metric connections for a given metric tensor; however, there is a unique connection that is free of torsion, the Levi-Civita connection. It is common in physics and general relativity to work almost exclusively with the Levi-Civita connection, by working in coordinate frames (called holonomic coordinates) where the torsion vanishes. For example, in Euclidean spaces, the Christoffel symbols describe how the local coordinate bases change from point to point. At each point of the underlying -dimensional manifold, for any local coordinate system around that point, the Christoffel symbols are denoted for . Each entry" https://en.wikipedia.org/wiki/Register%E2%80%93memory%20architecture,"In computer engineering, a register–memory architecture is an instruction set architecture that allows operations to be performed on (or from) memory, as well as registers. If the architecture allows all operands to be in memory or in registers, or in combinations, it is called a ""register plus memory"" architecture. In a register–memory approach one of the operands for operations such as the ADD operation may be in memory, while the other is in a register. This differs from a load–store architecture (used by RISC designs such as MIPS) in which both operands for an ADD operation must be in registers before the ADD. An example of register-memory architecture is Intel x86. 
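The contrast just described between a register–memory ISA and a load–store ISA can be sketched with a hypothetical mini-machine in Python. The "instructions" in the comments are illustrative pseudo-assembly, not real x86 or MIPS encodings:

memory = {100: 7}                      # a word in memory at address 100

# Register–memory style (e.g. the x86 family): ADD may take a memory operand.
regs = {"r1": 5}
regs["r1"] += memory[100]              # ADD r1, [100]     -> r1 = 12
print(regs["r1"])

# Load–store style (e.g. RISC designs such as MIPS): load first, then add registers.
regs = {"r1": 5, "r2": 0}
regs["r2"] = memory[100]               # LOAD r2, [100]
regs["r1"] = regs["r1"] + regs["r2"]   # ADD  r1, r1, r2   -> r1 = 12
print(regs["r1"])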
Examples of register plus memory architecture are: IBM System/360 and its successors, which support memory-to-memory fixed-point decimal arithmetic operations, but not binary integer or floating-point arithmetic operations; VAX, which supports memory or register source and destination operands for binary integer and floating-point arithmetic; the Motorola 68000 series, which supports integer arithmetic with a memory source or destination, but not with a memory source and destination. See also Load–store architecture Addressing mode" https://en.wikipedia.org/wiki/Radical%20%28chemistry%29,"In chemistry, a radical, also known as a free radical, is an atom, molecule, or ion that has at least one unpaired valence electron. With some exceptions, these unpaired electrons make radicals highly chemically reactive. Many radicals spontaneously dimerize. Most organic radicals have short lifetimes. A notable example of a radical is the hydroxyl radical (HO·), a molecule that has one unpaired electron on the oxygen atom. Two other examples are triplet oxygen and triplet carbene (꞉) which have two unpaired electrons. Radicals may be generated in a number of ways, but typical methods involve redox reactions, Ionizing radiation, heat, electrical discharges, and electrolysis are known to produce radicals. Radicals are intermediates in many chemical reactions, more so than is apparent from the balanced equations. Radicals are important in combustion, atmospheric chemistry, polymerization, plasma chemistry, biochemistry, and many other chemical processes. A majority of natural products are generated by radical-generating enzymes. In living organisms, the radicals superoxide and nitric oxide and their reaction products regulate many processes, such as control of vascular tone and thus blood pressure. They also play a key role in the intermediary metabolism of various biological compounds. Such radicals can even be messengers in a process dubbed redox signaling. A radical may be trapped within a solvent cage or be otherwise bound. Formation Radicals are either (1) formed from spin-paired molecules or (2) from other radicals. Radicals are formed from spin-paired molecules through homolysis of weak bonds or electron transfer, also known as reduction. Radicals are formed from other radicals through substitution, addition, and elimination reactions. Radical formation from spin-paired molecules Homolysis Homolysis makes two new radicals from a spin-paired molecule by breaking a covalent bond, leaving each of the fragments with one of the electrons in the bond. Bec" https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20statistics,"In quantum statistics, Bose–Einstein statistics (B–E statistics) describes one of two possible ways in which a collection of non-interacting identical particles may occupy a set of available discrete energy states at thermodynamic equilibrium. The aggregation of particles in the same state, which is a characteristic of particles obeying Bose–Einstein statistics, accounts for the cohesive streaming of laser light and the frictionless creeping of superfluid helium. The theory of this behaviour was developed (1924–25) by Satyendra Nath Bose, who recognized that a collection of identical and indistinguishable particles can be distributed in this way. The idea was later adopted and extended by Albert Einstein in collaboration with Bose. Bose–Einstein statistics apply only to particles that do not follow the Pauli exclusion principle restrictions. 
Particles that follow Bose-Einstein statistics are called bosons, which have integer values of spin. In contrast, particles that follow Fermi-Dirac statistics are called fermions and have half-integer spins. Bose–Einstein distribution At low temperatures, bosons behave differently from fermions (which obey the Fermi–Dirac statistics) in a way that an unlimited number of them can ""condense"" into the same energy state. This apparently unusual property also gives rise to the special state of matter – the Bose–Einstein condensate. Fermi–Dirac and Bose–Einstein statistics apply when quantum effects are important and the particles are ""indistinguishable"". Quantum effects appear if the concentration of particles satisfies where is the number of particles, is the volume, and is the quantum concentration, for which the interparticle distance is equal to the thermal de Broglie wavelength, so that the wavefunctions of the particles are barely overlapping. Fermi–Dirac statistics applies to fermions (particles that obey the Pauli exclusion principle), and Bose–Einstein statistics applies to bosons. As the quantum concentration depends" https://en.wikipedia.org/wiki/Near%E2%80%93far%20problem,"The near–far problem or hearability problem is the effect of a strong signal from a near signal source in making it hard for a receiver to hear a weaker signal from a further source due to adjacent-channel interference, co-channel interference, distortion, capture effect, dynamic range limitation, or the like. Such a situation is common in wireless communication systems, in particular CDMA. In some signal jamming techniques, the near–far problem is exploited to disrupt (""jam"") communications. Analogies Consider a receiver and two transmitters, one close to the receiver, the other far away. If both transmitters transmit simultaneously and at equal powers, then due to the inverse square law the receiver will receive more power from the nearer transmitter. Since one transmission's signal is the other's noise, the signal-to-noise ratio (SNR) for the further transmitter is much lower. This makes the farther transmitter more difficult, if not impossible, to understand. In short, the near–far problem is one of detecting or filtering out a weaker signal amongst stronger signals. To place this problem in more common terms, imagine you are talking to someone 6 meters away. If the two of you are in a quiet, empty room then a conversation is quite easy to hold at normal voice levels. In a loud, crowded bar, it would be impossible to hear the same voice level, and the only solution (for that distance) is for both you and your friend to speak louder. Of course, this increases the overall noise level in the bar, and every other patron has to talk louder too (this is equivalent to power control runaway). Eventually, everyone has to shout to make themselves heard by a person standing right beside them, and it is impossible to communicate with anyone more than half a meter away. In general, however, a human is very capable of filtering out loud sounds; similar techniques can be deployed in signal processing where suitable criteria for distinguishing between signals can be establis" https://en.wikipedia.org/wiki/Software%20bot,"A software bot is a type of software agent in the service of software project management and software engineering. A software bot has an identity and potentially personified aspects in order to serve their stakeholders. 
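The near–far entry above reasons purely from the inverse-square law; the following small Python sketch puts numbers on it (the transmit power and distances are made-up illustrative values).

import math

# Received power falls off as 1/d^2 (free-space inverse-square law).
# Two transmitters at equal power, one near and one far from the receiver:
# the near signal acts as noise when trying to decode the far one.
def received_power(tx_power_w, distance_m):
    return tx_power_w / distance_m ** 2     # proportionality constant dropped

p_near = received_power(1.0, 10.0)          # transmitter 10 m away
p_far = received_power(1.0, 1000.0)         # transmitter 1 km away

snr_far = p_far / p_near                    # far signal vs. near "interference"
print(f"SNR of far transmitter: {10 * math.log10(snr_far):.1f} dB")   # -40.0 dB

A −40 dB ratio means the far transmitter arrives ten thousand times weaker than the near one, which is why systems such as CDMA rely on tight power control.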
Software bots often compose software services and provide an alternative user interface, which is sometimes, but not necessarily conversational. Software bots are typically used to execute tasks, suggest actions, engage in dialogue, and promote social and cultural aspects of a software project. The term bot is derived from robot. However, robots act in the physical world and software bots act only in digital spaces. Some software bots are designed and behave as chatbots, but not all chatbots are software bots. Erlenhov et al. discuss the past and future of software bots and show that software bots have been adopted for many years. Usage Software bots are used to support development activities, such as communication among software developers and automation of repetitive tasks. Software bots have been adopted by several communities related to software development, such as open-source communities on GitHub and Stack Overflow. GitHub bots have user accounts and can open, close, or comment on pull requests and issues. GitHub bots have been used to assign reviewers, ask contributors to sign the Contributor License Agreement, report continuous integration failures, review code and pull requests, welcome newcomers, run automated tests, merge pull requests, fix bugs and vulnerabilities, etc. The Slack tool includes an API for developing software bots. There are slack bots for keeping track of todo lists, coordinating standup meetings, and managing support tickets. The Chatbot company products further simplify the process of creating a custom Slack bot. On Wikipedia, Wikipedia bots automate a variety of tasks, such as creating stub articles, consistently updating the format of multiple articles, and so on. Bots like ClueBot NG are capable of recogniz" https://en.wikipedia.org/wiki/Overlap%E2%80%93save%20method,"In signal processing, overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal and a finite impulse response (FIR) filter : where for m outside the region . This article uses common abstract notations, such as or in which it is understood that the functions should be thought of in their totality, rather than at specific instants (see Convolution#Notation). The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. Consider a segment that begins at n = kL + M, for any integer k, and define: Then, for , and equivalently , we can write: With the substitution , the task is reduced to computing for . These steps are illustrated in the first 3 traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1  ≤   ≤  L. If we periodically extend xk[n] with period N  ≥  L + M − 1, according to:the convolutions    and    are equivalent in the region . It is therefore sufficient to compute the N-point circular (or cyclic) convolution of with   in the region [1, N].  The subregion [M + 1, L + M] is appended to the output stream, and the other values are discarded.  The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:where:DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and is customarily chosen such that is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. 
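The circular-convolution relation used by overlap–save, in the notation of the entry above, is y_k[n] = IDFT_N( DFT_N(x_k[n]) · DFT_N(h[n]) ). The pseudocode quoted in the next entry also breaks off mid-loop, so here is a self-contained NumPy sketch of the same procedure (the block length and test data are arbitrary example choices, not part of the original text):

import numpy as np

def overlap_save(x, h, N=None):
    """Linear convolution of a long signal x with an FIR filter h via overlap-save."""
    M = len(h)
    overlap = M - 1
    if N is None:
        N = 8 * overlap                     # FFT block size (a power of two is typical)
    step_size = N - overlap
    H = np.fft.fft(h, N)                    # filter spectrum, zero-padded to N
    # Prepend M-1 zeros so the first block already contains its "saved" samples.
    x_padded = np.concatenate([np.zeros(overlap), x])
    y = []
    position = 0
    while position + N <= len(x_padded):
        block = x_padded[position:position + N]
        yt = np.fft.ifft(np.fft.fft(block, N) * H).real   # N-point circular convolution
        y.append(yt[overlap:])              # discard the first M-1 (wrapped) outputs
        position += step_size
    # (For brevity this sketch drops the final partial block; pad x to cover the tail if needed.)
    return np.concatenate(y)

# Quick check against direct convolution on random example data:
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(17)
y = overlap_save(x, h)
print(np.allclose(y, np.convolve(x, h)[:len(y)]))   # True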
The leading and trailing edge-effects of circular convolution are overlapped and added, and subsequently discarded. Pseudocode (Overlap-save algorithm for linear convolution) h = FIR_impulse_response M = length(h) overlap = M − 1 N = 8 × overlap (see next section for a better choice) step_size = N − overlap H = DFT(h, N) position = 0 while pos" https://en.wikipedia.org/wiki/P%E2%80%93n%20junction%20isolation,"p–n junction isolation is a method used to electrically isolate electronic components, such as transistors, on an integrated circuit (IC) by surrounding the components with reverse biased p–n junctions. Introduction By surrounding a transistor, resistor, capacitor or other component on an IC with semiconductor material which is doped using an opposite species of the substrate dopant, and connecting this surrounding material to a voltage which reverse-biases the p–n junction that forms, it is possible to create a region which forms an electrically isolated ""well"" around the component. Operation Assume that the semiconductor wafer is p-type material. Also assume a ring of n-type material is placed around a transistor, and placed beneath the transistor. If the p-type material within the n-type ring is now connected to the negative terminal of the power supply and the n-type ring is connected to the positive terminal, the 'holes' in the p-type region are pulled away from the p–n junction, causing the width of the nonconducting depletion region to increase. Similarly, because the n-type region is connected to the positive terminal, the electrons will also be pulled away from the junction. This effectively increases the potential barrier and greatly increases the electrical resistance against the flow of charge carriers. For this reason there will be no (or minimal) electric current across the junction. At the middle of the junction of the p–n material, a depletion region is created to stand-off the reverse voltage. The width of the depletion region grows larger with higher voltage. The electric field grows as the reverse voltage increases. When the electric field increases beyond a critical level, the junction breaks down and current begins to flow by avalanche breakdown. Therefore, care must be taken that circuit voltages do not exceed the breakdown voltage or electrical isolation ceases. History In an article entitled ""Microelectronics"", published in Scientifi" https://en.wikipedia.org/wiki/Fractal%20expressionism,"Fractal expressionism is used to distinguish fractal art generated directly by artists from fractal art generated using mathematics and/or computers. Fractals are patterns that repeat at increasingly fine scales and are prevalent in natural scenery (examples include clouds, rivers, and mountains). Fractal expressionism implies a direct expression of nature's patterns in an art work. Jackson Pollock's poured paintings The initial studies of fractal expressionism focused on the poured paintings by Jackson Pollock (1912-1956), whose work has traditionally been associated with the abstract expressionist movement. Pollock's patterns had previously been referred to as “natural” and “organic”, inviting speculation by John Briggs in 1992 that Pollock's work featured fractals. In 1997, Taylor built a pendulum device called the Pollockizer which painted fractal patterns bearing a similarity to Pollock's work. Computer analysis of Pollock's work published by Taylor et al. 
in a 1999 Nature article found that Pollock's painted patterns have characteristics that match those displayed by nature's fractals. This analysis supported clues that Pollock's patterns are fractal and reflect ""the fingerprint of nature"". Taylor noted several similarities between Pollock's painting style and the processes used by nature to construct its landscapes. For instance, he cites Pollock's propensity to revisit paintings that he had not adjusted in several weeks as being comparable to cyclic processes in nature, such as the seasons or the tides. Furthermore, Taylor observed several visual similarities between the patterns produced by nature and those produced by Pollock as he painted. He points out that Pollock abandoned the use of a traditional frame for his paintings, preferring instead to roll out his canvas on the floor; this, Taylor asserts, is more compatible with how nature works than traditional painting techniques because the patterns in nature's scenery are not artificially bounded. " https://en.wikipedia.org/wiki/List%20of%20mathematical%20artists,"[[File:San Romano Battle (Paolo Uccello, London) 01.jpg|thumb|350px|Broken lances lying along perspective lines in Paolo Uccello's The Battle of San Romano, 1438]] This is a list of artists who actively explored mathematics in their artworks. Art forms practised by these artists include painting, sculpture, architecture, textiles and origami. Some artists such as Piero della Francesca and Luca Pacioli went so far as to write books on mathematics in art. Della Francesca wrote books on solid geometry and the emerging field of perspective, including De Prospectiva Pingendi (On Perspective for Painting), Trattato d’Abaco (Abacus Treatise), and De corporibus regularibus (Regular Solids),Piero della Francesca, Trattato d'Abaco, ed. G. Arrighi, Pisa (1970). while Pacioli wrote De divina proportione (On Divine Proportion)'', with illustrations by Leonardo da Vinci, at the end of the fifteenth century. Merely making accepted use of some aspect of mathematics such as perspective does not qualify an artist for admission to this list. The term ""fine art"" is used conventionally to cover the output of artists who produce a combination of paintings, drawings and sculptures. List" https://en.wikipedia.org/wiki/Time%20reversal%20signal%20processing,"Time reversal signal processing is a signal processing technique that has three main uses: creating an optimal carrier signal for communication, reconstructing a source event, and focusing high-energy waves to a point in space. A Time Reversal Mirror (TRM) is a device that can focus waves using the time reversal method. TRMs are also known as time reversal mirror arrays since they are usually arrays of transducers. TRM are well-known and have been used for decades in the optical domain. They are also used in the ultrasonic domain. Overview If the source is passive, i.e. some type of isolated reflector, an iterative technique can be used to focus energy on it. The TRM transmits a plane wave which travels toward the target and is reflected off it. The reflected wave returns to the TRM, where it looks as if the target has emitted a (weak) signal. The TRM reverses and retransmits the signal as usual, and a more focused wave travels toward the target. As the process is repeated, the waves become more and more focused on the target. Yet another variation is to use a single transducer and an ergodic cavity. 
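The computer analysis of Pollock's poured paintings described above rests on estimating a fractal dimension, typically by box counting. The excerpt does not give the algorithm, so the following is a generic box-counting sketch on a synthetic binary image (not Taylor's code or data):

import numpy as np

def box_counting_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a binary 2-D array."""
    counts = []
    for s in box_sizes:
        h, w = image.shape
        occupied = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if image[i:i + s, j:j + s].any():   # box contains a "painted" pixel
                    occupied += 1
        counts.append(occupied)
    # Dimension = negative slope of log(count) versus log(box size).
    slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
    return -slope

# Synthetic example: random scattered "drips" on a 256x256 canvas.
rng = np.random.default_rng(1)
canvas = rng.random((256, 256)) < 0.05
print(round(box_counting_dimension(canvas), 2))

A pattern is judged fractal-like when log(count) falls roughly linearly against log(box size) over a wide range of scales; the negative slope of that line is the estimated dimension.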
Intuitively, an ergodic cavity is one that will allow a wave originating at any point to reach any other point. An example of an ergodic cavity is an irregularly shaped swimming pool: if someone dives in, eventually the entire surface will be rippling with no clear pattern. If the propagation medium is lossless and the boundaries are perfect reflectors, a wave starting at any point will reach all other points an infinite number of times. This property can be exploited by using a single transducer and recording for a long time to get as many reflections as possible. Theory The time reversal technique is based upon a feature of the wave equation known as reciprocity: given a solution to the wave equation, then the time reversal (using a negative time) of that solution is also a solution. This occurs because the standard wave equation only contains even order " https://en.wikipedia.org/wiki/Lego%20Mindstorms,"Lego Mindstorms (sometimes stylized as LEGO MINDSTORMS) is a discontinued hardware and software structure which develops programmable robots based on Lego bricks. Each version included a programmable microcontroller (or intelligent brick), a set of modular sensors and motors, and parts from the Lego Technic line to create mechanical systems. The system is controlled by the intelligent brick, which acts as the brain of the mechanical system. While originally conceptualized and launched as a tool to support educational constructivism, Mindstorms has become the first home robotics kit available to a wide audience. It has developed a community of adult hobbyists and hackers as well as students and general Lego enthusiasts following the product's launch in 1998. In October 2022, The Lego Group announced that the Lego Mindstorms brand would be discontinued by the end of the year. Pre-Mindstorms Background In 1985, Seymour Papert, Mitchel Resnick and Stephen Ocko created a company called Microworlds with the intent of developing a construction kit that could be animated by computers for educational purposes. Papert had previously created the Logo programming language as a tool to ""support the development of new ways of thinking and learning"", and employed ""Turtle"" robots to physically act out the programs in the real world. As the types of programs created were limited by the shape of the Turtle, the idea came up to make a construction kit that could use Logo commands to animate a creation of the learner's own design. Similar to the ""floor turtle"" robots used to demonstrate Logo commands in the real world, a construction system that ran Logo commands would also demonstrate them in the real world, but allowing the child to construct their own creations benefitted the learning experience by putting them in control In considering which construction system to partner with, they wanted a ""low floor high ceiling"" approach, something that was easy to pick up but very powerfu" https://en.wikipedia.org/wiki/Stream%20processing,"In computer science, stream processing (also known as event stream processing, data stream processing, or distributed stream processing) is a programming paradigm which views streams, or sequences of events in time, as the central input and output objects of computation. Stream processing encompasses dataflow programming, reactive programming, and distributed data processing. Stream processing systems aim to expose parallel processing for data streams and rely on streaming algorithms for efficient implementation. 
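The reciprocity argument in the time-reversal entry above can be demonstrated numerically: record what a short pulse looks like after passing through a multipath medium, time-reverse the recording, and send it back through the same medium; the energy refocuses into a sharp peak. The impulse response below is made up purely for illustration.

import numpy as np

# Made-up medium: a sparse multipath impulse response between source and transducer.
h = np.zeros(200)
h[[5, 40, 90, 150]] = [1.0, 0.6, 0.4, 0.3]

pulse = np.zeros(50)
pulse[0] = 1.0                          # the source emits a short pulse

received = np.convolve(pulse, h)        # what the transducer records
replayed = received[::-1]               # time-reverse the recording ...
refocused = np.convolve(replayed, h)    # ... and send it back through the same medium

# The refocused field is (a shifted copy of) the autocorrelation of h,
# so it shows a dominant peak at the focus instant.
print(int(np.argmax(np.abs(refocused))), round(float(np.max(np.abs(refocused))), 2))

Because the refocused trace is the autocorrelation of the medium's impulse response, its peak at the focus instant dominates the surrounding sidelobes, which is the essence of time-reversal focusing.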
The software stack for these systems includes components such as programming models and query languages, for expressing computation; stream management systems, for distribution and scheduling; and hardware components for acceleration including floating-point units, graphics processing units, and field-programmable gate arrays. The stream processing paradigm simplifies parallel software and hardware by restricting the parallel computation that can be performed. Given a sequence of data (a stream), a series of operations (kernel functions) is applied to each element in the stream. Kernel functions are usually pipelined, and optimal local on-chip memory reuse is attempted, in order to minimize the loss in bandwidth, associated with external memory interaction. Uniform streaming, where one kernel function is applied to all elements in the stream, is typical. Since the kernel and stream abstractions expose data dependencies, compiler tools can fully automate and optimize on-chip management tasks. Stream processing hardware can use scoreboarding, for example, to initiate a direct memory access (DMA) when dependencies become known. The elimination of manual DMA management reduces software complexity, and an associated elimination for hardware cached I/O, reduces the data area expanse that has to be involved with service by specialized computational units such as arithmetic logic units. During the 1980s stream processing was explored within dataflow programming. " https://en.wikipedia.org/wiki/Global%20Census%20of%20Marine%20Life%20on%20Seamounts,"Global Census of Marine Life on Seamounts (commonly CenSeam) is a global scientific initiative, launched in 2005, that is designed to expand the knowledge base of marine life at seamounts. Seamounts are underwater mountains, not necessarily volcanic in origin, which often form subsurface archipelagoes and are found throughout the world's ocean basins, with almost half in the Pacific. There are estimated to be as many as 100,000 seamounts at least one kilometer in height, and more if lower rises are included. However, they have not been explored very much—in fact, only about half of one percent have been sampled—and almost every expedition to a seamount discovers new species and new information. There is evidence that seamounts can host concentrations of biologic diversity, each with its own unique local ecosystem; they seem to affect oceanic currents, resulting among other things in local concentration of plankton which in turn attracts species that graze on it, and indeed are probably a significant overall factor in biogeography of the oceans. They also may serve as way stations in the migration of whales and other pelagic species. Despite being poorly studied, they are heavily targeted by commercial fishing, including dredging. In addition they are of interest to potential seabed mining. The overall goal of CenSeam is ""to determine the role of seamounts in the biogeography, biodiversity, productivity, and evolution of marine organisms, and to evaluate the effects of human exploitation on seamounts."" To this effect, the group organizes and contributes to various research efforts about seamount biodiversity. Specifically, the project aims to act as a standardized scaffold for future studies and samplings, citing inefficiency and incompatibility between individual research efforts in the past. 
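As a toy model of the uniform-streaming idea in the stream-processing entry above (one kernel function applied to every element of a stream), Python generators can stand in for a pipelined stream; the kernels and data are arbitrary examples, and nothing here models the hardware aspects such as DMA or on-chip memory reuse.

def source(n):
    """Produce a stream of input elements."""
    for i in range(n):
        yield float(i)

def kernel(stream, f):
    """Uniform streaming: apply one kernel function f to every element."""
    for x in stream:
        yield f(x)

# Two kernels composed into a pipeline: scale, then offset.
pipeline = kernel(kernel(source(5), lambda x: 2.0 * x), lambda x: x + 1.0)
print(list(pipeline))   # [1.0, 3.0, 5.0, 7.0, 9.0]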
To give a scale of their mission, there are an estimated 100,000 seamounts in the ocean, but only 350 of them have been sampled, and only about 100 sampled thoroughly. Althoug" https://en.wikipedia.org/wiki/Electric%20power%20conversion,"In all fields of electrical engineering, power conversion is the process of converting electric energy from one form to another. A power converter is an electrical or electro-mechanical device for converting electrical energy. A power converter can convert alternating current (AC) into direct current (DC) and vice versa; change the voltage or frequency of the current or do some combination of these. The power converter can be as simple as a transformer or it can be a far more complex system, such as a resonant converter. The term can also refer to a class of electrical machinery that is used to convert one frequency of alternating current into another. Power conversion systems often incorporate redundancy and voltage regulation. Power converters are classified based on the type of power conversion they do. One way of classifying power conversion systems is according to whether the input and output are alternating current or direct current. Finally, the task of all power converters is to ""process and control the flow of electrical energy by supplying voltages and currents in a form that is optimally suited for user loads"". DC power conversion DC to DC The following devices can convert DC to DC: Linear regulator Voltage regulator Motor–generator Rotary converter Switched-mode power supply DC to AC The following devices can convert DC to AC: Power inverter Motor–generator Rotary converter Switched-mode power supply Chopper (electronics) AC power conversion AC to DC The following devices can convert AC to DC: Rectifier Mains power supply unit (PSU) Motor–generator Rotary converter Switched-mode power supply AC to AC The following devices can convert AC to AC: Transformer or autotransformer Voltage converter Voltage regulator Cycloconverter Variable-frequency transformer Motor–generator Rotary converter Switched-mode power supply Other systems There are also devices and methods to convert between power systems designed for single and three-phase operation. Th" https://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon%20interpolation%20formula,"The Whittaker–Shannon interpolation formula or sinc interpolation is a method to construct a continuous-time bandlimited function from a sequence of real numbers. The formula dates back to the works of E. Borel in 1898, and E. T. Whittaker in 1915, and was cited from works of J. M. Whittaker in 1935, and in the formulation of the Nyquist–Shannon sampling theorem by Claude Shannon in 1949. It is also commonly called Shannon's interpolation formula and Whittaker's interpolation formula. E. T. Whittaker, who published it in 1915, called it the Cardinal series. Definition Given a sequence of real numbers, x[n], the continuous function (where ""sinc"" denotes the normalized sinc function) has a Fourier transform, X(f), whose non-zero values are confined to the region |f| ≤ 1/(2T).  When the parameter T has units of seconds, the bandlimit, 1/(2T), has units of cycles/sec (hertz). When the x[n] sequence represents time samples, at interval T, of a continuous function, the quantity fs = 1/T is known as the sample rate, and fs/2 is the corresponding Nyquist frequency. When the sampled function has a bandlimit, B, less than the Nyquist frequency, x(t) is a perfect reconstruction of the original function. (See Sampling theorem.) 
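The Whittaker–Shannon reconstruction referred to above is x(t) = Σ_n x[n] · sinc((t − nT)/T), with sinc the normalized sinc function. A small NumPy sketch of that sum follows; the example signal and sample rate are arbitrary, and only a finite number of samples is used, so the reconstruction is approximate away from the sample window's edges.

import numpy as np

def sinc_interpolate(samples, T, t):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc, sin(pi*x)/(pi*x), as required here.
    return np.sum(samples * np.sinc((t[:, None] - n * T) / T), axis=1)

# Example: a 3 Hz sine sampled at 20 Hz (well above the 6 Hz Nyquist rate).
fs = 20.0
T = 1.0 / fs
n = np.arange(40)                        # 2 seconds of samples
x_n = np.sin(2 * np.pi * 3.0 * n * T)

t = np.linspace(0.5, 1.5, 5)             # interior points, away from edge effects
x_t = sinc_interpolate(x_n, T, t)
err = np.max(np.abs(x_t - np.sin(2 * np.pi * 3.0 * t)))
print(round(float(err), 4))              # small; limited only by the finite sample window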
Otherwise, the frequency components above the Nyquist frequency ""fold"" into the sub-Nyquist region of X(f), resulting in distortion. (See Aliasing.) Equivalent formulation: convolution/lowpass filter The interpolation formula is derived in the Nyquist–Shannon sampling theorem article, which points out that it can also be expressed as the convolution of an infinite impulse train with a sinc function: This is equivalent to filtering the impulse train with an ideal (brick-wall) low-pass filter with gain of 1 (or 0 dB) in the passband. If the sample rate is sufficiently high, this means that the baseband image (the original signal before sampling) is passed unchanged and the other images are removed by the brick-wall filter. Convergence The i" https://en.wikipedia.org/wiki/Prefix%20delegation,"IP networks are divided logically into subnetworks. Computers in the same subnetwork have the same address prefix. For example, in a typical home network with legacy Internet Protocol version 4, the network prefix would be something like 192.168.1.0/24, as expressed in CIDR notation. With IPv4, commonly home networks use private addresses (defined in ) that are non-routable on the public Internet and use address translation to convert to routable addresses when connecting to hosts outside the local network. Business networks typically had manually provisioned subnetwork prefixes. In IPv6 global addresses are used end-to-end, so even home networks may need to distribute public, routable IP addresses to hosts. Since it would not be practical to manually provision networks at scale, in IPv6 networking, DHCPv6 prefix delegation is used to assign a network address prefix and automate configuration and provisioning of the public routable addresses for the network. The way this works for example in the case of a home network is that the home router uses DHCPv6 protocol to request a network prefix from the ISP's DHCPv6 server. Once assigned, the ISP routes this network to the customer's home router and the home router starts advertising the new addresses to hosts on the network, either via SLAAC or using DHCPv6. DHCPv6 Prefix Delegation is supported by most ISPs who provide native IPv6 for consumers on fixed networks. Prefix delegation is generally not supported on cellular networks, for example LTE or 5G. Most cellular networks route a fixed /64 prefix to the subscriber. Personal hotspots may still provide IPv6 access to hosts on the network by using a different technique called Proxy Neighbor Discovery or using the technique described in . One of the reasons why cellular networks may not yet support prefix delegation is that the operators want to use prefixes they can aggregate to a single route. To solve this, defines an optional mechanism and the related DHCPv6 opt" https://en.wikipedia.org/wiki/Multiplicative%20noise,"In signal processing, the term multiplicative noise refers to an unwanted random signal that gets multiplied into some relevant signal during capture, transmission, or other processing. An important example is the speckle noise commonly observed in radar imagery. Examples of multiplicative noise affecting digital photographs are proper shadows due to undulations on the surface of the imaged objects, shadows cast by complex objects like foliage and Venetian blinds, dark spots caused by dust in the lens or image sensor, and variations in the gain of individual elements of the image sensor array." 
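The prefix-delegation entry above describes the ISP handing the home router a short prefix, from which per-link /64 networks are then derived. The arithmetic can be illustrated with Python's standard ipaddress module; the delegated /56 below is from the IPv6 documentation range, not a real assignment.

import ipaddress

# Example: the ISP delegates a /56 (2001:db8::/32 is the documentation range).
delegated = ipaddress.ip_network("2001:db8:abcd:100::/56")

# The home router can carve 2**(64-56) = 256 per-link /64 networks out of it
# and advertise them on its LANs via SLAAC or DHCPv6.
lans = list(delegated.subnets(new_prefix=64))
print(len(lans))          # 256
print(lans[0], lans[1])   # 2001:db8:abcd:100::/64 2001:db8:abcd:101::/64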
https://en.wikipedia.org/wiki/Hardware%20watermarking,"Hardware watermarking, also known as IP core watermarking is the process of embedding covert marks as design attributes inside a hardware or IP core design itself. Hardware Watermarking can represent watermarking of either DSP Cores (widely used in consumer electronics devices) or combinational/sequential circuits. Both forms of Hardware Watermarking are very popular. In DSP Core Watermarking a secret mark is embedded within the logic elements of the DSP Core itself. DSP Core Watermark usually implants this secret mark in the form of a robust signature either in the RTL design or during High Level Synthesis (HLS) design. The watermarking process of a DSP Core leverages on the High Level Synthesis framework and implants a secret mark in one (or more) of the high level synthesis phases such as scheduling, allocation and binding. DSP Core Watermarking is performed to protect a DSP core from hardware threats such as IP piracy, forgery and false claim of ownership. Some examples of DSP cores are FIR filter, IIR filter, FFT, DFT, JPEG, HWT etc. Few of the most important properties of a DSP core watermarking process are as follows: (a) Low embedding cost (b) Secret mark (c) Low creation time (d) Strong tamper tolerance (e) Fault tolerance. Process of hardware watermarking Hardware or IP core watermarking in the context of DSP/Multimedia Cores are significantly different from watermarking of images/digital content. IP Cores are usually complex in size and nature and thus require highly sophisticated mechanisms to implant signatures within their design without disturbing the functionality. Any small change in the functionality of the IP core renders the hardware watermarking process futile. Such is the sensitivity of this process. Hardware Watermarking can be performed in two ways: (a) Single-phase watermarking, (b) Multi-phase watermarking. Single-phase watermarking process As the name suggests, in single-phase watermarking process the secret marks in the form of " https://en.wikipedia.org/wiki/Center%20for%20Advancing%20Electronics%20Dresden,"The Center for Advancing Electronics Dresden (cfaed) of the Technische Universität Dresden is part of the Excellence Initiative of German universities. The cluster of excellence for microelectronics is funded from 2012 to 2017 by the German Research Community (DFG) and unites about 60 Investigators and their teams from 11 institutions to act jointly towards reaching the Cluster's ambitious aims. The coordinator is Prof. Dr.-Ing. Gerhard Fettweis, Chair of Mobile Communication Systems. The cluster brings together the teams from two universities and several research institutes in Saxony: Technische Universität Dresden, Technische Universität Chemnitz, Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Leibniz Institute for Polymer Research Dresden e.V. (IPF), Leibniz Institute for Solid State and Materials Research Dresden (IFW), Max Planck Institute of Molecular Cell Biology and Genetics (MPI-CBG), Max Planck Institute for the Physics of Complex Systems (MPI-PKS), Nanoelectronics Materials Laboratory gGmbH (NaMLab), Fraunhofer Institute for Electronic Nano Systems (Fraunhofer ENAS), Fraunhofer Institute of Ceramic Technologies and Systems (Fraunhofer IKTS) and Kurt Schwabe Institute for Measuring and Sensor Technology Meinsberg e.V. (KSI). 
About 300 scientists from more than 20 different countries are working in nine research paths to investigate completely new technologies for electronic information processing which overcome the limits of today's predominant CMOS technology. Position and institutional building One of the scientific buildings, as well as the organizational headquarters, of the cfaed is situated in Dresden-Plauen, Würzburger Straße 46. In May 2015, construction works for the new cfaed building commenced at the campus of TU Dresden. The building is due for completion in late 2017 and it will host new laboratories, seminar rooms, and offices. History The initial proposal for cfaed as a Cluster of Excellence was submitted to the DFG in August 2011. On July" https://en.wikipedia.org/wiki/Phase%20margin,"In electronic amplifiers, the phase margin (PM) is the difference between the phase lag (< 0) and -180°, for an amplifier's output signal (relative to its input) at zero dB gain - i.e. unity gain, or that the output signal has the same amplitude as the input. . For example, if the amplifier's open-loop gain crosses 0 dB at a frequency where the phase lag is -135°, then the phase margin of this feedback system is -135° -(-180°) = 45°. See Bode plot#Gain margin and phase margin for more details. Theory Typically the open-loop phase lag (relative to input, < 0) varies with frequency, progressively increasing to exceed 180°, at which frequency the output signal becomes inverted, or antiphase in relation to the input. The PM will be positive but decreasing at frequencies less than the frequency at which inversion sets in (at which PM = 0), and PM is negative (PM < 0) at higher frequencies. In the presence of negative feedback, a zero or negative PM at a frequency where the loop gain exceeds unity (1) guarantees instability. Thus positive PM is a ""safety margin"" that ensures proper (non-oscillatory) operation of the circuit. This applies to amplifier circuits as well as more generally, to active filters, under various load conditions (e.g. reactive loads). In its simplest form, involving ideal negative feedback voltage amplifiers with non-reactive feedback, the phase margin is measured at the frequency where the open-loop voltage gain of the amplifier equals the desired closed-loop DC voltage gain. More generally, PM is defined as that of the amplifier and its feedback network combined (the ""loop"", normally opened at the amplifier input), measured at a frequency where the loop gain is unity, and prior to the closing of the loop, through tying the output of the open loop to the input source, in such a way as to subtract from it. In the above loop-gain definition, it is assumed that the amplifier input presents zero load. To make this work for non-zero-load input, " https://en.wikipedia.org/wiki/Fetal%20pig,"Fetal pigs are unborn pigs used in elementary as well as advanced biology classes as objects for dissection. Pigs, as a mammalian species, provide a good specimen for the study of physiological systems and processes due to the similarities between many pig and human organs. Use in biology labs Along with frogs and earthworms, fetal pigs are among the most common animals used in classroom dissection. There are several reasons for this, the main reason being that pigs, like humans, are mammals. Shared traits include common hair, mammary glands, live birth, similar organ systems, metabolic levels, and basic body form. They also allow for the study of fetal circulation, which differs from that of an adult. 
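The phase-margin definition above can be turned into a short numerical recipe: evaluate the open-loop gain along the jω axis, find the frequency where its magnitude crosses 0 dB, and add 180° to the phase there. The loop gain below is a made-up example, L(s) = 2 / (s (1 + s)(1 + s/10)).

import numpy as np

# Hypothetical open-loop gain L(jw) = 2 / (jw (1 + jw)(1 + jw/10)).
w = np.logspace(-2, 3, 20001)
jw = 1j * w
L = 2.0 / (jw * (1 + jw) * (1 + jw / 10.0))

mag_db = 20 * np.log10(np.abs(L))
phase_deg = np.degrees(np.unwrap(np.angle(L)))

# Gain-crossover frequency: where |L| passes through 0 dB (unity gain).
idx = np.argmin(np.abs(mag_db))
phase_margin = 180.0 + phase_deg[idx]
print(round(w[idx], 2), round(phase_margin, 1))   # crossover (rad/s), phase margin (degrees)

For this example the crossover sits near 1.24 rad/s and the phase margin comes out at roughly 32°, i.e. the loop is stable with a modest safety margin.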
Secondly, fetal pigs are easy to obtain because they are by-products of the pork industry. Fetal pigs are the unborn piglets of sows that were killed by the meat-packing industry. These pigs are not bred and killed for this purpose, but are extracted from the deceased sow’s uterus. Fetal pigs not used in classroom dissections are often used in fertilizer or simply discarded. Thirdly, fetal pigs are cheap, which is an essential component for dissection use by schools. They can be ordered for about $30 at biological product companies. Fourthly, fetal pigs are easy to dissect because of their soft tissue and incompletely developed bones that are still made of cartilage. In addition, they are relatively large with well-developed organs that are easily visible. As long as the pork industry exists, fetal pigs will be relatively abundant, making them the prime choice for classroom dissections. Alternatives Several peer-reviewed comparative studies have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection. A systematic review concluded that students taught using non-animal m" https://en.wikipedia.org/wiki/Wet-folding,"Wet-folding is an origami technique developed by Akira Yoshizawa that employs water to dampen the paper so that it can be manipulated more easily. This process adds an element of sculpture to origami, which is otherwise purely geometric. Wet-folding is used very often by professional folders for non-geometric origami, such as animals. Wet-folders usually employ thicker paper than what would usually be used for normal origami, to ensure that the paper does not tear. One of the most prominent users of the wet-folding technique is Éric Joisel, who specialized in origami animals, humans, and legendary creatures. He also created origami masks. Other folders who practice this technique are Robert J. Lang and John Montroll. The process of wet-folding allows a folder to preserve a curved shape more easily. It also reduces the number of wrinkles substantially. Wet-folding allows for increased rigidity and structure due to a process called sizing. Sizing is a water-soluble adhesive, usually methylcellulose or methyl acetate, that may be added during the manufacture of the paper. As the paper dries, the chemical bonds of the fibers of the paper tighten together which results in a crisper and stronger sheet. In order to moisten the paper, an artist typically wipes the sheet with a dampened cloth. The amount of moisture added to the paper is crucial because too little will cause the paper to dry quickly and spring back into its original position before the folding is complete, while too much will either fray the edges of the paper or will cause the paper to split at high-stress points. Notes and references See also Papier-mâché External links Mini-documentary about Joisel at YouTube An illustrated introduction to wet-folding Origami Mathematics and art" https://en.wikipedia.org/wiki/List%20of%20thermodynamic%20properties,"In thermodynamics, a physical property is any property that is measurable, and whose value describes a state of a physical system. Thermodynamic properties are defined as characteristic features of a system, capable of specifying the system's state. Some constants, such as the ideal gas constant, , do not describe the state of a system, and so are not properties. 
On the other hand, some constants, such as (the freezing point depression constant, or cryoscopic constant), depend on the identity of a substance, and so may be considered to describe the state of a system, and therefore may be considered physical properties. ""Specific"" properties are expressed on a per mass basis. If the units were changed from per mass to, for example, per mole, the property would remain as it was (i.e., intensive or extensive). Regarding work and heat Work and heat are not thermodynamic properties, but rather process quantities: flows of energy across a system boundary. Systems do not contain work, but can perform work, and likewise, in formal thermodynamics, systems do not contain heat, but can transfer heat. Informally, however, a difference in the energy of a system that occurs solely because of a difference in its temperature is commonly called heat, and the energy that flows across a boundary as a result of a temperature difference is ""heat"". Altitude (or elevation) is usually not a thermodynamic property. Altitude can help specify the location of a system, but that does not describe the state of the system. An exception would be if the effect of gravity need to be considered in order to describe a state, in which case altitude could indeed be a thermodynamic property. See also Conjugate variables Dimensionless numbers Intensive and extensive properties Thermodynamic databases for pure substances Thermodynamic variable" https://en.wikipedia.org/wiki/Networked%20music%20performance,"A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room. These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes. Participants may be connected by ""high fidelity multichannel audio and video links"" as well as MIDI data connections and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression. Remote audience members and possibly a conductor may also participate. History One of the earliest examples of a networked music performance experiment was the 1951 piece: “Imaginary Landscape No. 4 for Twelve Radios” by composer John Cage. The piece “used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other.” In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music. The 1990s saw several important experiments in networked performance. In 1993, The University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet.The Hub (band), which was formed by original members of The League of Automatic Composers, experimented in 1997 with sending MIDI data over ethernet to distributed locations. However, “ it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities”. 
In 1998, there was a three-way audio-only performan" https://en.wikipedia.org/wiki/Principle%20of%20least%20privilege,"In information security, computer science, and other fields, the principle of least privilege (PoLP), also known as the principle of minimal privilege (PoMP) or the principle of least authority (PoLA), requires that in a particular abstraction layer of a computing environment, every module (such as a process, a user, or a program, depending on the subject) must be able to access only the information and resources that are necessary for its legitimate purpose. Details The principle means giving any users account or processes only those privileges which are essentially vital to perform its intended functions. For example, a user account for the sole purpose of creating backups does not need to install software: hence, it has rights only to run backup and backup-related applications. Any other privileges, such as installing new software, are blocked. The principle applies also to a personal computer user who usually does work in a normal user account, and opens a privileged, password protected account only when the situation absolutely demands it. When applied to users, the terms least user access or least-privileged user account (LUA) are also used, referring to the concept that all user accounts should run with as few privileges as possible, and also launch applications with as few privileges as possible. The principle of (least privilege) is widely recognized as an important design consideration towards enhancing and giving a much needed 'Boost' to the protection of data and functionality from faults (fault tolerance) and malicious behavior. Benefits of the principle include: Intellectual Security. When code is limited in the scope of changes it can make to a system, it is easier to test its possible actions and interactions with other security targeted applications. In practice for example, applications running with restricted rights will not have access to perform operations that could crash a machine, or adversely affect other applications running on the " https://en.wikipedia.org/wiki/Log-spectral%20distance,"The log-spectral distance (LSD), also referred to as log-spectral distortion or root mean square log-spectral distance, is a distance measure between two spectra. The log-spectral distance between spectra and is defined as p-norm: where and are power spectra. Unlike the Itakura–Saito distance, the log-spectral distance is symmetric. In speech coding, log spectral distortion for a given frame is defined as the root mean square difference between the original LPC log power spectrum and the quantized or interpolated LPC log power spectrum. Usually the average of spectral distortion over a large number of frames is calculated and that is used as the measure of performance of quantization or interpolation. Meaning When measuring the distortion between signals, the scale or temporality/spatiality of the signals can have different levels of significance to the distortion measures. To incorporate the proper level of significance, the signals can be transformed into a different domain. When the signals are transformed into the spectral domain with transformation methods such as Fourier transform and DCT, the spectral distance is the measure to compare the transformed signals. LSD incorporates the logarithmic characteristics of the power spectra, and it becomes effective when the processing task of the power spectrum also has logarithmic characteristics, e.g. 
human listening to the sound signal with different levels of loudness. Moreover, LSD is equal to the cepstral distance which is the distance between the signals' cepstrum when the p-numbers are the same by Parseval's theorem. Other Representations As LSD is in the form of p-norm, it can be represented with different p-numbers and log scales. For instance, when it is expressed in dB with L2 norm, it is defined as: . When it is represented in the discrete space, it is defined as: where and are power spectra in discrete space. See also Itakura–Saito distance" https://en.wikipedia.org/wiki/CAN%20bus,"A Controller Area Network (CAN bus) is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other's applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles to save on copper, but it can also be used in many other contexts. For each device, the data in a frame is transmitted serially but in such a way that if more than one device transmits at the same time, the highest priority device can continue while the others back off. Frames are received by all devices, including by the transmitting device. History Development of the CAN bus started in 1983 at Robert Bosch GmbH. The protocol was officially released in 1986 at the Society of Automotive Engineers (SAE) conference in Detroit, Michigan. The first CAN controller chips were introduced by Intel in 1987, and shortly thereafter by Philips. Released in 1991, the Mercedes-Benz W140 was the first production vehicle to feature a CAN-based multiplex wiring system. Bosch published several versions of the CAN specification. The latest is CAN 2.0, published in 1991. This specification has two parts. Part A is for the standard format with an 11-bit identifier, and part B is for the extended format with a 29-bit identifier. A CAN device that uses 11-bit identifiers is commonly called CAN 2.0A, and a CAN device that uses 29-bit identifiers is commonly called CAN 2.0B. These standards are freely available from Bosch along with other specifications and white papers. In 1993, the International Organization for Standardization (ISO) released CAN standard ISO 11898, which was later restructured into two parts: ISO 11898-1 which covers the data link layer, and ISO 11898-2 which covers the CAN physical layer for high-speed CAN. ISO 11898-3 was released later and covers the CAN physical layer for low-speed, fault-tolerant CAN. The physical layer standards ISO 11898-2 and ISO 11898-3 are not part of the Bosch C" https://en.wikipedia.org/wiki/Nova%20classification,"The Nova classification (, 'new classification') is a framework for grouping edible substances based on the extent and purpose of food processing applied to them. Researchers at the University of São Paulo, Brazil, proposed the system in 2009. Nova classifies food into four groups: Unprocessed or minimally processed foods Processed culinary ingredients Processed foods Ultra-processed foods The system has been used worldwide in nutrition and public health research, policy, and guidance as a tool for understanding the health implications of different food products. History The Nova classification grew out of the research of Carlos Augusto Monteiro. Born in 1948 into a family straddling the divide between poverty and relative affluence in Brazil, Monteiro's journey began as the first member of his family to attend university. 
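In its discrete, dB-scaled L2 form, the log-spectral distance described above reduces to the root-mean-square difference of the two log power spectra. A NumPy sketch with two arbitrary example spectra:

import numpy as np

def log_spectral_distance_db(p1, p2):
    """RMS distance (in dB) between two power spectra, discrete L2 form."""
    diff_db = 10.0 * np.log10(p1) - 10.0 * np.log10(p2)
    return np.sqrt(np.mean(diff_db ** 2))

# Example: a flat power spectrum versus a gently tilted one.
f = np.linspace(0, 1, 256)
p_ref = np.ones_like(f)
p_test = 10 ** (-0.3 * f)                # up to a -3 dB tilt across the band
print(round(float(log_spectral_distance_db(p_ref, p_test)), 2))   # about 1.73 dB

Because both spectra enter through their logarithms of a ratio, the measure is symmetric in its two arguments, matching the contrast the entry draws with the Itakura–Saito distance.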
His early research in the late 1970s focused on malnutrition, reflecting the prevailing emphasis in nutrition science of the time. In the mid-1990s, Monteiro observed a significant shift in Brazil's dietary landscape marked by a rise in obesity rates among economically disadvantaged populations, while more affluent areas saw declines. This transformation led him to explore dietary patterns holistically, rather than focusing solely on individual nutrients. Employing statistical methods, Monteiro identified two distinct eating patterns in Brazil: one rooted in traditional foods like rice and beans and another characterized by the consumption of highly processed products. The classification's name is from the title of the original scientific article in which it was published, 'A new classification of foods' (). The idea of applying this as the classification's name is credited to Jean-Claude Moubarac of the Université de Montréal. The name is often styled in capital letters, NOVA, but it is not an acronym. Recent scientific literature leans towards writing the name as Nova, including papers written with Monteiro's involvement. Nova food pr" https://en.wikipedia.org/wiki/Van%20der%20Waerden%20notation,"In theoretical physics, Van der Waerden notation refers to the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. This is standard in twistor theory and supersymmetry. It is named after Bartel Leendert van der Waerden. Dotted indices Undotted indices (chiral indices) Spinors with lower undotted indices have a left-handed chirality, and are called chiral indices. Dotted indices (anti-chiral indices) Spinors with raised dotted indices, plus an overbar on the symbol (not index), are right-handed, and called anti-chiral indices. Without the indices, i.e. ""index free notation"", an overbar is retained on right-handed spinor, since ambiguity arises between chirality when no index is indicated. Hatted indices Indices which have hats are called Dirac indices, and are the set of dotted and undotted, or chiral and anti-chiral, indices. For example, if then a spinor in the chiral basis is represented as where In this notation the Dirac adjoint (also called the Dirac conjugate) is See also Dirac equation Infeld–Van der Waerden symbols Lorentz transformation Pauli equation Ricci calculus Notes" https://en.wikipedia.org/wiki/SUMIT,"Stackable Unified Module Interconnect Technology (SUMIT) is a connector between expansion buses independent of motherboard form factor. Boards featuring SUMIT connectors are usually used in ""stacks"" where one board sits on top of another. It was published by the Small Form Factor Special Interest Group. Details Two identical connectors carry the signals specified by the standard. Commonly referred to as SUMIT A & SUMIT B, designers have the option of designing with either both SUMIT A and B, or just SUMIT A. The signals carried within each connector is as follows: SUMIT A: One PCI-Express x1 lane Four USB 2.0 ExpressCard LPC SPI/uWire SMBus/I2C Bus SUMIT B: One PCI-Express x1 lane One PCI-Express x4 or four more PCI-Express x1 lanes As of August 2009, three board form factors used the SUMIT connectors for embedded applications: ISM or SUMIT-ISM [90mm × 96mm], Pico-ITXe [72mm × 100mm], and Pico-I/O [60mm × 72mm]. See also VMEbus VPX CompactPCI PC/104 Pico-ITXe" https://en.wikipedia.org/wiki/Catalog%20of%20articles%20in%20probability%20theory,"This page lists articles related to probability theory. 
In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to number of random variables involved and the type of the distribution. For example (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics. The list of codes can be found in the table of contents. Core probability: selected topics Probability theory Basic notions (bsc) Random variable Continuous probability distribution / (1:C) Cumulative distribution function / (1:DCR) Discrete probability distribution / (1:D) Independent and identically-distributed random variables / (FS:BDCR) Joint probability distribution / (F:DC) Marginal distribution / (2F:DC) Probability density function / (1:C) Probability distribution / (1:DCRG) Probability distribution function Probability mass function / (1:D) Sample space Instructive examples (paradoxes) (iex) Berkson's paradox / (2:B) Bertrand's box paradox / (F:B) Borel–Kolmogorov paradox / cnd (2:CM) Boy or Girl paradox / (2:B) Exchange paradox / (2:D) Intransitive dice Monty Hall problem / (F:B) Necktie paradox Simpson's paradox Sleeping Beauty problem St. Petersburg paradox / mnt (1:D) Three Prisoners problem Two envelopes problem Moments (mnt) Expected value / (12:DCR) Canonical correlation / (F:R) Carleman's condition / anl (1:R) Central moment / (1:R) Coefficient of variation / (1:R) Correlation / (2:R) Correlation function / (U:R) Covariance / (2F:R) (1:G) Covariance function / (U:R) Covariance matrix / (F:R) Cumulant / (12F:DCR) Factorial moment / (1:R) Factorial moment generating function / anl (1:R) Fano factor Geometric standard deviation / (1:R) Hamburger moment problem / anl (1:R) Hausdorff moment problem / anl (1:R) Isserlis Gaussian moment theorem / Gau Jensen's inequality / (1:DCR" https://en.wikipedia.org/wiki/Noise%20%28signal%20processing%29,"In signal processing, noise is a general term for unwanted (and, in general, unknown) modifications that a signal may suffer during capture, storage, transmission, processing, or conversion. Sometimes the word is also used to mean signals that are random (unpredictable) and carry no useful information; even if they are not interfering with other signals or may have been introduced intentionally, as in comfort noise. Noise reduction, the recovery of the original signal from the noise-corrupted one, is a very common goal in the design of signal processing systems, especially filters. The mathematical limits for noise removal are set by information theory. Types of noise Signal processing noise can be classified by its statistical properties (sometimes called the ""color"" of the noise) and by how it modifies the intended signal: Additive noise, gets added to the intended signal White noise Additive white Gaussian noise Black noise Gaussian noise Pink noise or flicker noise, with 1/f power spectrum Brownian noise, with 1/f2 power spectrum Contaminated Gaussian noise, whose PDF is a linear mixture of Gaussian PDFs Power-law noise Cauchy noise Multiplicative noise, multiplies or modulates the intended signal Quantization error, due to conversion from continuous to discrete values Poisson noise, typical of signals that are rates of discrete events Shot noise, e.g. 
caused by static electricity discharge Transient noise, a short pulse followed by decaying oscillations Burst noise, powerful but only during short intervals Phase noise, random time shifts in a signal Noise in specific kinds of signals Noise may arise in signals of interest to various scientific and technical fields, often with specific features: Noise (audio), such as ""hiss"" or ""hum"", in audio signals Background noise, due to spurious sounds during signal capture Comfort noise, added to voice communications to fill silent gaps Electromagnetically induced noise, audible noise due to el" https://en.wikipedia.org/wiki/Quasi-analog%20signal,"In telecommunication, a quasi-analog signal is a digital signal that has been converted to a form suitable for transmission over a specified analog channel. The specification of the analog channel should include frequency range, bandwidth, signal-to-noise ratio, and envelope delay distortion. When quasi-analog form of signaling is used to convey message traffic over dial-up telephone systems, it is often referred to as voice-data. A modem may be used for the conversion process." https://en.wikipedia.org/wiki/High-%CE%BA%20dielectric,"In the semiconductor industry, the term high-κ dielectric refers to a material with a high dielectric constant (κ, kappa), as compared to silicon dioxide. High-κ dielectrics are used in semiconductor manufacturing processes where they are usually used to replace a silicon dioxide gate dielectric or another dielectric layer of a device. The implementation of high-κ gate dielectrics is one of several strategies developed to allow further miniaturization of microelectronic components, colloquially referred to as extending Moore's Law. Sometimes these materials are called ""high-k"" (pronounced ""high kay""), instead of ""high-κ"" (high kappa). Need for high-κ materials Silicon dioxide () has been used as a gate oxide material for decades. As metal–oxide–semiconductor field-effect transistors (MOSFETs) have decreased in size, the thickness of the silicon dioxide gate dielectric has steadily decreased to increase the gate capacitance (per unit area) and thereby drive current (per device width), raising device performance. As the thickness scales below 2 nm, leakage currents due to tunneling increase drastically, leading to high power consumption and reduced device reliability. Replacing the silicon dioxide gate dielectric with a high-κ material allows increased gate capacitance without the associated leakage effects. First principles The gate oxide in a MOSFET can be modeled as a parallel plate capacitor. Ignoring quantum mechanical and depletion effects from the Si substrate and gate, the capacitance of this parallel plate capacitor is given by where is the capacitor area is the relative dielectric constant of the material (3.9 for silicon dioxide) is the permittivity of free space is the thickness of the capacitor oxide insulator Since leakage limitation constrains further reduction of , an alternative method to increase gate capacitance is to alter κ by replacing silicon dioxide with a high-κ material. 
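The parallel-plate model invoked in the high-κ entry above corresponds to the standard gate-capacitance expression, written here with the variables that entry lists (A the capacitor area, κ the relative dielectric constant, ε0 the permittivity of free space, t the thickness of the oxide insulator):

C = \frac{\kappa \, \varepsilon_0 \, A}{t}

Raising κ therefore allows the same capacitance per unit area at a larger physical thickness t, which is exactly the leakage-versus-capacitance trade-off the entry goes on to discuss.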
In such a scenario, a thicker gate oxide layer might " https://en.wikipedia.org/wiki/Animal%20efficacy%20rule,"The FDA animal efficacy rule (also known as animal rule) applies to development and testing of drugs and biologicals to reduce or prevent serious or life-threatening conditions caused by exposure to lethal or permanently disabling toxic agents (chemical, biological, radiological, or nuclear substances), where human efficacy trials are not feasible or ethical. The animal efficacy rule was finalized by the FDA and authorized by the United States Congress in 2002, following the September 11 attacks and concerns regarding bioterrorism. Summary The FDA can rely on evidence from animal studies to provide substantial evidence of product effectiveness if: There is a reasonably well-understood mechanism for the toxicity of the agent and its amelioration or prevention by the product; The effect is demonstrated in either: More than one animal species expected to react with a response predictive for humans; or One well-characterized animal species model (adequately evaluated for its responsiveness in humans) for predicting the response in humans. The animal study endpoint is clearly related to the desired benefit in humans; and Data or information on the pharmacokinetics and pharmacodynamics of the product or other relevant data or information in animals or humans is sufficiently well understood to allow selection of an effective dose in humans, and it is, therefore, reasonable to expect the effectiveness of the product in animals to be a reliable indicator of its effectiveness in humans. FDA published a Guidance for Industry on the Animal Rule in October 2015." https://en.wikipedia.org/wiki/What%20Is%20Life%3F,"What Is Life? The Physical Aspect of the Living Cell is a 1944 science book written for the lay reader by physicist Erwin Schrödinger. The book was based on a course of public lectures delivered by Schrödinger in February 1943, under the auspices of the Dublin Institute for Advanced Studies, where he was Director of Theoretical Physics, at Trinity College, Dublin. The lectures attracted an audience of about 400, who were warned ""that the subject-matter was a difficult one and that the lectures could not be termed popular, even though the physicist’s most dreaded weapon, mathematical deduction, would hardly be utilized."" Schrödinger's lecture focused on one important question: ""how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?"" In the book, Schrödinger introduced the idea of an ""aperiodic crystal"" that contained genetic information in its configuration of covalent chemical bonds. In the 1950s, this idea stimulated enthusiasm for discovering the chemical basis of genetic inheritance. Although the existence of some form of hereditary information had been hypothesized since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture. In retrospect, Schrödinger's aperiodic crystal can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. In 1953, James D. Watson and Francis Crick jointly proposed the double helix structure of deoxyribonucleic acid (DNA) on the basis of, amongst other theoretical insights, X-ray diffraction experiments conducted by Rosalind Franklin. 
They both credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each independently acknowledged the book as a source of inspiration for their initial researches. Background The book, published i" https://en.wikipedia.org/wiki/Programmable%20logic%20device,"A programmable logic device (PLD) is an electronic component used to build reconfigurable digital circuits. Unlike digital logic constructed using discrete logic gates with fixed functions, a PLD has an undefined function at the time of manufacture. Before the PLD can be used in a circuit it must be programmed to implement the desired function. Compared to fixed logic devices, programmable logic devices simplify the design of complex logic and may offer superior performance. Unlike for microprocessors, programming a PLD changes the connections made between the gates in the device. PLDs can broadly be categorised into, in increasing order of complexity, Simple Programmable Logic Devices (SPLDs), comprising programmable array logic, programmable logic array and generic array logic; Complex Programmable Logic Devices (CPLDs) and Field-Programmable Gate Arrays (FPGAs). History In 1969, Motorola offered the XC157, a mask-programmed gate array with 12 gates and 30 uncommitted input/output pins. In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip flop for memory. TI coined the term Programmable Logic Array (PLA) for this device. In 1971, General Electric Company (GE) was developing a programmable logic device based on the new Programmable Read-Only Memory (PROM) technology. This experimental device improved on IBM's ROAM by allowing multilevel logic. Intel had just introduced the floating-gate UV erasable PROM so the researcher at GE incorporated that technology. The GE device was the first erasable PLD ever developed, predating the Altera EPLD by over a decade. GE obtained several early patents on programmable logic devices. In 1973 National Semiconductor introduced a mask-programmable PLA device (DM7575) with 14 inputs and 8 outputs with no m" https://en.wikipedia.org/wiki/List%20of%20gravitational%20wave%20observations,"This page contains a list of observed/candidate gravitational wave events. Origin and nomenclature Direct observation of gravitational waves, which commenced with the detection of an event by LIGO in 2015, plays a key role in gravitational wave astronomy. LIGO has been involved in all subsequent detections to date, with Virgo joining in August 2017. Joint observation runs of LIGO and VIRGO, designated ""O1, O2, etc."" span many months, with months of maintenance and upgrades in-between designed to increase the instruments sensitivity and range. Within these run periods, the instruments are capable of detecting gravitational waves. The first run, O1, ran from September 12, 2015, to January 19, 2016, and succeeded in its first gravitational wave detection. O2 ran for a greater duration, from November 30, 2016, to August 25, 2017. O3 began on April 1, 2019, which was briefly suspended on September 30, 2019, for maintenance and upgrades, thus O3a. O3b marks resuming of the run and began on November 1, 2019. Due to the COVID-19 pandemic O3 was forced to end prematurely. 
O4 is planned to begin on May 24, 2023; initially planned for March, the project needed more time to stabilize the instruments. The O4 observing run has been extended from one year to 18 months, following plans to make further upgrades for the O5 run. Updated observing plans are published on the official website, containing the latest information on these runs. Gravitational wave events are named starting with the prefix GW, while observations that trigger an event alert but have not (yet) been confirmed are named starting with the prefix S. Six digits then indicate the date of the event, with the two first digits representing the year, the two middle digits the month and two final digits the day of observation. This is similar to the systematic naming for other kinds of astronomical event observations, such as those of gamma-ray bursts. Probable detections that are not confidently identified as gravi" https://en.wikipedia.org/wiki/Eukaryote,"The eukaryotes () constitute the domain of Eukarya, organisms whose cells have a membrane-bound nucleus. All animals, plants, fungi, and many unicellular organisms are eukaryotes. They constitute a major group of life forms alongside the two groups of prokaryotes: the Bacteria and the Archaea. Eukaryotes represent a small minority of the number of organisms, but due to their generally much larger size, their collective global biomass is much larger than that of prokaryotes. The eukaryotes seemingly emerged in the Archaea, within the Asgard archaea. This implies that there are only two domains of life, Bacteria and Archaea, with eukaryotes incorporated among the Archaea. Eukaryotes emerged approximately 2.2 billion years ago, during the Proterozoic eon, likely as flagellated cells. The leading evolutionary theory is they were created by symbiogenesis between an anaerobic Asgard archaean and an aerobic proteobacterium, which formed the mitochondria. A second episode of symbiogenesis with a cyanobacterium created the plants, with chloroplasts. The oldest-known eukaryote fossils, multicellular planktonic organisms belonging to the Gabonionta, were discovered in Gabon in 2023, dating back to 2.1 billion years ago. Eukaryotic cells contain membrane-bound organelles such as the nucleus, the endoplasmic reticulum, and the Golgi apparatus. Eukaryotes may be either unicellular or multicellular. In comparison, prokaryotes are typically unicellular. Unicellular eukaryotes are sometimes called protists. Eukaryotes can reproduce both asexually through mitosis and sexually through meiosis and gamete fusion (fertilization). Diversity Eukaryotes are organisms that range from microscopic single cells, such as picozoans under 3 micrometres across, to animals like the blue whale, weighing up to 190 tonnes and measuring up to long, or plants like the coast redwood, up to tall. Many eukaryotes are unicellular; the informal grouping called protists includes many of these, with some" https://en.wikipedia.org/wiki/List%20of%20functional%20analysis%20topics,"This is a list of functional analysis topics. See also: Glossary of functional analysis. 
Hilbert space Functional analysis, classic results Operator theory Banach space examples Lp space Hardy space Sobolev space Tsirelson space ba space Real and complex algebras Topological vector spaces Amenability Amenable group Von Neumann conjecture Wavelets Quantum theory See also list of mathematical topics in quantum theory Probability Free probability Bernstein's theorem Non-linear Fixed-point theorems in infinite-dimensional spaces History Stefan Banach (1892–1945) Hugo Steinhaus (1887–1972) John von Neumann (1903-1957) Alain Connes (born 1947) Earliest Known Uses of Some of the Words of Mathematics: Calculus & Analysis Earliest Known Uses of Some of the Words of Mathematics: Matrices and Linear Algebra Functional analysis" https://en.wikipedia.org/wiki/Kingdom%20%28biology%29,"In biology, a kingdom is the second highest taxonomic rank, just below domain. Kingdoms are divided into smaller groups called phyla. Traditionally, some textbooks from the United States and Canada used a system of six kingdoms of eukaryotes (Animalia, Plantae, Fungi, Protista, Archaea/Archaebacteria, and Bacteria or Eubacteria), while textbooks in other parts of the world, such as the United Kingdom, Pakistan, Bangladesh, India, Greece, Brazil use five kingdoms only (Animalia, Plantae, Fungi, Protista and Monera). Some recent classifications based on modern cladistics have explicitly abandoned the term kingdom, noting that some traditional kingdoms are not monophyletic, meaning that they do not consist of all the descendants of a common ancestor. The terms flora (for plants), fauna (for animals), and, in the 21st century, funga (for fungi) are also used for life present in a particular region or time. Definition and associated terms When Carl Linnaeus introduced the rank-based system of nomenclature into biology in 1735, the highest rank was given the name ""kingdom"" and was followed by four other main or principal ranks: class, order, genus and species. Later two further main ranks were introduced, making the sequence kingdom, phylum or division, class, order, family, genus and species. In 1990, the rank of domain was introduced above kingdom. Prefixes can be added so subkingdom (subregnum) and infrakingdom (also known as infraregnum) are the two ranks immediately below kingdom. Superkingdom may be considered as an equivalent of domain or empire or as an independent rank between kingdom and domain or subdomain. In some classification systems the additional rank branch (Latin: ramus) can be inserted between subkingdom and infrakingdom, e.g., Protostomia and Deuterostomia in the classification of Cavalier-Smith. History Two kingdoms of life The classification of living things into animals and plants is an ancient one. Aristotle (384–322 BC) classified anima" https://en.wikipedia.org/wiki/Inverter%20%28logic%20gate%29,"In digital logic, an inverter or NOT gate is a logic gate which implements logical negation. It outputs a bit opposite of the bit that is put into it. The bits are typically implemented as two differing voltage levels. Description The NOT gate outputs a zero when given a one, and a one when given a zero. Hence, it inverts its inputs. Colloquially, this inversion of bits is called ""flipping"" bits. As with all binary logic gates, other pairs of symbols such as true and false, or high and low may be used in lieu of one and zero. It is equivalent to the logical negation operator (¬) in mathematical logic. 
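A minimal sketch of the inverter behaviour described above: a single-bit negation, and the ones' complement obtained by applying it to every bit of a word. The 8-bit width is an arbitrary choice for illustration.

```python
# Minimal sketch of the NOT gate described above: the output is the opposite
# bit, and applying it to every bit of a word yields the ones' complement.
def not_gate(bit: int) -> int:
    """Logical negation of a single bit (truth table: 0 -> 1, 1 -> 0)."""
    return bit ^ 1

def ones_complement(value: int, width: int = 8) -> int:
    """Invert every bit of an (assumed) fixed-width unsigned word."""
    return value ^ ((1 << width) - 1)

# Truth table of the inverter
for b in (0, 1):
    print(f"NOT {b} = {not_gate(b)}")

print(f"ones' complement of 0b00001111 = {ones_complement(0b00001111):#010b}")
# -> 0b11110000
```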
Because it has only one input, it is a unary operation and has the simplest type of truth table. It is also called the complement gate because it produces the ones' complement of a binary number, swapping 0s and 1s. The NOT gate is one of three basic logic gates from which any Boolean circuit may be built up. Together with the AND gate and the OR gate, any function in binary mathematics may be implemented. All other logic gates may be made from these three. The terms ""programmable inverter"" or ""controlled inverter"" do not refer to this gate; instead, these terms refer to the XOR gate because it can conditionally function like a NOT gate. Symbols The traditional symbol for an inverter circuit is a triangle touching a small circle or ""bubble"". Input and output lines are attached to the symbol; the bubble is typically attached to the output line. To symbolize active-low input, sometimes the bubble is instead placed on the input line. Sometimes only the circle portion of the symbol is used, and it is attached to the input or output of another gate; the symbols for NAND and NOR are formed in this way. A bar or overline ( ‾ ) above a variable can denote negation (or inversion or complement) performed by a NOT gate. A slash (/) before the variable is also used. Electronic implementation An inverter circuit outputs a voltage representing the opposite logic-level to" https://en.wikipedia.org/wiki/Iron%20oxide%20nanoparticle,"Iron oxide nanoparticles are iron oxide particles with diameters between about 1 and 100 nanometers. The two main forms are composed of magnetite () and its oxidized form maghemite (γ-). They have attracted extensive interest due to their superparamagnetic properties and their potential applications in many fields (although cobalt and nickel are also highly magnetic materials, they are toxic and easily oxidized) including molecular imaging. Applications of iron oxide nanoparticles include terabit magnetic storage devices, catalysis, sensors, superparamagnetic relaxometry, high-sensitivity biomolecular magnetic resonance imaging, magnetic particle imaging, magnetic fluid hyperthermia, separation of biomolecules, and targeted drug and gene delivery for medical diagnosis and therapeutics. These applications require coating of the nanoparticles by agents such as long-chain fatty acids, alkyl-substituted amines, and diols. They have been used in formulations for supplementation. Structure Magnetite has an inverse spinel structure with oxygen forming a face-centered cubic crystal system. In magnetite, all tetrahedral sites are occupied by and octahedral sites are occupied by both and . Maghemite differs from magnetite in that all or most of the iron is in the trivalent state () and by the presence of cation vacancies in the octahedral sites. Maghemite has a cubic unit cell in which each cell contains 32 oxygen ions, 21 ions and 2 vacancies. The cations are distributed randomly over the 8 tetrahedral and 16 octahedral sites. Magnetic properties Due to its 4 unpaired electrons in 3d shell, an iron atom has a strong magnetic moment. Ions have also 4 unpaired electrons in 3d shell and have 5 unpaired electrons in 3d shell. Therefore, when crystals are formed from iron atoms or ions and they can be in ferromagnetic, antiferromagnetic, or ferrimagnetic states. 
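A rough way to quantify the moments mentioned above is the standard spin-only approximation, mu_eff = sqrt(n(n+2)) Bohr magnetons, which is not given in the excerpt; the usual assignments of four unpaired 3d electrons to Fe2+ and five to Fe3+ are assumed here.

```python
# Spin-only estimate (standard approximation, not from the excerpt) of the
# magnetic moment implied by the unpaired 3d electron counts mentioned above.
# mu_eff = sqrt(n * (n + 2)) Bohr magnetons, n = number of unpaired electrons.
from math import sqrt

def spin_only_moment(n_unpaired: int) -> float:
    """Spin-only effective magnetic moment in Bohr magnetons."""
    return sqrt(n_unpaired * (n_unpaired + 2))

for ion, n in [("Fe atom (3d6 4s2)", 4), ("Fe2+ (3d6)", 4), ("Fe3+ (3d5)", 5)]:
    print(f"{ion}: n = {n}, mu_eff ~ {spin_only_moment(n):.2f} Bohr magnetons")
# Fe2+ -> ~4.90, Fe3+ -> ~5.92 (orbital contributions are neglected)
```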
In the paramagnetic state, the individual atomic magnetic moments are randomly oriented, and the substance" https://en.wikipedia.org/wiki/Index%20of%20combinatorics%20articles," A Abstract simplicial complex Addition chain Scholz conjecture Algebraic combinatorics Alternating sign matrix Almost disjoint sets Antichain Arrangement of hyperplanes Assignment problem Quadratic assignment problem Audioactive decay B Barcode Matrix code QR Code Universal Product Code Bell polynomials Bertrand's ballot theorem Binary matrix Binomial theorem Block design Balanced incomplete block design(BIBD) Symmetric balanced incomplete block design (SBIBD) Partially balanced incomplete block designs (PBIBDs) Block walking Boolean satisfiability problem 2-satisfiability 3-satisfiability Bracelet (combinatorics) Bruck–Chowla–Ryser theorem C Catalan number Cellular automaton Collatz conjecture Combination Combinatorial design Combinatorial number system Combinatorial optimization Combinatorial search Constraint satisfaction problem Conway's Game of Life Cycles and fixed points Cyclic order Cyclic permutation Cyclotomic identity D Data integrity Alternating bit protocol Checksum Cyclic redundancy check Luhn formula Error detection Error-detecting code Error-detecting system Message digest Redundancy check Summation check De Bruijn sequence Deadlock Delannoy number Dining philosophers problem Mutual exclusion Rendezvous problem Derangement Dickson's lemma Dinitz conjecture Discrete optimization Dobinski's formula E Eight queens puzzle Entropy coding Enumeration Algebraic enumeration Combinatorial enumeration Burnside's lemma Erdős–Ko–Rado theorem Euler number F Faà di Bruno's formula Factorial number system Family of sets Faulhaber's formula Fifteen puzzle Finite geometry Finite intersection property G Game theory Combinatorial game theory Combinatorial game theory (history) Combinatorial game theory (pedagogy) Star (game theory) Zero game, fuzzy game Dots and Boxes Impartial game Digital sum Nim Nimber Sprague–Grundy theorem Partizan game Solved board games Col ga" https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance,"Nuclear magnetic resonance (NMR) is a physical phenomenon in which nuclei in a strong constant magnetic field are perturbed by a weak oscillating magnetic field (in the near field) and respond by producing an electromagnetic signal with a frequency characteristic of the magnetic field at the nucleus. This process occurs near resonance, when the oscillation frequency matches the intrinsic frequency of the nuclei, which depends on the strength of the static magnetic field, the chemical environment, and the magnetic properties of the isotope involved; in practical applications with static magnetic fields up to ca. 20 tesla, the frequency is similar to VHF and UHF television broadcasts (60–1000 MHz). NMR results from specific magnetic properties of certain atomic nuclei. Nuclear magnetic resonance spectroscopy is widely used to determine the structure of organic molecules in solution and study molecular physics and crystals as well as non-crystalline materials. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). The most commonly used nuclei are and , although isotopes of many other elements, such as , , and, can be studied by high-field NMR spectroscopy as well. In order to interact with the magnetic field in the spectrometer, the nucleus must have an intrinsic nuclear magnetic moment and angular momentum. 
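A quick check, using the standard Larmor relation f = γB/(2π) (not spelled out in the excerpt), that proton resonance frequencies at fields up to about 20 tesla fall in the 60–1000 MHz range stated above; the value γ/(2π) ≈ 42.58 MHz/T for 1H is the commonly quoted figure and is assumed here.

```python
# Standard Larmor relation (assumed, not quoted in the excerpt): f = gamma*B/(2*pi).
# For 1H, gamma/(2*pi) ~ 42.577 MHz/T, so fields of a few tesla up to ~20 T give
# frequencies in the stated VHF/UHF range.
GAMMA_BAR_1H_MHZ_PER_T = 42.577  # gyromagnetic ratio of 1H divided by 2*pi

def proton_larmor_mhz(b_tesla: float) -> float:
    """Proton resonance frequency in MHz for a static field of b_tesla."""
    return GAMMA_BAR_1H_MHZ_PER_T * b_tesla

for b in (1.5, 7.0, 20.0):   # clinical MRI field, typical spectrometer, ~upper end
    print(f"B0 = {b:5.1f} T  ->  f ~ {proton_larmor_mhz(b):7.1f} MHz")
# 1.5 T -> ~64 MHz, 7 T -> ~298 MHz, 20 T -> ~852 MHz
```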
This occurs when an isotope has a nonzero nuclear spin, meaning an odd number of protons and/or neutrons (see Isotope). Nuclides with even numbers of both have a total spin of zero and are therefore NMR-inactive. A key feature of NMR is that the resonant frequency of a particular sample substance is usually directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonance frequencies of the sample's nuclei depend on where in the field they are located. Since the resolutio" https://en.wikipedia.org/wiki/Unix%20architecture,"A Unix architecture is a computer operating system system architecture that embodies the Unix philosophy. It may adhere to standards such as the Single UNIX Specification (SUS) or similar POSIX IEEE standard. No single published standard describes all Unix architecture computer operating systems — this is in part a legacy of the Unix wars. Description There are many systems which are Unix-like in their architecture. Notable among these are the Linux distributions. The distinctions between Unix and Unix-like systems have been the subject of heated legal battles, and the holders of the UNIX brand, The Open Group, object to ""Unix-like"" and similar terms. For distinctions between SUS branded UNIX architectures and other similar architectures, see Unix-like. Kernel A Unix kernel — the core or key components of the operating system — consists of many kernel subsystems like process management, scheduling, file management, device management, network management, memory management, and dealing with interrupts from hardware devices. Each of the subsystems has some features: Concurrency: As Unix is a multiprocessing OS, many processes run concurrently to improve the performance of the system. Virtual memory (VM): Memory management subsystem implements the virtual memory concept and users need not worry about the executable program size and the RAM size. Paging: It is a technique to minimize the internal as well as the external fragmentation in the physical memory. Virtual file system (VFS): A VFS is a file system used to help the user to hide the different file systems complexities. A user can use the same standard file system related calls to access different file systems. The kernel provides these and other basic services: interrupt and trap handling, separation between user and system space, system calls, scheduling, timer and clock handling, file descriptor management. Features Some key features of the Unix architecture concept are: Unix systems use a central" https://en.wikipedia.org/wiki/List%20of%20fractals%20by%20Hausdorff%20dimension,"According to Benoit Mandelbrot, ""A fractal is by definition a set for which the Hausdorff-Besicovitch dimension strictly exceeds the topological dimension."" Presented here is a list of fractals, ordered by increasing Hausdorff dimension, to illustrate what it means for a fractal to have a low or a high dimension. 
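For exactly self-similar sets satisfying the open set condition, the Hausdorff dimension coincides with the similarity dimension log N / log(1/r), where the set consists of N copies scaled by the ratio r; this standard fact, and the example fractals below, are assumed for illustration rather than taken from the list itself.

```python
# Similarity dimension log(N)/log(1/r) of a few classic self-similar fractals,
# chosen here for illustration (they are not reproduced from the list above).
from math import log

def similarity_dimension(n_copies: int, scale: float) -> float:
    """Similarity dimension of a set made of n_copies pieces, each scaled by `scale`."""
    return log(n_copies) / log(1.0 / scale)

examples = {
    "Cantor set":          (2, 1 / 3),  # ~0.631
    "Koch curve":          (4, 1 / 3),  # ~1.262
    "Sierpinski triangle": (3, 1 / 2),  # ~1.585
    "Filled square":       (4, 1 / 2),  # sanity check: dimension 2
}
for name, (n, r) in examples.items():
    print(f"{name:20s} dim = {similarity_dimension(n, r):.3f}")
```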
Deterministic fractals Random and natural fractals See also Fractal dimension Hausdorff dimension Scale invariance Notes and references Further reading External links The fractals on Mathworld Other fractals on Paul Bourke's website Soler's Gallery Fractals on mathcurve.com 1000fractales.free.fr - Project gathering fractals created with various software Fractals unleashed IFStile - software that computes the dimension of the boundary of self-affine tiles Hausdorff Dimension Hausdorff Dimension Mathematics-related lists" https://en.wikipedia.org/wiki/Species%20description,"A species description is a formal scientific description of a newly encountered species, usually in the form of a scientific paper. Its purpose is to give a clear description of a new species of organism and explain how it differs from species that have been described previously or are related. To be considered valid, a species description must follow guidelines established over time. Naming requires adherence to respective codes, for example: in zoology, the International Code of Zoological Nomenclature (ICZN); plants, the International Code of Nomenclature for algae, fungi, and plants (ICN); viruses, the International Committee on Taxonomy of Viruses (ICTV). The species description often contains photographs or other illustrations of type material along with a note on where they are deposited. The publication in which the species is described gives the new species a formal scientific name. Some 1.9 million species have been identified and described, out of some 8.7 million that may actually exist. Millions more have become extinct throughout the existence of life on Earth. Naming process A name of a new species becomes valid (available in zoological terminology) with the date of publication of its formal scientific description. Once the scientist has performed the necessary research to determine that the discovered organism represents a new species, the scientific results are summarized in a scientific manuscript, either as part of a book or as a paper to be submitted to a scientific journal. A scientific species description must fulfill several formal criteria specified by the nomenclature codes, e.g. selection of at least one type specimen. These criteria are intended to ensure that the species name is clear and unambiguous, for example, the International Code of Zoological Nomenclature states that ""Authors should exercise reasonable care and consideration in forming new names to ensure that they are chosen with their subsequent users in mind and that, as f" https://en.wikipedia.org/wiki/The%20Seven%20Pillars%20of%20Life,"The Seven Pillars of Life are the essential principles of life described by Daniel E. Koshland in 2002 in order to create a universal definition of life. One stated goal of this universal definition is to aid in understanding and identifying artificial and extraterrestrial life. The seven pillars are Program, Improvisation, Compartmentalization, Energy, Regeneration, Adaptability, and Seclusion. These can be abbreviated as PICERAS. The Seven Pillars Program Koshland defines ""Program"" as an ""organized plan that describes both the ingredients themselves and the kinetics of the interactions among ingredients as the living system persists through time."" In natural life as it is known on Earth, the program operates through the mechanisms of nucleic acids and amino acids, but the concept of program can apply to other imagined or undiscovered mechanisms. 
Improvisation ""Improvisation"" refers to the living system's ability to change its program in response to the larger environment in which it exists. An example of improvisation on earth is natural selection. Compartmentalization ""Compartmentalization"" refers to the separation of spaces in the living system that allow for separate environments for necessary chemical processes. Compartmentalization is necessary to protect the concentration of the ingredients for a reaction from outside environments. Energy Because living systems involve net movement in terms of chemical movement or body movement, and lose energy in those movements through entropy, energy is required for a living system to exist. The main source of energy on Earth is the sun, but other sources of energy exist for life on Earth, such as hydrogen gas or methane, used in chemosynthesis. Regeneration ""Regeneration"" in a living system refers to the general compensation for losses and degradation in the various components and processes in the system. This covers the thermodynamic loss in chemical reactions, the wear and tear of larger parts, and the large" https://en.wikipedia.org/wiki/Mini-STX,"Mini-STX (mSTX, Mini Socket Technology EXtended, originally ""Intel 5x5"") is a computer motherboard form factor that was released by Intel in 2015 (as ""Intel 5x5""). These motherboards measure 147mm by 140mm (5.8"" x 5.5""), making them larger than ""4x4"" NUC (102x102mm / 4.01"" x 4.01"" inches) and Nano-ITX (120x120mm / 4.7"" x 4.7"") boards, but notably smaller than the more common Mini-ITX (170x170mm / 6.7"" x 6.7"") boards. Unlike these standards, which use a square shape, the Mini-STX form factor is 7mm longer from front-to-rear, making it slightly rectangular. Mini-STX design elements The Mini-STX design suggests (but does not require) support for: Socketed processors (e.g. LGA or PGA CPUs) Onboard power regulation circuitry, enabling direct DC power input IO ports embedded on the front and rear of the motherboard (akin to NUC, but unlike typical motherboards which often use headers instead to connect built-in ports on enclosures) Adoption by manufacturers This motherboard form factor is still not in particularly common use with consumer-PC manufacturers, although there are a few offerings: ASRock offers both DeskMini kits (that use mini-STX boards) and standalone motherboards, Asus offer VivoMini kits (that use mini-STX boards) and standalone motherboards, Gigabyte offers a few motherboards, and industrial PC suppliers (e.g. Kontron, Iesy, ASRock Industrial) also provide some options for mini-STX equipment. Derivatives ASRock developed a derivative of mini-STX, dubbed micro-STX, for their 'DeskMini GTX/RX' small form-factor PCs and industrial motherboards. Micro-STX adds an MXM slot which allows the use of special PCI Express expansion cards, including graphics or machine learning accelerators, but increases the width of the board to be extended two inches, resulting in measurements of 147 x 188 mm (5.8"" x 7.4"")" https://en.wikipedia.org/wiki/List%20of%20computability%20and%20complexity%20topics,"This is a list of computability and complexity topics, by Wikipedia page. Computability theory is the part of the theory of computation that deals with what can be computed, in principle. 
Computational complexity theory deals with how hard computations are, in quantitative terms, both with upper bounds (algorithms whose complexity in the worst cases, as use of computing resources, can be estimated), and from below (proofs that no procedure to carry out some task can be very fast). For more abstract foundational matters, see the list of mathematical logic topics. See also list of algorithms, list of algorithm general topics. Calculation Lookup table Mathematical table Multiplication table Generating trigonometric tables History of computers Multiplication algorithm Peasant multiplication Division by two Exponentiating by squaring Addition chain Scholz conjecture Presburger arithmetic Computability theory: models of computation Arithmetic circuits Algorithm Procedure, recursion Finite state automaton Mealy machine Minsky register machine Moore machine State diagram State transition system Deterministic finite automaton Nondeterministic finite automaton Generalized nondeterministic finite automaton Regular language Pumping lemma Myhill-Nerode theorem Regular expression Regular grammar Prefix grammar Tree automaton Pushdown automaton Context-free grammar Büchi automaton Chomsky hierarchy Context-sensitive language, context-sensitive grammar Recursively enumerable language Register machine Stack machine Petri net Post machine Rewriting Markov algorithm Term rewriting String rewriting system L-system Knuth–Bendix completion algorithm Star height Star height problem Generalized star height problem Cellular automaton Rule 110 cellular automaton Conway's Game of Life Langton's ant Edge of chaos Turing machine Deterministic Turing machine Non-deterministic Turing machine Alternating automaton Alternating Turing machine Turing-complete Turing tarpit Oracle machine Lambda" https://en.wikipedia.org/wiki/Retinalophototroph,"A retinalophototroph is one of two different types of phototrophs, and are named for retinal-binding proteins (microbial rhodopsins) they utilize for cell signaling and converting light into energy. Like all photoautotrophs, retinalophototrophs absorb photons to initiate their cellular processes. However, unlike all photoautotrophs, retinalophototrophs do not use chlorophyll or an electron transport chain to power their chemical reactions. This means retinalophototrophs are incapable of traditional carbon fixation, a fundamental photosynthetic process that transforms inorganic carbon (carbon contained in molecular compounds like carbon dioxide) into organic compounds. For this reason, experts consider them to be less efficient than their chlorophyll-using counterparts, chlorophototrophs. Energy conversion Retinalophototrophs achieve adequate energy conversion via a proton-motive force. In retinalophototrophs, proton-motive force is generated from rhodopsin-like proteins, primarily bacteriorhodopsin and proteorhodopsin, acting as proton pumps along a cellular membrane. To capture photons needed for activating a protein pump, retinalophototrophs employ organic pigments known as carotenoids, namely beta-carotenoids. Beta-carotenoids present in retinalophototrophs are unusual candidates for energy conversion, but they possess high Vitamin-A activity necessary for retinaldehyde, or retinal, formation. Retinal, a chromophore molecule configured from Vitamin A, is formed when bonds between carotenoids are disrupted in a process called cleavage. 
Due to its acute light sensitivity, retinal is ideal for activation of proton-motive force and imparts a unique purple coloration to retinalophototrophs. Once retinal absorbs enough light, it isomerizes, thereby forcing a conformational (i.e., structural) change among the covalent bonds of the rhodopsin-like proteins. Upon activation, these proteins mimic a gateway, allowing passage of ions to create an electrochemical gradient b" https://en.wikipedia.org/wiki/Thermal%20death%20time,"Thermal death time is how long it takes to kill a specific bacterium at a specific temperature. It was originally developed for food canning and has found applications in cosmetics, producing salmonella-free feeds for animals (e.g. poultry) and pharmaceuticals. History In 1895, William Lyman Underwood of the Underwood Canning Company, a food company founded in 1822 at Boston, Massachusetts and later relocated to Watertown, Massachusetts, approached William Thompson Sedgwick, chair of the biology department at the Massachusetts Institute of Technology, about losses his company was suffering due to swollen and burst cans despite the newest retort technology available. Sedgwick gave his assistant, Samuel Cate Prescott, a detailed assignment on what needed to be done. Prescott and Underwood worked on the problem every afternoon from late 1895 to late 1896, focusing on canned clams. They first discovered that the clams contained heat-resistant bacterial spores that were able to survive the processing; then that these spores' presence depended on the clams' living environment; and finally that these spores would be killed if processed at 250 ˚F (121 ˚C) for ten minutes in a retort. These studies prompted the similar research of canned lobster, sardines, peas, tomatoes, corn, and spinach. Prescott and Underwood's work was first published in late 1896, with further papers appearing from 1897 to 1926. This research, though important to the growth of food technology, was never patented. It would pave the way for thermal death time research that was pioneered by Bigelow and C. Olin Ball from 1921 to 1936 at the National Canners Association (NCA). Bigelow and Ball's research focused on the thermal death time of Clostridium botulinum (C. botulinum) that was determined in the early 1920s. Research continued with inoculated canning pack studies that were published by the NCA in 1968. Mathematical formulas Thermal death time can be determined one of two ways: 1) by using graphs" https://en.wikipedia.org/wiki/Laboratory%20automation,"Laboratory automation is a multi-disciplinary strategy to research, develop, optimize and capitalize on technologies in the laboratory that enable new and improved processes. Laboratory automation professionals are academic, commercial and government researchers, scientists and engineers who conduct research and develop new technologies to increase productivity, elevate experimental data quality, reduce lab process cycle times, or enable experimentation that otherwise would be impossible. The most widely known application of laboratory automation technology is laboratory robotics. More generally, the field of laboratory automation comprises many different automated laboratory instruments, devices (the most common being autosamplers), software algorithms, and methodologies used to enable, expedite and increase the efficiency and effectiveness of scientific research in laboratories. The application of technology in today's laboratories is required to achieve timely progress and remain competitive. 
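Returning to the thermal death time excerpt above: a common way to compare an arbitrary heating profile against the classic 121 °C-for-ten-minutes benchmark is the F-value integration with reference temperature 121.1 °C and z ≈ 10 °C for C. botulinum spores. These conventions are standard in thermal processing and are assumed here; they are not taken from the excerpt.

```python
# Standard F-value lethality integration (assumed conventions, not from the excerpt):
# F = sum(10**((T - T_ref)/z) * dt), with T_ref = 121.1 C and z ~ 10 C for
# C. botulinum spores. F is the equivalent time at T_ref delivered by the process.
def f_value(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Equivalent minutes at t_ref for a temperature history sampled every dt_min minutes."""
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

# Toy temperature history: ramp up, hold at 121.1 C for 10 minutes, cool down.
profile = ([100 + 2 * i for i in range(11)]          # ramp, 1-minute samples
           + [121.1] * 10                            # hold
           + [121.1 - 3 * i for i in range(1, 10)])  # cool
print(f"Accumulated lethality F ~ {f_value(profile, 1.0):.1f} equivalent minutes at 121.1 C")
```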
Laboratories devoted to activities such as high-throughput screening, combinatorial chemistry, automated clinical and analytical testing, diagnostics, large-scale biorepositories, and many others, would not exist without advancements in laboratory automation. Some universities offer entire programs that focus on lab technologies. For example, Indiana University-Purdue University at Indianapolis offers a graduate program devoted to Laboratory Informatics. Also, the Keck Graduate Institute in California offers a graduate degree with an emphasis on development of assays, instrumentation and data analysis tools required for clinical diagnostics, high-throughput screening, genotyping, microarray technologies, proteomics, imaging and other applications. History At least since 1875 there have been reports of automated devices for scientific investigation. These first devices were mostly built by scientists themselves in order to solve problems in the laboratory. After the s" https://en.wikipedia.org/wiki/Inequality%20%28mathematics%29,"In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. There are several different notations used to represent different kinds of inequalities: The notation a < b means that a is less than b. The notation a > b means that a is greater than b. In either case, a is not equal to b. These relations are known as strict inequalities, meaning that a is strictly less than or strictly greater than b. Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict: The notation a ≤ b or a ⩽ b means that a is less than or equal to b (or, equivalently, at most b, or not greater than b). The notation a ≥ b or a ⩾ b means that a is greater than or equal to b (or, equivalently, at least b, or not less than b). The relation not greater than can also be represented by a ≯ b, the symbol for ""greater than"" bisected by a slash, ""not"". The same is true for not less than and a ≮ b. The notation a ≠ b means that a is not equal to b; this inequation sometimes is considered a form of strict inequality. It does not say that one is greater than the other; it does not even require a and b to be members of an ordered set. In engineering sciences, less formal use of the notation is to state that one quantity is ""much greater"" than another, normally by several orders of magnitude. The notation a ≪ b means that a is much less than b. The notation a ≫ b means that a is much greater than b. This implies that the lesser value can be neglected with little effect on the accuracy of an approximation (such as the case of the ultrarelativistic limit in physics). In all of the cases above, any two symbols mirroring each other are symmetrical; a < b and b > a are equivalent, etc. Properties on the number line Inequalities are governed by the following properties. All of these properties" https://en.wikipedia.org/wiki/Richards%27%20theorem,"Richards' theorem is a mathematical result due to Paul I. Richards in 1947. The theorem states that for R(s) = (kZ(s) − sZ(k)) / (kZ(k) − sZ(s)), if Z(s) is a positive-real function (PRF) then R(s) is a PRF for all real, positive values of k. The theorem has applications in electrical network synthesis. The PRF property of an impedance function determines whether or not a passive network can be realised having that impedance. Richards' theorem led to a new method of realising such networks in the 1940s.
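Assuming the standard form of the Richards transformation, R(s) = (kZ(s) − sZ(k)) / (kZ(k) − sZ(s)), the theorem can be spot-checked numerically: for a positive-real Z, the real part of R should be non-negative throughout the right half-plane. The test function Z(s) = (s + 1)/(s + 2) is an arbitrary simple PRF chosen for illustration.

```python
# Numerical spot-check of Richards' theorem, assuming the standard form of the
# Richards transformation R(s) = (k*Z(s) - s*Z(k)) / (k*Z(k) - s*Z(s)).
# Z(s) = (s + 1)/(s + 2) is a simple positive-real function used as a test case.
import numpy as np

def Z(s):
    return (s + 1) / (s + 2)

def richards(Z, k, s):
    return (k * Z(s) - s * Z(k)) / (k * Z(k) - s * Z(s))

rng = np.random.default_rng(0)
sigma = rng.uniform(1e-3, 10, 20000)   # right-half-plane real parts
omega = rng.uniform(-10, 10, 20000)
s = sigma + 1j * omega

for k in (0.5, 1.0, 3.0):
    r = richards(Z, k, s)
    # A PRF must have Re R(s) >= 0 everywhere in the right half-plane.
    print(f"k = {k}: min Re R(s) over samples = {r.real.min():.3e}")
```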
Proof R(s) = (kZ(s) − sZ(k)) / (kZ(k) − sZ(s)), where Z(s) is a PRF, k is a positive real constant, and s is the complex frequency variable, can be written as, R(s) = (1 − W(s)) / (1 + W(s)), where, W(s) = ((s + k)(Z(s) − Z(k))) / ((s − k)(Z(s) + Z(k))). Since Z(s) is PRF then Z(s) + Z(k) is also PRF. The zeroes of this function are the poles of W(s). Since a PRF can have no zeroes in the right-half s-plane, then W(s) can have no poles in the right-half s-plane and hence is analytic in the right-half s-plane. Let s = iω. Then the magnitude of W(iω) is given by, |W(iω)| = |(iω + k) / (iω − k)| · |(Z(iω) − Z(k)) / (Z(iω) + Z(k))| = |(Z(iω) − Z(k)) / (Z(iω) + Z(k))|. Since the PRF condition requires that Re(Z(iω)) ≥ 0 for all ω, then |W(iω)| ≤ 1 for all ω. The maximum magnitude of W(s) occurs on the iω axis because W(s) is analytic in the right-half s-plane. Thus |W(s)| ≤ 1 for Re(s) ≥ 0. Let W(s) = u + iv, then the real part of R(s) is given by, Re(R(s)) = (1 − u² − v²) / ((1 + u)² + v²). Because u² + v² ≤ 1 for Re(s) ≥ 0, then Re(R(s)) ≥ 0 for Re(s) ≥ 0 and consequently R(s) must be a PRF. Richards' theorem can also be derived from Schwarz's lemma. Uses The theorem was introduced by Paul I. Richards as part of his investigation into the properties of PRFs. The term PRF was coined by Otto Brune who proved that the PRF property was a necessary and sufficient condition for a function to be realisable as a passive electrical network, an important result in network synthesis. Richards gave the theorem in his 1947 paper in the reduced form, that is, the special case where k = 1. The theorem (with the more general case of k being able to take on any value) formed the basis of the network synthesis technique presented by Raoul Bott and Richard Duffin in 1949. In the Bott-Duffin synthesis, Z(s) represents the electrical network to be synthesised and R(s) is another (unknown) network incorporated within it (R(s) is unitless, but Z(s) has units of impe" https://en.wikipedia.org/wiki/Dataflow%20architecture,"Dataflow architecture is a dataflow-based computer architecture that directly contrasts the traditional von Neumann architecture or control flow architecture. Dataflow architectures have no program counter, in concept: the executability and execution of instructions is solely determined based on the availability of input arguments to the instructions, so that the order of instruction execution may be hard to predict. Although no commercially successful general-purpose computer hardware has used a dataflow architecture, it has been successfully implemented in specialized hardware such as in digital signal processing, network routing, graphics processing, telemetry, and more recently in data warehousing, and artificial intelligence (as: polymorphic dataflow Convolution Engine, structure-driven, dataflow scheduling). It is also very relevant in many software architectures today including database engine designs and parallel computing frameworks. Synchronous dataflow architectures tune to match the workload presented by real-time data path applications such as wire speed packet forwarding. Dataflow architectures that are deterministic in nature enable programmers to manage complex tasks such as processor load balancing, synchronization and accesses to common resources. Meanwhile, there is a clash of terminology, since the term dataflow is used for a subarea of parallel programming: for dataflow programming. History Hardware architectures for dataflow were a major topic in computer architecture research in the 1970s and early 1980s. Jack Dennis of MIT pioneered the field of static dataflow architectures while the Manchester Dataflow Machine and MIT Tagged Token architecture were major projects in dynamic dataflow. The research, however, never overcame the problems related to: Efficiently broadcasting data tokens in a massively parallel system. Efficiently dispatching instruction tokens in a massively parallel system.
Building content-addressable memory (CAM) " https://en.wikipedia.org/wiki/Pin%20compatibility,"In electronics, pin-compatible devices are electronic components, generally integrated circuits or expansion cards, sharing a common footprint and with the same functions assigned or usable on the same pins. Pin compatibility is a property desired by systems integrators as it allows a product to be updated without redesigning printed circuit boards, which can reduce costs and decrease time to market. Although devices which are pin-compatible share a common footprint, they are not necessarily electrically or thermally compatible. As a result, manufacturers often specify devices as being either pin-to-pin or drop-in compatible. Pin-compatible devices are generally produced to allow upgrading within a single product line, to allow end-of-life devices to be replaced with newer equivalents, or to compete with the equivalent products of other manufacturers. Pin-to-pin compatibility Pin-to-pin compatible devices share an assignment of functions to pins, but may have differing electrical characteristics (supply voltages, or oscillator frequencies) or thermal characteristics (TDPs, reflow curves, or temperature tolerances). As a result, their use in a system may require that portions of the system, such as its power delivery subsystem, be adapted to fit the new component. A common example of pin-to-pin compatible devices which may not be electrically compatible are the 7400 series integrated circuits. The 7400 series devices have been produced on a number of different manufacturing processes, but have retained the same pinouts throughout. For example, all 7405 devices provide six NOT gates (or inverters) but may have incompatible supply voltage tolerances. 7405 – Standard TTL, 4.75–5.25 V. 74C05 – CMOS, 4–15 V. 74LV05 – Low-voltage CMOS, 2.0–5.5 V. In other cases, particularly with computers, devices may be pin-to-pin compatible but made otherwise incompatible as a result of market segmentation. For example, Intel Skylake desktop-class Core and Xeon E3v5 processor" https://en.wikipedia.org/wiki/Thermal%20analysis,"Thermal analysis is a branch of materials science where the properties of materials are studied as they change with temperature. Several methods are commonly used – these are distinguished from one another by the property which is measured: Dielectric thermal analysis: dielectric permittivity and loss factor Differential thermal analysis: temperature difference versus temperature or time Differential scanning calorimetry: heat flow changes versus temperature or time Dilatometry: volume changes with temperature change Dynamic mechanical analysis: measures storage modulus (stiffness) and loss modulus (damping) versus temperature, time and frequency Evolved gas analysis: analysis of gases evolved during heating of a material, usually decomposition products Isothermal titration calorimetry Isothermal microcalorimetry Laser flash analysis: thermal diffusivity and thermal conductivity Thermogravimetric analysis: mass change versus temperature or time Thermomechanical analysis: dimensional changes versus temperature or time Thermo-optical analysis: optical properties Derivatography: A complex method in thermal analysis Simultaneous thermal analysis generally refers to the simultaneous application of thermogravimetry and differential scanning calorimetry to one and the same sample in a single instrument. 
The test conditions are perfectly identical for the thermogravimetric analysis and differential scanning calorimetry signals (same atmosphere, gas flow rate, vapor pressure of the sample, heating rate, thermal contact to the sample crucible and sensor, radiation effect, etc.). The information gathered can even be enhanced by coupling the simultaneous thermal analysis instrument to an Evolved Gas Analyzer like Fourier transform infrared spectroscopy or mass spectrometry. Other, less common, methods measure the sound or light emission from a sample, or the electrical discharge from a dielectric material, or the mechanical relaxation in a stressed specimen. T" https://en.wikipedia.org/wiki/Microwave%20Imaging%20Radiometer%20with%20Aperture%20Synthesis,"Microwave Imaging Radiometer with Aperture Synthesis (MIRAS) is the major instrument on the Soil Moisture and Ocean Salinity satellite (SMOS). MIRAS employs a planar antenna composed of a central body (the so-called hub) and three telescoping, deployable arms, in total 69 receivers on the Unit. Each receiver is composed of one Lightweight Cost-Effective Front-end (LICEF) module, which detects radiation in the microwave L-band, both in horizontal and vertical polarizations. The aperture on the LICEF detectors, planar in arrangement on MIRAS, point directly toward the Earth's surface as the satellite orbits. The arrangement and orientation of MIRAS makes the instrument a 2-D interferometric radiometer that generates brightness temperature images, from which both geophysical variables are computed. The salinity measurement requires demanding performance of the instrument in terms of calibration and stability. The MIRAS instrument's prime contractor was EADS CASA Espacio, manufacturing the payload of SMOS under ESA's contract. LICEF The LICEF detector is composed of a round patch antenna element, with 2 pairs of probes for orthogonal linear polarisations, feeding two receiver channels in a compact lightweight package behind the antenna. It picks up thermal radiation emitted by the Earth near 1.4 GHz in the microwave L-band, amplifies it 100 dB, and digitises it with 1-bit quantisation." https://en.wikipedia.org/wiki/Ethnocomputing,"Ethnocomputing is the study of the interactions between computing and culture. It is carried out through theoretical analysis, empirical investigation, and design implementation. It includes research on the impact of computing on society, as well as the reverse: how cultural, historical, personal, and societal origins and surroundings cause and affect the innovation, development, diffusion, maintenance, and appropriation of computational artifacts or ideas. From the ethnocomputing perspective, no computational technology is culturally ""neutral,"" and no cultural practice is a computational void. Instead of considering culture to be a hindrance for software engineering, culture should be seen as a resource for innovation and design. Subject matter Social categories for ethnocomputing include: Indigenous computing: In some cases, ethnocomputing ""translates"" from indigenous culture to high tech frameworks: for example, analyzing the African board game Owari as a one-dimensional cellular automaton. Social/historical studies of computing: In other cases ethnocomputing seeks to identify the social, cultural, historical, or personal dimensions of high tech computational ideas and artifacts: for example, the relationship between the Turing Test and Alan Turing's closeted gay identity. 
Appropriation in computing: lay persons who did not participate in the original design of a computing system can still affect it by modifying its interpretation, use, or structure. Such ""modding"" may be as subtle as the key board character ""emoticons"" created through lay use of email, or as blatant as the stylized customization of computer cases. Equity tools: a software ""Applications Quest"" has been developed for generating a ""diversity index"" that allows consideration of multiple identity characteristics in college admissions. Technical categories in ethnocomputing include: Organized structures and models used to represent information (data structures) Ways of manipulating the organiz" https://en.wikipedia.org/wiki/Vector%20notation,"In mathematics and physics, vector notation is a commonly used notation for representing vectors, which may be Euclidean vectors, or more generally, members of a vector space. For representing a vector, the common typographic convention is lower case, upright boldface type, as in . The International Organization for Standardization (ISO) recommends either bold italic serif, as in , or non-bold italic serif accented by a right arrow, as in . In advanced mathematics, vectors are often represented in a simple italic type, like any variable. History In 1835 Giusto Bellavitis introduced the idea of equipollent directed line segments which resulted in the concept of a vector as an equivalence class of such segments. The term vector was coined by W. R. Hamilton around 1843, as he revealed quaternions, a system which uses vectors and scalars to span a four-dimensional space. For a quaternion q = a + bi + cj + dk, Hamilton used two projections: S q = a, for the scalar part of q, and V q = bi + cj + dk, the vector part. Using the modern terms cross product (×) and dot product (.), the quaternion product of two vectors p and q can be written pq = –p.q + p×q. In 1878, W. K. Clifford severed the two products to make the quaternion operation useful for students in his textbook Elements of Dynamic. Lecturing at Yale University, Josiah Willard Gibbs supplied notation for the scalar product and vector products, which was introduced in Vector Analysis. In 1891, Oliver Heaviside argued for Clarendon to distinguish vectors from scalars. He criticized the use of Greek letters by Tait and Gothic letters by Maxwell. In 1912, J.B. Shaw contributed his ""Comparative Notation for Vector Expressions"" to the Bulletin of the Quaternion Society. Subsequently, Alexander Macfarlane described 15 criteria for clear expression with vectors in the same publication. Vector ideas were advanced by Hermann Grassmann in 1841, and again in 1862 in the German language. But German mathematicians wer" https://en.wikipedia.org/wiki/Physical%20object,"In common usage and classical mechanics, a physical object or physical body (or simply an object or body) is a collection of matter within a defined contiguous boundary in three-dimensional space. The boundary surface must be defined and identified by the properties of the material, although it may change over time. The boundary is usually the visible or tangible surface of the object. The matter in the object is constrained (to a greater or lesser degree) to move as one object. The boundary may move in space relative to other objects that it is not attached to (through translation and rotation). An object's boundary may also deform and change over time in other ways. 
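Returning to the Vector notation excerpt above, which quotes Hamilton's relation pq = −p.q + p×q for the product of two vectors: a short check that quaternion multiplication by the i, j, k rules reproduces the scalar part S and vector part V named there, for pure (vector) quaternions. The helper functions are ad hoc, written only for this check.

```python
# Check of the identity quoted in the Vector notation excerpt above
# (pq = -p.q + p x q for pure quaternions): multiply two pure quaternions with
# Hamilton's i, j, k rules and compare against (-dot, cross).
def quat_mul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

p, q = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0)
lhs = quat_mul((0.0, *p), (0.0, *q))   # pure quaternions: zero scalar part
rhs = (-dot(p, q), *cross(p, q))       # Hamilton's S and V parts
print(lhs, rhs, lhs == rhs)
```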
Also in common usage, an object is not constrained to consist of the same collection of matter. Atoms or parts of an object may change over time. An object is usually meant to be defined by the simplest representation of the boundary consistent with the observations. However, the laws of physics only apply directly to objects that consist of the same collection of matter. In physics, an object is an identifiable collection of matter, which may be constrained by an identifiable boundary, and may move as a unit by translation or rotation, in 3-dimensional space. Each object has a unique identity, independent of any other properties. Two objects may be identical, in all properties except position, but still remain distinguishable. In most cases the boundaries of two objects may not overlap at any point in time. The property of identity allows objects to be counted. Examples of models of physical bodies include, but are not limited to, a particle, several interacting smaller bodies (particulate or otherwise), and continuous media. The common conception of physical objects includes that they have extension in the physical world, although there do exist theories of quantum physics and cosmology which arguably challenge this. In modern physics, ""extension"" is understood in terms of the spacetime: roughly s" https://en.wikipedia.org/wiki/Monoclonality,"In biology, monoclonality refers to the state of a line of cells that have been derived from a single clonal origin. Thus, ""monoclonal cells"" can be said to form a single clone. The term monoclonal comes . The process of replication can occur in vivo, or may be stimulated in vitro for laboratory manipulations. The use of the term typically implies that there is some method to distinguish between the cells of the original population from which the single ancestral cell is derived, such as a random genetic alteration, which is inherited by the progeny. Common usages of this term include: Monoclonal antibody: a single hybridoma cell, which by chance includes the appropriate V(D)J recombination to produce the desired antibody, is cloned to produce a large population of identical cells. In informal laboratory jargon, the monoclonal antibodies isolated from cell culture supernatants of these hybridoma clones (hybridoma lines) are simply called monoclonals. Monoclonal neoplasm (tumor): A single aberrant cell which has undergone carcinogenesis reproduces itself into a cancerous mass. Monoclonal plasma cell (also called plasma cell dyscrasia): A single aberrant plasma cell which has undergone carcinogenesis reproduces itself, which in some cases is cancerous." https://en.wikipedia.org/wiki/Vector%20potential,"In vector calculus, a vector potential is a vector field whose curl is a given vector field. This is analogous to a scalar potential, which is a scalar field whose gradient is a given vector field. Formally, given a vector field v, a vector potential is a vector field A such that ∇ × A = v. Consequence If a vector field v admits a vector potential A, then from the equality ∇ · (∇ × A) = 0 (divergence of the curl is zero) one obtains ∇ · v = 0, which implies that v must be a solenoidal vector field. Theorem Let v be a solenoidal vector field which is twice continuously differentiable. Assume that v decreases sufficiently fast as ‖y‖ → ∞. Define A(x) = (1/(4π)) ∫ (∇_y × v(y)) / ‖x − y‖ d³y, with the integral taken over all of space. Then, A is a vector potential for v, that is, ∇ × A = v. Here, ∇_y × is the curl taken with respect to the variable y. Substituting curl[v] for the current density j of the retarded potential yields this formula; in other words, v corresponds to the H-field.
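A small symbolic check of the consequence stated above, that a field admitting a vector potential is solenoidal: take an arbitrary smooth A, form v = curl A, and confirm that div v vanishes. The example field is arbitrary; sympy's vector module supplies curl and divergence.

```python
# Symbolic check of the consequence stated above: if v admits a vector potential
# A (v = curl A), then div v = 0, i.e. v is solenoidal. The particular A used
# here is an arbitrary example, not taken from the article.
from sympy import sin, exp
from sympy.vector import CoordSys3D, curl, divergence

N = CoordSys3D("N")
x, y, z = N.x, N.y, N.z

A = (x**2 * y) * N.i + sin(y * z) * N.j + exp(x) * z * N.k   # arbitrary smooth field
v = curl(A)

print("v = curl A =", v)
print("div v =", divergence(v).simplify())   # identically 0
```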
You can restrict the integral domain to any simply connected region Ω. That is, A' below is also a vector potential of v; A generalization of this theorem is the Helmholtz decomposition, which states that any vector field can be decomposed as a sum of a solenoidal vector field and an irrotational vector field. By analogy with Biot-Savart's law, the following also qualifies as a vector potential for v. Substituting j (the current density) for v and H (the H-field) for A recovers the Biot-Savart law. Let and let Ω be a star domain centered on the point p; then, translating Poincaré's lemma for differential forms into the language of vector fields, the following is also a vector potential for the same field. Nonuniqueness The vector potential admitted by a solenoidal field is not unique. If is a vector potential for , then so is where is any continuously differentiable scalar function. This follows from the fact that the curl of the gradient is zero. This nonuniqueness leads to a degree of freedom in the formulation of electrodynamics, or gauge freedom, and requires choosing a gauge. See also Fundamental theorem of vector calculus Magnetic vector potentia" https://en.wikipedia.org/wiki/Network-Integrated%20Multimedia%20Middleware,"The Network-Integrated Multimedia Middleware (NMM) is a flow graph based multimedia framework. NMM allows creating distributed multimedia applications: local and remote multimedia devices or software components can be controlled transparently and integrated into a common multimedia processing flow graph. NMM is implemented in C++, a programming language, and NMM-IDL, an interface description language (IDL). NMM is a set of cross-platform libraries and applications for the operating systems Linux, OS X, Windows, and others. A software development kit (SDK) is also provided. NMM is released under dual-licensing. The Linux, OS X, and PS3 versions are distributed for free as open-source software under the terms and conditions of the GNU General Public License (GPL). The Windows version is distributed for free as a binary version under the terms and conditions of the NMM Non-Commercial License (NMM-NCL). All NMM versions (i.e., for all supported operating systems) are also distributed under a commercial license with full warranty, which allows developing closed-source proprietary software atop NMM. See also Java Media Framework DirectShow QuickTime Helix DNA MPlayer VLC media player (VLC) Video wall Sources Linux gains open source multimedia middleware KDE to gain cutting-edge multimedia technology Multimedia barriers drop at CeBIT in March A Survey of Software Infrastructures and Frameworks for Ubiquitous Computing External links NMM homepage Computer networking" https://en.wikipedia.org/wiki/Optical%20properties%20of%20water%20and%20ice,"The refractive index of water at 20 °C for visible light is 1.33. The refractive index of normal ice is 1.31 (from List of refractive indices). In general, an index of refraction is a complex number with real and imaginary parts, where the latter indicates the strength of absorption loss at a particular wavelength. In the visible part of the electromagnetic spectrum, the imaginary part of the refractive index is very small. However, water and ice absorb in infrared and close the infrared atmospheric window thereby contributing to the greenhouse effect ... 
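Two facts used in the vector potential discussion above, that any field with a vector potential is solenoidal (the divergence of a curl vanishes) and that adding the gradient of a scalar function leaves the curl unchanged (gauge freedom), can be verified symbolically. A minimal sketch with SymPy; the particular fields A and phi below are arbitrary choices made for illustration, not taken from the text.

import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# An arbitrary smooth vector potential A and scalar function phi (illustrative choices)
A = x**2 * y * N.i + sp.sin(y) * z * N.j + x * z**3 * N.k
phi = sp.exp(x) * y + z**2

v = curl(A)

# 1) The divergence of a curl vanishes, so v is solenoidal.
assert sp.simplify(divergence(v)) == 0

# 2) Gauge freedom: A + grad(phi) has the same curl, so it is another vector potential for v.
residual = curl(A + gradient(phi)) - v
assert all(sp.simplify(residual.dot(e)) == 0 for e in (N.i, N.j, N.k))

print(v)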
The absorption spectrum of pure water is used in numerous applications, including light scattering and absorption by ice crystals and cloud water droplets, theories of the rainbow, determination of the single-scattering albedo, ocean color, and many others. Quantitative description of the refraction index Over the wavelengths from 0.2 μm to 1.2 μm, and over temperatures from −12 °C to 500 °C, the real part of the index of refraction of water can be calculated by the following empirical expression: Where: , , and and the appropriate constants are = 0.244257733, = 0.00974634476, = −0.00373234996, = 0.000268678472, = 0.0015892057, = 0.00245934259, = 0.90070492, = −0.0166626219, = 273.15 K, = 1000 kg/m3, = 589 nm, = 5.432937, and = 0.229202. In the above expression, T is the absolute temperature of water (in K), is the wavelength of light in nm, is the density of the water in kg/m3, and n is the real part of the index of refraction of water. Volumic mass of water In the above formula, the density of water also varies with temperature and is defined by: with: = −3.983035 °C = 301.797 °C = 522528.9 °C2 = 69.34881 °C = 999.974950 kg / m3 Refractive index (real and imaginary parts) for liquid water The total refractive index of water is given as m = n + ik. The absorption coefficient α' is used in the Beer–Lambert law with the prime here signifying base e convention. Values are for water " https://en.wikipedia.org/wiki/Analogue%20electronics,"Analogue electronics () are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels. The term ""analogue"" describes the proportional relationship between a signal and a voltage or current that represents the signal. The word analogue is derived from the Greek word meaning ""proportional"". Analogue signals An analogue signal uses some attribute of the medium to convey the signal's information. For example, an aneroid barometer uses the angular position of a needle on top of a contracting and expanding box as the signal to convey the information of changes in atmospheric pressure. Electrical signals may represent information by changing their voltage, current, frequency, or total charge. Information is converted from some other physical form (such as sound, light, temperature, pressure, position) to an electrical signal by a transducer which converts one type of energy into another (e.g. a microphone). The signals take any value from a given range, and each unique signal value represents different information. Any change in the signal is meaningful, and each level of the signal represents a different level of the phenomenon that it represents. For example, suppose the signal is being used to represent temperature, with one volt representing one degree Celsius. In such a system, 10 volts would represent 10 degrees, and 10.1 volts would represent 10.1 degrees. Another method of conveying an analogue signal is to use modulation. In this, some base carrier signal has one of its properties altered: amplitude modulation (AM) involves altering the amplitude of a sinusoidal voltage waveform by the source information, frequency modulation (FM) changes the frequency. Other techniques, such as phase modulation or changing the phase of the carrier signal, are also used. 
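The empirical expression for the real refractive index of water referred to above is not reproduced legibly, but the constants listed match the standard IAPWS-style formulation, in which the Lorentz-Lorenz function (n^2 - 1)/(n^2 + 2), divided by the reduced density, is written in terms of reduced temperature, density, and wavelength. The sketch below assumes that formulation; the functional form is an assumption based on that standard, not spelled out in the text, so treat it as illustrative.

import math

# Constants as listed in the text; the functional form below is an assumed
# IAPWS-style arrangement, not quoted from the text.
A0, A1, A2, A3 = 0.244257733, 0.00974634476, -0.00373234996, 0.000268678472
A4, A5, A6, A7 = 0.0015892057, 0.00245934259, 0.90070492, -0.0166626219
T_STAR = 273.15      # K
RHO_STAR = 1000.0    # kg/m^3
LAM_STAR = 589.0     # nm
LAM_UV, LAM_IR = 0.229202, 5.432937  # reduced UV and IR resonance wavelengths

def refractive_index(T, rho, lam_nm):
    # Real refractive index from temperature (K), density (kg/m^3) and wavelength (nm)
    Tb, rb, lb = T / T_STAR, rho / RHO_STAR, lam_nm / LAM_STAR
    rhs = (A0 + A1 * rb + A2 * Tb + A3 * lb**2 * Tb + A4 / lb**2
           + A5 / (lb**2 - LAM_UV**2) + A6 / (lb**2 - LAM_IR**2) + A7 * rb**2)
    f = rb * rhs                      # Lorentz-Lorenz function (n^2 - 1)/(n^2 + 2)
    return math.sqrt((1 + 2 * f) / (1 - f))

print(refractive_index(293.15, 998.2, 589.0))   # roughly 1.333 at 20 degrees C, 589 nm

Under these assumptions the sketch returns a value close to the 1.33 quoted above for visible light.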
In an analogue sound recording, the variation in pressure of a sound striking a microphone creates a corresponding variation in t" https://en.wikipedia.org/wiki/List%20of%20partial%20differential%20equation%20topics,"This is a list of partial differential equation topics. General topics Partial differential equation Nonlinear partial differential equation list of nonlinear partial differential equations Boundary condition Boundary value problem Dirichlet problem, Dirichlet boundary condition Neumann boundary condition Stefan problem Wiener–Hopf problem Separation of variables Green's function Elliptic partial differential equation Singular perturbation Cauchy–Kovalevskaya theorem H-principle Atiyah–Singer index theorem Bäcklund transform Viscosity solution Weak solution Loewy decomposition of linear differential equations Specific partial differential equations Broer–Kaup equations Burgers' equation Euler equations Fokker–Planck equation Hamilton–Jacobi equation, Hamilton–Jacobi–Bellman equation Heat equation Laplace's equation Laplace operator Harmonic function Spherical harmonic Poisson integral formula Klein–Gordon equation Korteweg–de Vries equation Modified KdV–Burgers equation Maxwell's equations Navier–Stokes equations Poisson's equation Primitive equations (hydrodynamics) Schrödinger equation Wave equation Numerical methods for PDEs Finite difference Finite element method Finite volume method Boundary element method Multigrid Spectral method Computational fluid dynamics Alternating direction implicit Related areas of mathematics Calculus of variations Harmonic analysis Ordinary differential equation Sobolev space Partial differential equations" https://en.wikipedia.org/wiki/Address%20space,"In computing, an address space defines a range of discrete addresses, each of which may correspond to a network host, peripheral device, disk sector, a memory cell or other logical or physical entity. For software programs to save and retrieve stored data, each datum must have an address where it can be located. The number of address spaces available depends on the underlying address structure, which is usually limited by the computer architecture being used. Often an address space in a system with virtual memory corresponds to a highest level translation table, e.g., a segment table in IBM System/370. Address spaces are created by combining enough uniquely identified qualifiers to make an address unambiguous within the address space. For a person's physical address, the address space would be a combination of locations, such as a neighborhood, town, city, or country. Some elements of a data address space may be the same, but if any element in the address is different, addresses in said space will reference different entities. For example, there could be multiple buildings at the same address of ""32 Main Street"" but in different towns, demonstrating that different towns have different, although similarly arranged, street address spaces. An address space usually provides (or allows) a partitioning to several regions according to the mathematical structure it has. In the case of total order, as for memory addresses, these are simply chunks. Like the hierarchical design of postal addresses, some nested domain hierarchies appear as a directed ordered tree, such as with the Domain Name System or a directory structure. 
In the Internet, the Internet Assigned Numbers Authority (IANA) allocates ranges of IP addresses to various registries so each can manage their parts of the global Internet address space. Examples Uses of addresses include, but are not limited to the following: Memory addresses for main memory, memory-mapped I/O, as well as for virtual memory; Device " https://en.wikipedia.org/wiki/Bilinear%20time%E2%80%93frequency%20distribution,"Bilinear time–frequency distributions, or quadratic time–frequency distributions, arise in a sub-field of signal analysis and signal processing called time–frequency signal processing, and, in the statistical analysis of time series data. Such methods are used where one needs to deal with a situation where the frequency composition of a signal may be changing over time; this sub-field used to be called time–frequency signal analysis, and is now more often called time–frequency signal processing due to the progress in using these methods to a wide range of signal-processing problems. Background Methods for analysing time series, in both signal analysis and time series analysis, have been developed as essentially separate methodologies applicable to, and based in, either the time or the frequency domain. A mixed approach is required in time–frequency analysis techniques which are especially effective in analyzing non-stationary signals, whose frequency distribution and magnitude vary with time. Examples of these are acoustic signals. Classes of ""quadratic time-frequency distributions"" (or bilinear time–frequency distributions"") are used for time–frequency signal analysis. This class is similar in formulation to Cohen's class distribution function that was used in 1966 in the context of quantum mechanics. This distribution function is mathematically similar to a generalized time–frequency representation which utilizes bilinear transformations. Compared with other time–frequency analysis techniques, such as short-time Fourier transform (STFT), the bilinear-transformation (or quadratic time–frequency distributions) may not have higher clarity for most practical signals, but it provides an alternative framework to investigate new definitions and new methods. While it does suffer from an inherent cross-term contamination when analyzing multi-component signals, by using a carefully chosen window function(s), the interference can be significantly mitigated, at the expens" https://en.wikipedia.org/wiki/Shearlet,"In applied mathematical analysis, shearlets are a multiscale framework which allows efficient encoding of anisotropic features in multivariate problem classes. Originally, shearlets were introduced in 2006 for the analysis and sparse approximation of functions . They are a natural extension of wavelets, to accommodate the fact that multivariate functions are typically governed by anisotropic features such as edges in images, since wavelets, as isotropic objects, are not capable of capturing such phenomena. Shearlets are constructed by parabolic scaling, shearing, and translation applied to a few generating functions. At fine scales, they are essentially supported within skinny and directional ridges following the parabolic scaling law, which reads length² ≈ width. Similar to wavelets, shearlets arise from the affine group and allow a unified treatment of the continuum and digital situation leading to faithful implementations. Although they do not constitute an orthonormal basis for , they still form a frame allowing stable expansions of arbitrary functions . 
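The time-frequency passage above contrasts bilinear distributions with the short-time Fourier transform (STFT). The sketch below shows only the linear STFT baseline applied to a non-stationary test signal (a linear chirp), illustrating the kind of time-varying frequency content these methods are meant to resolve; it is not a bilinear distribution, and the sample rate and chirp parameters are arbitrary.

import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                     # sample rate in Hz (arbitrary)
t = np.arange(0, 2.0, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 50 * t**2))    # linear chirp: 50 Hz rising to about 250 Hz

# Short-time Fourier transform magnitude (spectrogram)
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)

# The dominant frequency of each time slice should rise roughly linearly for a chirp.
peak_freqs = f[np.argmax(Sxx, axis=0)]
print(peak_freqs[:5])
print(peak_freqs[-5:])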
One of the most important properties of shearlets is their ability to provide optimally sparse approximations (in the sense of optimality in ) for cartoon-like functions . In imaging sciences, cartoon-like functions serve as a model for anisotropic features and are compactly supported in while being apart from a closed piecewise singularity curve with bounded curvature. The decay rate of the -error of the -term shearlet approximation obtained by taking the largest coefficients from the shearlet expansion is in fact optimal up to a log-factor: where the constant depends only on the maximum curvature of the singularity curve and the maximum magnitudes of , and . This approximation rate significantly improves the best -term approximation rate of wavelets, which provide only for such a class of functions. Shearlets are to date the only directional representation system that provides sparse approximation of ani" https://en.wikipedia.org/wiki/RIPAC%20%28microprocessor%29,"RIPAC was a VLSI single-chip microprocessor designed for the automatic recognition of connected speech, one of the first chips designed for this purpose. The RIPAC project started in 1984. RIPAC was intended to provide efficient real-time speech recognition services for the Italian telephone system operated by SIP. The microprocessor was presented in September 1986 at The Hague (Netherlands) at the EUSIPCO conference. It was composed of 70,000 transistors and organized as a Harvard architecture. The name RIPAC is an acronym for ""Riconoscimento del PArlato Connesso"", which means ""recognition of connected speech"" in Italian. The microprocessor was designed by the Italian companies CSELT and ELSAG and was produced by SGS: a combination of Hidden Markov Model and Dynamic Time Warping algorithms was used for processing speech signals. It was able to perform real-time speech recognition of Italian and other languages with good reliability. The chip, covered by U.S. Patent No. 4,907,278, worked on its first run." https://en.wikipedia.org/wiki/Biological%20pacemaker,"A biological pacemaker is one or more types of cellular components that, when ""implanted or injected into certain regions of the heart,"" produce specific electrical stimuli that mimic that of the body's natural pacemaker cells. Biological pacemakers are indicated for issues such as heart block, slow heart rate, and asynchronous heart ventricle contractions. The biological pacemaker is intended as an alternative to the artificial cardiac pacemaker that has been in human use since the late 1950s. Despite their success, several limitations and problems with artificial pacemakers have emerged during the past decades, such as electrode fracture or damage to insulation, infection, re-operations for battery exchange, and venous thrombosis. The need for an alternative is most obvious in children, including premature newborn babies, where size mismatch and the fact that pacemaker leads do not grow with children are a problem. A more biological approach has been taken in order to mitigate many of these issues. However, the implanted biological pacemaker cells still typically need to be supplemented with an artificial pacemaker while the cells form the necessary electrical connections with cardiac tissue. History The first successful experiment with biological pacemakers was carried out by Arjang Ruhparwar's group at Hannover Medical School in Germany using transplanted fetal heart muscle cells. 
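The RIPAC description above names dynamic time warping (DTW) as one of the two techniques combined for speech processing. A generic, minimal DTW distance between two one-dimensional feature sequences is sketched below; it is the textbook recurrence, not the chip's implementation, and the toy sequences are arbitrary.

import numpy as np

def dtw_distance(a, b):
    # Classic dynamic-time-warping distance between 1-D sequences a and b
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of: insertion, deletion, match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

template = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
utterance = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.0])  # same shape, stretched in time

print(dtw_distance(template, utterance))               # small: the warped shapes match
print(dtw_distance(template, np.linspace(0, 3, 9)))    # larger: a plain ramp does not match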
The process was first introduced at the scientific sessions of the American Heart Association in Anaheim in 2001, and the results were published in 2002. A few months later, Eduardo Marban's group from Johns Hopkins University published the first successful gene-therapeutic approach towards the generation of pacemaking activity in otherwise non-pacemaking adult cardiomyocytes using a guinea pig model. The investigators postulated latent pacemaker capability in normal heart muscle cells. This potential ability is suppressed by the inward-rectifier potassium current Ik1 encoded by the" https://en.wikipedia.org/wiki/Alberto%20Diaspro,"Alberto Diaspro (born April 7, 1959, in Genoa, Italy) is an Italian scientist. He received his doctoral degree in electronic engineering from the university of Genoa, Italy, in 1983. He is full professor in applied physics at university of Genoa. He is research director of Nanoscopy Italian Institute of Technology. Alberto Diaspro is President of the Italian biophysical society SIBPA. In 2022 he got the Gregorio Weber Award for excellence in fluorescence." https://en.wikipedia.org/wiki/Electronic%20speed%20control,"An electronic speed control (ESC) is an electronic circuit that controls and regulates the speed of an electric motor. It may also provide reversing of the motor and dynamic braking. Miniature electronic speed controls are used in electrically powered radio controlled models. Full-size electric vehicles also have systems to control the speed of their drive motors. Function An electronic speed control follows a speed reference signal (derived from a throttle lever, joystick, or other manual input) and varies the switching rate of a network of field effect transistors (FETs). By adjusting the duty cycle or switching frequency of the transistors, the speed of the motor is changed. The rapid switching of the current flowing through the motor is what causes the motor itself to emit its characteristic high-pitched whine, especially noticeable at lower speeds. Different types of speed controls are required for brushed DC motors and brushless DC motors. A brushed motor can have its speed controlled by varying the voltage on its armature. (Industrially, motors with electromagnet field windings instead of permanent magnets can also have their speed controlled by adjusting the strength of the motor field current.) A brushless motor requires a different operating principle. The speed of the motor is varied by adjusting the timing of pulses of current delivered to the several windings of the motor. Brushless ESC systems basically create three-phase AC power, like a variable frequency drive, to run brushless motors. Brushless motors are popular with radio controlled airplane hobbyists because of their efficiency, power, longevity and light weight in comparison to traditional brushed motors. Brushless DC motor controllers are much more complicated than brushed motor controllers. The correct phase of the current fed to the motor varies with the motor rotation, which is to be taken into account by the ESC: Usually, back EMF from the motor windings is used to detect this" https://en.wikipedia.org/wiki/Pathological%20%28mathematics%29,"In mathematics, when a mathematical phenomenon runs counter to some intuition, then the phenomenon is sometimes called pathological. On the other hand, if a phenomenon does not run counter to intuition, it is sometimes called well-behaved. 
These terms are sometimes useful in mathematical research and teaching, but there is no strict mathematical definition of pathological or well-behaved. In analysis A classic example of a pathology is the Weierstrass function, a function that is continuous everywhere but differentiable nowhere. The sum of a differentiable function and the Weierstrass function is again continuous but nowhere differentiable; so there are at least as many such functions as differentiable functions. In fact, using the Baire category theorem, one can show that continuous functions are generically nowhere differentiable. Such examples were deemed pathological when they were first discovered: To quote Henri Poincaré: Since Poincaré, nowhere differentiable functions have been shown to appear in basic physical and biological processes such as Brownian motion and in applications such as the Black-Scholes model in finance. Counterexamples in Analysis is a whole book of such counterexamples. In topology One famous counterexample in topology is the Alexander horned sphere, showing that topologically embedding the sphere S2 in R3 may fail to separate the space cleanly. As a counterexample, it motivated mathematicians to define the tameness property, which suppresses the kind of wild behavior exhibited by the horned sphere, wild knot, and other similar examples. Like many other pathologies, the horned sphere in a sense plays on infinitely fine, recursively generated structure, which in the limit violates ordinary intuition. In this case, the topology of an ever-descending chain of interlocking loops of continuous pieces of the sphere in the limit fully reflects that of the common sphere, and one would expect the outside of it, after an embedding, to work " https://en.wikipedia.org/wiki/Kappa%20organism,"In biology, Kappa organism or Kappa particle refers to inheritable cytoplasmic symbionts, occurring in some strains of the ciliate Paramecium. Paramecium strains possessing the particles are known as ""killer paramecia"". They liberate a substance also known as paramecin into the culture medium that is lethal to Paramecium that do not contain kappa particles. Kappa particles are found in genotypes of Paramecium aurelia syngen 2 that carry the dominant gene K. Kappa particles are Feulgen-positive and stain with Giemsa after acid hydrolysis. The length of the particles is 0.2–0.5μ. While there was initial confusion over the status of kappa particles as viruses, bacteria, organelles, or mere nucleoprotein, the particles are intracellular bacterial symbionts called Caedibacter taeniospiralis. Caedibacter taeniospiralis contains cytoplasmic protein inclusions called R bodies which act as a toxin delivery system." https://en.wikipedia.org/wiki/Embedded%20C,"Embedded C is a set of language extensions for the C programming language by the C Standards Committee to address commonality issues that exist between C extensions for different embedded systems. Embedded C programming typically requires nonstandard extensions to the C language in order to support enhanced microprocessor features such as fixed-point arithmetic, multiple distinct memory banks, and basic I/O operations. The C Standards Committee produced a Technical Report, most recently revised in 2008 and reviewed in 2013, providing a common standard for all implementations to adhere to. It includes a number of features not available in normal C, such as fixed-point arithmetic, named address spaces and basic I/O hardware addressing. 
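The Weierstrass function mentioned above has a concrete classical form, W(x) = sum over n of a^n cos(b^n pi x) with 0 < a < 1 and ab large enough (Weierstrass took b an odd integer with ab > 1 + 3pi/2). A partial-sum sketch follows; truncating at finitely many terms only approximates the nowhere-differentiable limit, and the parameters a = 0.5, b = 7 are one standard choice, not taken from the text.

import numpy as np

def weierstrass(x, a=0.5, b=7, n_terms=60):
    # Partial sum of W(x) = sum_n a**n * cos(b**n * pi * x)
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for n in range(n_terms):
        total += a**n * np.cos(b**n * np.pi * x)
    return total

# The function is continuous, but difference quotients at shrinking step sizes
# fluctuate wildly instead of approaching a limit, a numerical hint that no
# derivative exists at the point.
for h in (1e-2, 1e-4, 1e-6):
    print(h, float((weierstrass(0.1 + h) - weierstrass(0.1)) / h))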
Embedded C uses most of the syntax and semantics of standard C, e.g., main() function, variable definition, datatype declaration, conditional statements (if, switch case), loops (while, for), functions, arrays and strings, structures and union, bit operations, macros, etc." https://en.wikipedia.org/wiki/Pulse-width%20modulation,"Pulse-width modulation (PWM), also known as pulse-duration modulation (PDM) or pulse-length modulation (PLM), is a method of controlling the average power or amplitude delivered by an electrical signal. The average value of voltage (and current) fed to the load is controlled by switching the supply between 0 and 100% at a rate faster than it takes the load to change significantly. The longer the switch is on, the higher the total power supplied to the load. Along with maximum power point tracking (MPPT), it is one of the primary methods of controlling the output of solar panels to that which can be utilized by a battery. PWM is particularly suited for running inertial loads such as motors, which are not as easily affected by this discrete switching. The goal of PWM is to control a load; however, the PWM switching frequency must be selected carefully in order to smoothly do so. The PWM switching frequency can vary greatly depending on load and application. For example, switching only has to be done several times a minute in an electric stove; 100 or 120 Hz (double of the utility frequency) in a lamp dimmer; between a few kilohertz (kHz) and tens of kHz for a motor drive; and well into the tens or hundreds of kHz in audio amplifiers and computer power supplies. Choosing a switching frequency that is too high for the application results in smooth control of the load, but may cause premature failure of the mechanical control components. Selecting a switching frequency that is too low for the application causes oscillations in the load. The main advantage of PWM is that power loss in the switching devices is very low. When a switch is off there is practically no current, and when it is on and power is being transferred to the load, there is almost no voltage drop across the switch. Power loss, being the product of voltage and current, is thus in both cases close to zero. PWM also works well with digital controls, which, because of their on/off nature, can easily set the" https://en.wikipedia.org/wiki/Cospeciation,"Cospeciation is a form of coevolution in which the speciation of one species dictates speciation of another species and is most commonly studied in host-parasite relationships. In the case of a host-parasite relationship, if two hosts of the same species get within close proximity of each other, parasites of the same species from each host are able to move between individuals and mate with the parasites on the other host. However, if a speciation event occurs in the host species, the parasites will no longer be able to ""cross over"" because the two new host species no longer mate and, if the speciation event is due to a geographic separation, it is very unlikely the two hosts will interact at all with each other. The lack of proximity between the hosts ultimately prevents the populations of parasites from interacting and mating. This can ultimately lead to speciation within the parasite. According to Fahrenholz's rule, first proposed by Heinrich Fahrenholz in 1913, when host-parasite cospeciation has occurred, the phylogenies of the host and parasite come to mirror each other. 
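The relationship described above, in which the average voltage (and power) delivered to the load is set by the fraction of each switching period the switch spends on, can be illustrated with a simulated PWM waveform. The supply voltage, switching frequency, and duty cycle below are arbitrary illustration values.

import numpy as np

V_SUPPLY = 12.0       # volts (arbitrary)
F_SWITCH = 20_000.0   # 20 kHz switching frequency, in the motor-drive range mentioned above
DUTY = 0.35           # the switch is on for 35% of each period

fs = 10_000_000.0                      # simulation sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)         # 10 ms of signal
phase = (t * F_SWITCH) % 1.0           # position within the switching period, 0..1
v = np.where(phase < DUTY, V_SUPPLY, 0.0)

print(v.mean())        # about DUTY * V_SUPPLY = 4.2 V average
print((v**2).mean())   # average power into a 1-ohm load, about DUTY * V_SUPPLY**2 = 50.4 W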
In host-parasite phylogenies, and all species phylogenies for that matter, perfect mirroring is rare. Host-parasite phylogenies can be altered by host switching, extinction, independent speciation, and other ecological events, making cospeciation harder to detect. However, cospeciation is not limited to parasitism, but has been documented in symbiotic relationships like those of gut microbes in primates. Fahrenholz's rule In 1913, Heinrich Fahrenholz proposed that the phylogenies of both the host and parasite will eventually become congruent, or mirror each other when cospeciation occurs. More specifically, more closely related parasite species will be found on closely related species of host. Thus, to determine if cospeciation has occurred within a host-parasite relationship, scientists have used comparative analyses on the host and parasite phylogenies. In 1968, Daniel Janzen proposed an " https://en.wikipedia.org/wiki/Super-resolution%20imaging,"Super-resolution imaging (SR) is a class of techniques that enhance (increase) the resolution of an imaging system. In optical SR the diffraction limit of systems is transcended, while in geometrical SR the resolution of digital imaging sensors is enhanced. In some radar and sonar imaging applications (e.g. magnetic resonance imaging (MRI), high-resolution computed tomography), subspace decomposition-based methods (e.g. MUSIC) and compressed sensing-based algorithms (e.g., SAMV) are employed to achieve SR over standard periodogram algorithm. Super-resolution imaging techniques are used in general image processing and in super-resolution microscopy. Basic concepts Because some of the ideas surrounding super-resolution raise fundamental issues, there is need at the outset to examine the relevant physical and information-theoretical principles: Diffraction limit: The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by laws of physics, whether formulated by the diffraction equations in the wave theory of light or equivalently the uncertainty principle for photons in quantum mechanics. Information transfer can never be increased beyond this boundary, but packets outside the limits can be cleverly swapped for (or multiplexed with) some inside it. One does not so much “break” as “run around” the diffraction limit. New procedures probing electro-magnetic disturbances at the molecular level (in the so-called near field) remain fully consistent with Maxwell's equations. Spatial-frequency domain: A succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial-frequency, beyond which pattern elements fail to be transferred in" https://en.wikipedia.org/wiki/Packard%20Bell%20Statesman,"The Packard Bell Statesman was an economy line of notebook computers introduced in 1993 by Packard Bell. They were slower in performance and lacked features compared to most competitor products, but they were lower in price. It was created in a collaboration between Packard Bell and Zenith Data Systems. The Statesman series was essentially a rebrand of Zenith Data Systems Z-Star 433 series, with the only notable difference of the logo in the middle and text on the front bezel. 
History In June 1993 Zenith Data Systems announced an alliance with Packard Bell. Zenith acquired about 20% of Packard Bell and they would both now work together to design and build PC's. Zenith would also provide Packard Bell with private-label versions of their portable PC's. The Packard Bell Statesman was a rebrand of the Zenith Z-Star notebook computer series. While the Statesman was being advertised by Packard Bell, the Z-Star series was also still being sold by Zenith. The Statesman was first introduced on October 4, 1993. Prices started at $1,500 for a monochrome or color DSTN model with a 33 MHz Cyrix Cx486SLC, 4 MB of RAM, 200 MB hard disk drive, internal 1.44 MB floppy disk drive, and MS-DOS 6.0 with Windows 3.1 for the included software. A ""J mouse"" pointing device was included, similar to the TrackPoint. The Statesman was expected to begin shipping within the next few weeks. Specifications Hardware CPU The first two models, the 200M and 200C, used the Cyrix Cx486SLC. This was Cyrix's first processor, which was actually a 386SX with on-board L1 cache and 486 instructions, being known as a ""hybrid chip"". The processor was clocked at 33 MHz and had 1 KB of L1 cache. It was a 16-bit processor and was pin compatible with the Intel 80386SX. On the bottom of the unit, the motherboard had an empty socket for a Cyrix FasMath co-processor, which could improve floating-point math performance. The 200M and 200C plus models had a Cyrix Cx486SLC2 clocked at 50 MHz, which was 50% faster t" https://en.wikipedia.org/wiki/Natural%20logarithm%20of%202,"The decimal value of the natural logarithm of 2 is approximately The logarithm of 2 in other bases is obtained with the formula The common logarithm in particular is () The inverse of this number is the binary logarithm of 10: (). By the Lindemann–Weierstrass theorem, the natural logarithm of any natural number other than 0 and 1 (more generally, of any positive algebraic number other than 1) is a transcendental number. Series representations Rising alternate factorial This is the well-known ""alternating harmonic series"". Binary rising constant factorial Other series representations using (sums of the reciprocals of decagonal numbers) Involving the Riemann Zeta function ( is the Euler–Mascheroni constant and Riemann's zeta function.) BBP-type representations (See more about Bailey–Borwein–Plouffe (BBP)-type representations.) Applying the three general series for natural logarithm to 2 directly gives: Applying them to gives: Applying them to gives: Applying them to gives: Representation as integrals The natural logarithm of 2 occurs frequently as the result of integration. Some explicit formulas for it include: Other representations The Pierce expansion is The Engel expansion is The cotangent expansion is The simple continued fraction expansion is , which yields rational approximations, the first few of which are 0, 1, 2/3, 7/10, 9/13 and 61/88. This generalized continued fraction: , also expressible as Bootstrapping other logarithms Given a value of , a scheme of computing the logarithms of other integers is to tabulate the logarithms of the prime numbers and in the next layer the logarithms of the composite numbers based on their factorizations This employs In a third layer, the logarithms of rational numbers are computed with , and logarithms of roots via . 
The logarithm of 2 is useful in the sense that the powers of 2 are rather densely distributed; finding powers close to powers of other numbers is comparatively eas" https://en.wikipedia.org/wiki/Software%20design,"Software design is the process by which an agent creates a specification of a software artifact intended to accomplish goals, using a set of primitive components and subject to constraints. The term is sometimes used broadly to refer to ""all the activity involved in conceptualizing, framing, implementing, commissioning, and ultimately modifying"" the software, or more specifically ""the activity following requirements specification and before programming, as ... [in] a stylized software engineering process."" Software design usually involves problem-solving and planning a software solution. This includes both a low-level component and algorithm design and a high-level, architecture design. Overview Software design is the process of envisioning and defining software solutions to one or more sets of problems. One of the main components of software design is the software requirements analysis (SRA). SRA is a part of the software development process that lists specifications used in software engineering. If the software is ""semi-automated"" or user centered, software design may involve user experience design yielding a storyboard to help determine those specifications. If the software is completely automated (meaning no user or user interface), a software design may be as simple as a flow chart or text describing a planned sequence of events. There are also semi-standard methods like Unified Modeling Language and Fundamental modeling concepts. In either case, some documentation of the plan is usually the product of the design. Furthermore, a software design may be platform-independent or platform-specific, depending upon the availability of the technology used for the design. The main difference between software analysis and design is that the output of a software analysis consists of smaller problems to solve. Additionally, the analysis should not be designed very differently across different team members or groups. In contrast, the design focuses on capabilities, a" https://en.wikipedia.org/wiki/Wiles%27s%20proof%20of%20Fermat%27s%20Last%20Theorem,"Wiles's proof of Fermat's Last Theorem is a proof by British mathematician Andrew Wiles of a special case of the modularity theorem for elliptic curves. Together with Ribet's theorem, it provides a proof for Fermat's Last Theorem. Both Fermat's Last Theorem and the modularity theorem were almost universally considered inaccessible to proof by contemporaneous mathematicians, meaning that they were believed to be impossible to prove using current knowledge. Wiles first announced his proof on 23 June 1993 at a lecture in Cambridge entitled ""Modular Forms, Elliptic Curves and Galois Representations"". However, in September 1993 the proof was found to contain an error. One year later on 19 September 1994, in what he would call ""the most important moment of [his] working life"", Wiles stumbled upon a revelation that allowed him to correct the proof to the satisfaction of the mathematical community. The corrected proof was published in 1995. Wiles's proof uses many techniques from algebraic geometry and number theory, and has many ramifications in these branches of mathematics. 
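The alternating harmonic series and the bootstrapping scheme described above can be tried directly. The series converges very slowly, which is why the accelerated representations are listed; the sketch below is illustrative only and the variable names are not from the text.

import math

# ln 2 as the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + ...
def ln2_alternating(n_terms):
    return sum((-1) ** (k + 1) / k for k in range(1, n_terms + 1))

# Convergence is slow: the error after n terms is roughly 1/(2n).
print(ln2_alternating(10), ln2_alternating(100_000), math.log(2))

# Bootstrapping: logarithms of composites from tabulated logarithms of primes,
# e.g. ln 96 = ln(2**5 * 3) = 5 ln 2 + ln 3, and ln(3/2) = ln 3 - ln 2.
LN2, LN3 = math.log(2), math.log(3)
print(5 * LN2 + LN3, math.log(96))
print(LN3 - LN2, math.log(1.5))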
It also uses standard constructions of modern algebraic geometry, such as the category of schemes and Iwasawa theory, and other 20th-century techniques which were not available to Fermat. The proof's method of identification of a deformation ring with a Hecke algebra (now referred to as an R=T theorem) to prove modularity lifting theorems has been an influential development in algebraic number theory. Together, the two papers which contain the proof are 129 pages long, and consumed over seven years of Wiles's research time. John Coates described the proof as one of the highest achievements of number theory, and John Conway called it ""the proof of the [20th] century."" Wiles's path to proving Fermat's Last Theorem, by way of proving the modularity theorem for the special case of semistable elliptic curves, established powerful modularity lifting techniques and opened up entire new approaches to numer" https://en.wikipedia.org/wiki/Vedic%20Mathematics,"Vedic Mathematics is a book written by the Indian monk Bharati Krishna Tirtha, and first published in 1965. It contains a list of mathematical techniques, which were falsely claimed to have been retrieved from the Vedas and to contain advanced mathematical knowledge. Krishna Tirtha failed to produce the sources, and scholars unanimously note it to be a mere compendium of tricks for increasing the speed of elementary mathematical calculations sharing no overlap with historical mathematical developments during the Vedic period. However, there has been a proliferation of publications in this area and multiple attempts to integrate the subject into mainstream education by right-wing Hindu nationalist governments. Contents The book contains metaphorical aphorisms in the form of sixteen sutras and thirteen sub-sutras, which Krishna Tirtha states allude to significant mathematical tools. The range of their asserted applications spans from topic as diverse as statics and pneumatics to astronomy and financial domains. Tirtha stated that no part of advanced mathematics lay beyond the realms of his book and propounded that studying it for a couple of hours every day for a year equated to spending about two decades in any standardized education system to become professionally trained in the discipline of mathematics. STS scholar S. G. Dani in 'Vedic Mathematics': Myth and Reality states that the book is primarily a compendium of tricks that can be applied in elementary, middle and high school arithmetic and algebra, to gain faster results. The sutras and sub-sutras are abstract literary expressions (for example, ""as much less"" or ""one less than previous one"") prone to creative interpretations; Krishna Tirtha exploited this to the extent of manipulating the same shloka to generate widely different mathematical equivalencies across a multitude of contexts. Source and relation with The Vedas According to Krishna Tirtha, the sutras and other accessory content were found after" https://en.wikipedia.org/wiki/Wireless%20onion%20router,"A wireless onion router is a router that uses Tor to connect securely to a network. The onion router allows the user to connect to the internet anonymously creating an anonymous connection. Tor works using an overlaid network which is free throughout the world, this overlay network is created by using numerous relay points created using volunteer which helps the user hide personal information behind layers of encrypted data like layers of an onion. 
Such routers can be built with a Raspberry Pi by adding a wireless module, or by using the built-in wireless module of later models. The router provides encryption at the seventh layer (the application layer) of the OSI model, which makes the encryption transparent: the user does not have to think about how the data is sent or received. The encrypted data includes the destination and origin IP addresses of the data, and each relay point knows only the previous and the next hop of the encrypted packet. The relay points are selected in a random order, and each can decrypt only a single layer before forwarding the packet to the next hop, where the procedure is repeated until the packet reaches its destination. Applications A wireless router that uses the onion router network can help keep the user safe from hackers and network sniffers, since any traffic they capture appears only as encrypted data. Such routers are small and portable, giving the user the freedom to carry the device and connect to the network from anywhere, and the setup does not require installing the Tor Browser on the workstation. Whistleblowers and NGO workers use this network to pass information or to talk to their families without disclosing identifying information. Beyond this, a wireless onion router serves the same purposes as a normal router: it provides access at a site so that users can connect. Tor can also be used in security-focused operating systems, messengers, and browsers, all of which can be anonymised using the Tor network. " https://en.wikipedia.org/wiki/Product-family%20engineering,"Product-family engineering (PFE), also known as product-line engineering, is based on the ideas of ""domain engineering"" created by the Software Engineering Institute, a term coined by James Neighbors in his 1980 dissertation at University of California, Irvine. Software product lines are quite common in our daily lives, but before a product family can be successfully established, an extensive process has to be followed. This process is known as product-family engineering. Product-family engineering can be defined as a method that creates an underlying architecture of an organization's product platform. It provides an architecture that is based on commonality as well as planned variabilities. The various product variants can be derived from the basic product family, which creates the opportunity to reuse and differentiate on products in the family. Product-family engineering is conceptually similar to the widespread use of vehicle platforms in the automotive industry. Product-family engineering is a relatively new approach to the creation of new products. It focuses on the process of engineering new products in such a way that it is possible to reuse product components and apply variability with decreased costs and time. Product-family engineering is all about reusing components and structures as much as possible. Several studies have proven that using a product-family engineering approach for product development can have several benefits. Here is a list of some of them: Higher productivity Higher quality Faster time-to-market Lower labor needs The Nokia case mentioned below also illustrates these benefits. Overall process The product family engineering process consists of several phases. The three main phases are: Phase 1: Product management Phase 2: Domain engineering Phase 3: Product engineering The process has been modeled on a higher abstraction level. 
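The layered ("onion") encryption described above can be illustrated with a toy three-hop example: the client wraps the payload in one layer of encryption per relay, and each relay peels exactly one layer, learning only the next hop. This is a simplified sketch using symmetric Fernet keys from the cryptography package, not Tor's actual circuit construction or key exchange, and the relay names are made up.

from cryptography.fernet import Fernet

# One symmetric key per relay (in a real circuit these would be negotiated per hop).
relays = ["entry", "middle", "exit"]
keys = {name: Fernet.generate_key() for name in relays}

def wrap(payload: bytes, path):
    # Client side: encrypt for the exit relay first, then wrap outward toward the entry relay.
    data = payload
    for name in reversed(path):
        data = Fernet(keys[name]).encrypt(data)
    return data

def peel(onion: bytes, name):
    # Relay side: each relay removes exactly one layer with its own key.
    return Fernet(keys[name]).decrypt(onion)

onion = wrap(b"hello from the client", relays)
for name in relays:                 # the packet crosses entry -> middle -> exit
    onion = peel(onion, name)
print(onion)                        # b'hello from the client' emerges only after the last layer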
This has the advantage that it can be applied to all kinds of product lines and families, not on" https://en.wikipedia.org/wiki/OSIAN,"OSIAN, or Open Source IPv6 Automation Network, is a free and open-source implementation of IPv6 networking for wireless sensor networks (WSNs). OSIAN extends TinyOS, which started as a collaboration between the University of California, Berkeley in co-operation with Intel Research and Crossbow Technology, and has since grown to be an international consortium, the TinyOS Alliance. OSIAN brings direct Internet-connectivity to smartdust technology. Design Architecturally, OSIAN treats TinyOS as the underlying operating system providing hardware drivers, while OSIAN itself adds Internet networking capabilities. Users are able to download and install OSIAN-enabled firmware to their embedded hardware, form a PPP connection with their computer, and communicate raw IPv6 UDP to other wireless sensors from their favorite programming language on their computer. OSIAN is developed using a style very much like the development of Linux, which requires peer reviews and unit testing before any code moves into core repositories. Platforms OSIAN is designed for deeply embedded systems with very small amounts of memory. One primary platform contains a TI MSP430-based CC430 system-on-a-chip, which contains 32 kB ROM and 4 kB RAM. See also TinyOS Contiki 6LoWPAN External links SuRF Developer Kit supporting OSIAN Wireless sensor network Embedded systems" https://en.wikipedia.org/wiki/Kostant%27s%20convexity%20theorem,"In mathematics, Kostant's convexity theorem, introduced by , states that the projection of every coadjoint orbit of a connected compact Lie group into the dual of a Cartan subalgebra is a convex set. It is a special case of a more general result for symmetric spaces. Kostant's theorem is a generalization of a result of , and for hermitian matrices. They proved that the projection onto the diagonal matrices of the space of all n by n complex self-adjoint matrices with given eigenvalues Λ = (λ1, ..., λn) is the convex polytope with vertices all permutations of the coordinates of Λ. Kostant used this to generalize the Golden–Thompson inequality to all compact groups. Compact Lie groups Let K be a connected compact Lie group with maximal torus T and Weyl group W = NK(T)/T. Let their Lie algebras be and . Let P be the orthogonal projection of onto for some Ad-invariant inner product on . Then for X in , P(Ad(K)⋅X) is the convex polytope with vertices w(X) where w runs over the Weyl group. Symmetric spaces Let G be a compact Lie group and σ an involution with K a compact subgroup fixed by σ and containing the identity component of the fixed point subgroup of σ. Thus G/K is a symmetric space of compact type. Let and be their Lie algebras and let σ also denote the corresponding involution of . Let be the −1 eigenspace of σ and let be a maximal Abelian subspace. Let Q be the orthogonal projection of onto for some Ad(K)-invariant inner product on . Then for X in , Q(Ad(K)⋅X) is the convex polytope with vertices the w(X) where w runs over the restricted Weyl group (the normalizer of in K modulo its centralizer). The case of a compact Lie group is the special case where G = K × K, K is embedded diagonally and σ is the automorphism of G interchanging the two factors. Proof for a compact Lie group Kostant's proof for symmetric spaces is given in . 
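The OSIAN description above says applications can speak raw IPv6 UDP to sensor nodes from an ordinary programming language. A minimal sketch of such a sender using Python's standard socket module follows; the link-local address, interface name, port, and payload are placeholders for illustration, not values from the project.

import socket

# Hypothetical link-local IPv6 address of a sensor node, reached over a PPP link (placeholders).
NODE_ADDR = "fe80::1234:5678:9abc:def0%ppp0"
NODE_PORT = 50000

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # getaddrinfo turns the %ppp0 scope suffix into the 4-tuple IPv6 socket address
    family, _, _, _, sockaddr = socket.getaddrinfo(
        NODE_ADDR, NODE_PORT, socket.AF_INET6, socket.SOCK_DGRAM)[0]
    sock.sendto(b"toggle-led", sockaddr)
finally:
    sock.close()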
There is an elementary proof just for compact Lie groups using similar ideas, due to : it is based on a generaliza" https://en.wikipedia.org/wiki/Infix%20notation,"Infix notation is the notation commonly used in arithmetical and logical formulae and statements. It is characterized by the placement of operators between operands—""infixed operators""—such as the plus sign in . Usage Binary relations are often denoted by an infix symbol such as set membership a ∈ A when the set A has a for an element. In geometry, perpendicular lines a and b are denoted and in projective geometry two points b and c are in perspective when while they are connected by a projectivity when Infix notation is more difficult to parse by computers than prefix notation (e.g. + 2 2) or postfix notation (e.g. 2 2 +). However many programming languages use it due to its familiarity. It is more used in arithmetic, e.g. 5 × 6. Further notations Infix notation may also be distinguished from function notation, where the name of a function suggests a particular operation, and its arguments are the operands. An example of such a function notation would be S(1, 3) in which the function S denotes addition (""sum""): . Order of operations In infix notation, unlike in prefix or postfix notations, parentheses surrounding groups of operands and operators are necessary to indicate the intended order in which operations are to be performed. In the absence of parentheses, certain precedence rules determine the order of operations. See also Tree traversal: Infix (In-order) is also a tree traversal order. It is described in a more detailed manner on this page. Calculator input methods: comparison of notations as used by pocket calculators Postfix notation, also called Reverse Polish notation Prefix notation, also called Polish notation Shunting yard algorithm, used to convert infix notation to postfix notation or to a tree Operator (computer programming) Subject Verb Object" https://en.wikipedia.org/wiki/Catabiosis,"Catabiosis is the process of growing older, aging and physical degradation. The word comes from Greek ""kata""—down, against, reverse and ""biosis""—way of life and is generally used to describe senescence and degeneration in living organisms and biophysics of aging in general. One of the popular catabiotic theories is the entropy theory of aging, where aging is characterized by thermodynamically favourable increase in structural disorder. Living organisms are open systems that take free energy from the environment and offload their entropy as waste. However, basic components of living systems—DNA, proteins, lipids and sugars—tend towards the state of maximum entropy while continuously accumulating damages causing catabiosis of the living structure. Catabiotic force on the contrary is the influence exerted by living structures on adjoining cells, by which the latter are developed in harmony with the primary structures. See also Onpedia definition of catabiosis Catabiotic force Dictionary.com - Catabiosis DNA damage theory of aging Medical aspects of death Biology terminology Senescence" https://en.wikipedia.org/wiki/Photon%20noise,"Photon noise is the randomness in signal associated with photons arriving at a detector. For a simple black body emitting on an absorber, the noise-equivalent power is given by where is the Planck constant, is the central frequency, is the bandwidth, is the occupation number and is the optical efficiency. 
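The Schur-Horn special case quoted in the Kostant passage above (the diagonals of all Hermitian matrices with eigenvalues Λ fill out the convex hull of the permutations of Λ) can be probed numerically: conjugate diag(Λ) by random unitaries and check that each resulting diagonal is majorized by Λ, which is the standard description of that permutohedron. A rough sketch with NumPy; the particular Λ and the sample count are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
lam = np.array([5.0, 2.0, 1.0, -3.0])   # prescribed eigenvalues (arbitrary example)

def random_unitary(n):
    # Random unitary via QR of a complex Gaussian matrix (column phases normalized)
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def majorized_by(d, lam, tol=1e-9):
    # True if vector d is majorized by lam: equal sums and dominated partial sums
    d, lam = np.sort(d)[::-1], np.sort(lam)[::-1]
    return (np.all(np.cumsum(d) <= np.cumsum(lam) + tol)
            and abs(d.sum() - lam.sum()) < tol)

for _ in range(1000):
    U = random_unitary(len(lam))
    H = U @ np.diag(lam) @ U.conj().T            # Hermitian matrix with eigenvalues lam
    assert majorized_by(np.real(np.diag(H)), lam)
print("all sampled diagonals lie in the permutohedron of lam")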
The first term is essentially shot noise whereas the second term is related to the bosonic character of photons, variously known as ""Bose noise"" or ""wave noise"". At low occupation number, such as in the visible spectrum, the shot noise term dominates. At high occupation number, however, typical of the radio spectrum, the Bose term dominates. See also Hanbury Brown and Twiss effect Phonon noise" https://en.wikipedia.org/wiki/Signal%20averaging,"Signal averaging is a signal processing technique applied in the time domain, intended to increase the strength of a signal relative to noise that is obscuring it. By averaging a set of replicate measurements, the signal-to-noise ratio (SNR) will be increased, ideally in proportion to the square root of the number of measurements. Deriving the SNR for averaged signals Assumed that Signal is uncorrelated to noise, and noise is uncorrelated : . Signal power is constant in the replicate measurements. Noise is random, with a mean of zero and constant variance in the replicate measurements: and . We (canonically) define Signal-to-Noise ratio as . Noise power for sampled signals Assuming we sample the noise, we get a per-sample variance of . Averaging a random variable leads to the following variance: . Since noise variance is constant : , demonstrating that averaging realizations of the same, uncorrelated noise reduces noise power by a factor of , and reduces noise level by a factor of . Signal power for sampled signals Considering vectors of signal samples of length : , the power of such a vector simply is . Again, averaging the vectors , yields the following averaged vector . In the case where , we see that reaches a maximum of . In this case, the ratio of signal to noise also reaches a maximum, . This is the oversampling case, where the observed signal is correlated (because oversampling implies that the signal observations are strongly correlated). Time-locked signals Averaging is applied to enhance a time-locked signal component in noisy measurements; time-locking implies that the signal is observation-periodic, so we end up in the maximum case above. Averaging odd and even trials A specific way of obtaining replicates is to average all the odd and even trials in separate buffers. This has the advantage of allowing for comparison of even and odd results from interleaved trials. An average of odd and even averages generates" https://en.wikipedia.org/wiki/Teknomo%E2%80%93Fernandez%20algorithm,"The Teknomo–Fernandez algorithm (TF algorithm), is an efficient algorithm for generating the background image of a given video sequence. By assuming that the background image is shown in the majority of the video, the algorithm is able to generate a good background image of a video in -time using only a small number of binary operations and Boolean bit operations, which require a small amount of memory and has built-in operators found in many programming languages such as C, C++, and Java. History People tracking from videos usually involves some form of background subtraction to segment foreground from background. Once foreground images are extracted, then desired algorithms (such as those for motion tracking, object tracking, and facial recognition) may be executed using these images. However, background subtraction requires that the background image is already available and unfortunately, this is not always the case. Traditionally, the background image is searched for manually or automatically from the video images when there are no objects. 
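The square-root-of-N improvement stated in the signal averaging passage above can be demonstrated with a quick simulation: average increasing numbers of replicate noisy measurements of a fixed, time-locked signal and watch the residual noise fall as 1/sqrt(N). The signal shape and noise level below are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t)   # fixed, time-locked signal
sigma = 2.0                          # per-measurement noise standard deviation

for n in (1, 4, 16, 64, 256):
    trials = signal + rng.normal(0.0, sigma, size=(n, t.size))  # n replicate measurements
    avg = trials.mean(axis=0)
    residual_rms = np.sqrt(np.mean((avg - signal) ** 2))
    print(n, residual_rms, sigma / np.sqrt(n))   # measured vs predicted noise level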
More recently, automatic background generation through object detection, median filtering, medoid filtering, approximated median filtering, linear predictive filtering, non-parametric models, Kalman filtering, and adaptive smoothing has been suggested; however, most of these methods have high computational complexity and are resource-intensive. The Teknomo–Fernandez algorithm is also an automatic background generation algorithm. Its advantage, however, is its computational speed of only -time, depending on the resolution of the image, with its accuracy gained within a manageable number of frames. At least three frames from a video are needed to produce the background image, assuming that for every pixel position the background occurs in the majority of the frames. Furthermore, it can be performed for both grayscale and color videos. Assumptions The camera is stationary. The light of the environment changes only slowly" https://en.wikipedia.org/wiki/Local%20Management%20Interface,"Local Management Interface (LMI) is a term for some signaling standards used in networks, namely Frame Relay and Carrier Ethernet. Frame Relay LMI is a set of signalling standards between routers and Frame Relay switches. Communication takes place between a router and the first Frame Relay switch to which it is connected. Information about keepalives, global addressing, IP multicast and the status of virtual circuits is commonly exchanged using LMI. There are three standards for LMI: Using DLCI 0: ANSI's T1.617 Annex D standard ITU-T's Q.933 Annex A standard Using DLCI 1023: The ""Gang of Four"" standard, developed by Cisco, DEC, StrataCom and Nortel Carrier Ethernet Ethernet Local Management Interface (E-LMI) is an Ethernet layer operation, administration, and management (OAM) protocol defined by the Metro Ethernet Forum (MEF) for Carrier Ethernet networks. It provides information that enables auto configuration of customer edge (CE) devices." https://en.wikipedia.org/wiki/City%20Nature%20Challenge,"The City Nature Challenge is an annual, global, community science competition to document urban biodiversity. The challenge is a bioblitz that engages residents and visitors to find and document plants, animals, and other organisms living in urban areas. The goals are to engage the public in the collection of biodiversity data, with three awards each year for the cities that make the most observations, find the most species, and engage the most people. Participants primarily use the iNaturalist app and website to document their observations, though some areas use other platforms, such as Natusfera in Spain. The observation period is followed by several days of identification and the final announcement of winners. Participants need not know how to identify the species; help is provided through iNaturalist's automated species identification feature as well as the community of users on iNaturalist, including professional scientists and expert naturalists. History The City Nature Challenge was founded by Alison Young and Rebecca Johnson of the California Academy of Sciences and Lila Higgins of the Natural History Museum of Los Angeles County. The first challenge was in the spring of 2016 between Los Angeles and San Francisco. Participants documented over 20,000 observations with the iNaturalist platform. In 2017, the challenge expanded to 16 cities across the United States and participants collected over 125,000 observations of wildlife in 5 days. In 2018, the challenge expanded to 68 cities across the world. 
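The Boolean core behind the Teknomo–Fernandez algorithm's speed can be sketched as a bitwise majority vote over three frames: for every pixel and every bit plane, keep the bit value on which at least two of the three frames agree. The published algorithm applies this idea hierarchically over many frame triples; the snippet below shows only a single three-frame step, with small synthetic frames as stand-ins, and should be read as an illustration rather than the full method.

import numpy as np

def majority3(f1, f2, f3):
    # Bitwise majority of three equally-shaped uint8 frames: (a & b) | (a & c) | (b & c)
    return (f1 & f2) | (f1 & f3) | (f2 & f3)

rng = np.random.default_rng(2)
background = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Three frames that each show the background except for one foreground pixel
# that sits at a different position in every frame.
frames = [background.copy() for _ in range(3)]
for k, frame in enumerate(frames):
    frame[0, k] = 255

estimate = majority3(*frames)
print(np.array_equal(estimate, background))   # True: the majority vote recovers the background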
In four days, over 441,000 observations of more than 18,000 species were observed, and over 17,000 people participated. The 2019 challenge more than doubled in scale, with almost a million observations of over 31,000 species observed by around 35,000 people. Taking the competition beyond its US roots, the 2019 event was a much more international affair, with the winning city for observations and species coming from Africa (Cape Town), and three South American " https://en.wikipedia.org/wiki/Process%20control%20monitoring,"In the application of integrated circuits, process control monitoring (PCM) is the procedure followed to obtain detailed information about the process used. PCM is associated with designing and fabricating special structures that can monitor technology specific parameters such as Vth in CMOS and Vbe in bipolars. These structures are placed across the wafer at specific locations along with the chip produced so that a closer look into the process variation is possible. Integrated circuits" https://en.wikipedia.org/wiki/List%20of%20atmospheric%20optical%20phenomena,"Atmospheric optical phenomena include: Afterglow Airglow Alexander's band, the dark region between the two bows of a double rainbow. Alpenglow Anthelion Anticrepuscular rays Aurora Auroral light (northern and southern lights, aurora borealis and aurora australis) Belt of Venus Brocken Spectre Circumhorizontal arc Circumzenithal arc Cloud iridescence Crepuscular rays Earth's shadow Earthquake lights Glories Green flash Halos, of Sun or Moon, including sun dogs Haze Heiligenschein or halo effect, partly caused by the opposition effect Ice blink Light pillar Lightning Mirages (including Fata Morgana) Monochrome Rainbow Moon dog Moonbow Nacreous cloud/Polar stratospheric cloud Rainbow Subsun Sun dog Tangent arc Tyndall effect Upper-atmospheric lightning, including red sprites, Blue jets, and ELVES Water sky See also" https://en.wikipedia.org/wiki/List%20of%20aperiodic%20sets%20of%20tiles,"In geometry, a tiling is a partition of the plane (or any other geometric setting) into closed sets (called tiles), without gaps or overlaps (other than the boundaries of the tiles). A tiling is considered periodic if there exist translations in two independent directions which map the tiling onto itself. Such a tiling is composed of a single fundamental unit or primitive cell which repeats endlessly and regularly in two independent directions. An example of such a tiling is shown in the adjacent diagram (see the image description for more information). A tiling that cannot be constructed from a single primitive cell is called nonperiodic. If a given set of tiles allows only nonperiodic tilings, then this set of tiles is called aperiodic. The tilings obtained from an aperiodic set of tiles are often called aperiodic tilings, though strictly speaking it is the tiles themselves that are aperiodic. (The tiling itself is said to be ""nonperiodic"".) The first table explains the abbreviations used in the second table. The second table contains all known aperiodic sets of tiles and gives some additional basic information about each set. This list of tiles is still incomplete. Explanations List" https://en.wikipedia.org/wiki/Proof%20of%20impossibility,"In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. 
Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility often are the resolutions to decades or centuries of work attempting to find a solution, eventually proving that there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic. The irrationality of the square root of 2 is one of the oldest proofs of impossibility. It shows that it is impossible to express the square root of 2 as a ratio of two integers. Another consequential proof of impossibility was Ferdinand von Lindemann's proof in 1882, which showed that the problem of squaring the circle cannot be solved because the number π is transcendental (i.e., non-algebraic), and that only a subset of the algebraic numbers can be constructed by compass and straightedge. Two other classical problems—trisecting the general angle and doubling the cube—were also proved impossible in the 19th century, and all of these problems gave rise to research into more complicated mathematical structures. A problem that arose in the 16th century was creating a general formula using radicals to express the solution of any polynomial equation of fixed degree k, where k ≥ 5. In the 1820s, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) showed this to be impossible, using concepts such as solvable groups from Galois theory—a new sub-field of abstract algebra. Some of the most important proofs of impossibility found in the 20th century were those related to undecidability" https://en.wikipedia.org/wiki/List%20of%20numerical-analysis%20software,"Listed here are notable end-user computer applications intended for use with numerical or data analysis: Numerical-software packages General-purpose computer algebra systems Interface-oriented Language-oriented Historically significant Expensive Desk Calculator written for the TX-0 and PDP-1 in the late 1950s or early 1960s. S is an (array-based) programming language with strong numerical support. R is an implementation of the S language. See also" https://en.wikipedia.org/wiki/Eb/N0,"In digital communication or data transmission, Eb/N0 (energy per bit to noise power spectral density ratio) is a normalized signal-to-noise ratio (SNR) measure, also known as the ""SNR per bit"". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account. As the description implies, Eb is the signal energy associated with each user data bit; it is equal to the signal power divided by the user bit rate (not the channel symbol rate). If signal power is in watts and bit rate is in bits per second, Eb is in units of joules (watt-seconds). N0 is the noise spectral density, the noise power in a 1 Hz bandwidth, measured in watts per hertz or joules. These are the same units as Eb, so the ratio Eb/N0 is dimensionless; it is frequently expressed in decibels. Eb/N0 directly indicates the power efficiency of the system without regard to modulation type, error correction coding or signal bandwidth (including any use of spread spectrum). This also avoids any confusion as to which of several definitions of ""bandwidth"" to apply to the signal. 
But when the signal bandwidth is well defined, Eb/N0 is also equal to the signal-to-noise ratio (SNR) in that bandwidth divided by the ""gross"" link spectral efficiency in bit/(s⋅Hz), where the bits in this context again refer to user data bits, irrespective of error correction information and modulation type. Eb/N0 must be used with care on interference-limited channels since additive white noise (with constant noise density N0) is assumed, and interference is not always noise-like. In spread spectrum systems (e.g., CDMA), the interference is sufficiently noise-like that it can be represented as I0 and added to the thermal noise N0 to produce the overall ratio Eb/(N0 + I0). Relation to carrier-to-noise ratio Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection" https://en.wikipedia.org/wiki/Square-law%20detector,"In electronic signal processing, a square law detector is a device that produces an output proportional to the square of some input. For example, in demodulating radio signals, a semiconductor diode can be used as a square law detector, providing an output current proportional to the square of the amplitude of the input voltage over some range of input amplitudes. A square law detector provides an output directly proportional to the power of the input electrical signal." https://en.wikipedia.org/wiki/Thomas%20Baxter%20%28mathematician%29,"Thomas Baxter (fl. 1732–1740) was a schoolmaster and mathematician who published an erroneous method of squaring the circle. He was derided as a ""pseudo-mathematician"" by F. Y. Edgeworth, writing for the Dictionary of National Biography. When he was master of a private school at Crathorne, North Yorkshire, Baxter composed a book entitled The Circle squared (London: 1732), published in octavo. The mathematical book begins with the untrue assertion that ""if the diameter of a circle be unity or one, the circumference of that circle will be 3.0625"", where the value should correctly be pi. From this incorrect assumption, Baxter proves fourteen geometric theorems on circles, alongside some others on cones and ellipses, which Edgeworth refers to as of ""equal absurdity"" to Baxter's other assertions. Thomas Gent, who published the work, wrote in his reminiscences, in The Life of Mr. Thomas Gent, that ""as it never proved of any effect, it was converted to waste paper, to the great mortification of the author"". This book has received harsh reviews from modern mathematicians and scholars. Antiquary Edward Peacock referred to it as ""no doubt, great rubbish"". Mathematician Augustus De Morgan included Baxter's proof among his Budget of Paradoxes (1872), dismissing it as an absurd work. The work was the reason Edgeworth gave Baxter the epithet, ""pseudo-mathematician"". Baxter published another work, Matho, or the Principles of Astronomy and Natural Philosophy accommodated to the Use of Younger Persons (London: 1740). Unlike Baxter's other work, this volume enjoyed considerable popularity in its time." https://en.wikipedia.org/wiki/Computer%20hardware,"Computer hardware includes the physical parts of a computer, such as the case, central processing unit (CPU), random access memory (RAM), monitor, mouse, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is the set of instructions that can be stored and run by hardware. 
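Because SNR equals Eb/N0 multiplied by the gross spectral efficiency Rb/B, converting between the two is a one-line calculation; the bit rate, bandwidth and SNR figures below are arbitrary example values.

import math

def ebn0_db_from_snr_db(snr_db, bit_rate_bps, bandwidth_hz):
    # Eb/N0 in dB from SNR in dB and the gross spectral efficiency Rb/B.
    spectral_efficiency = bit_rate_bps / bandwidth_hz   # bit/s per Hz
    return snr_db - 10 * math.log10(spectral_efficiency)

# Example: 2 Mbit/s in a 1 MHz channel (2 bit/s per Hz) at 13 dB SNR.
print(ebn0_db_from_snr_db(13.0, 2e6, 1e6))   # 13 - 10*log10(2) ≈ 9.99 dB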
Hardware is so-termed because it is ""hard"" or rigid with respect to changes, whereas software is ""soft"" because it is easy to change. Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware. Von Neumann architecture The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system. Types of computer systems Personal computer The personal computer is one of the most common types of computer due to its versatility and relatively low price. Desktop personal computers have a monitor, a keyboard, a mouse, and a computer case. The computer case holds the motherboard, fixed or removable disk drives for data storage, the power supply, and may contain other peripheral devices such as modems or network interfaces. Some models of desktop computers integrated the monitor and keyboard into the same case as the" https://en.wikipedia.org/wiki/Network%20browser,"A network browser is a tool used to browse a computer network. An example of this is My Network Places (or Network Neighborhood in earlier versions of Microsoft Windows). An actual program called Network Browser is offered in Mac OS 9. See also Browser service Computer networking" https://en.wikipedia.org/wiki/List%20of%20named%20matrices,"This article lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions of the entries, including constant matrices. Important examples include the identity matrix given by and the zero matrix of dimension . For example: . Further ways of classifying matrices are according to their eigenvalues, or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas. Constant matrices The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denoted aij. The table below uses the Kronecker delta δij for two integers i and j which is 1 if i = j and 0 else. Specific patterns for entries The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices only, that is matrices with the same number of columns and rows. The main diagonal of a square matrix is the diagonal joining the upper left corner and the lower right one or equivalently the entries ai,i. 
The other diagonal is called anti-diagonal (or counter-diagonal). Matrices satisfying some equations A number of matrix-related notions are about properties of products or inverses of the given matrix. The matrix product of an m-by-n matrix A and an n-by-k matrix B is the m-by-k matrix C whose entries are given by cij = ai1b1j + ai2b2j + ... + ainbnj. This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is to say AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity. An inverse of square matrix" https://en.wikipedia.org/wiki/In-band%20control,"In-band control is a characteristic of network protocols with which data control is regulated. In-band control passes control data on the same connection as main data. Protocols that use in-band control include HTTP and SMTP. This is as opposed to Out-of-band control used by protocols such as FTP. Example Here is an example of an SMTP client-server interaction: Server: 220 example.com Client: HELO example.net Server: 250 Hello example.net, pleased to meet you Client: MAIL FROM:<jane.doe@example.net> Server: 250 jane.doe@example.net... Sender ok Client: RCPT TO:<john.doe@example.com> Server: 250 john.doe@example.com ... Recipient ok Client: DATA Server: 354 Enter mail, end with ""."" on a line by itself Client: Do you like ketchup? Client: How about pickles? Client: . Server: 250 Message accepted for delivery Client: QUIT Server: 221 example.com closing connection SMTP is in-band because the control messages, such as ""HELO"" and ""MAIL FROM"", are sent in the same stream as the actual message content. See also Out-of-band control Computer networks" https://en.wikipedia.org/wiki/IEEE%201451,"IEEE 1451 is a set of smart transducer interface standards developed by the Institute of Electrical and Electronics Engineers (IEEE) Instrumentation and Measurement Society's Sensor Technology Technical Committee describing a set of open, common, network-independent communication interfaces for connecting transducers (sensors or actuators) to microprocessors, instrumentation systems, and control/field networks. One of the key elements of these standards is the definition of Transducer electronic data sheets (TEDS) for each transducer. The TEDS is a memory device attached to the transducer, which stores transducer identification, calibration, correction data, and manufacturer-related information. The goal of the IEEE 1451 family of standards is to allow the access of transducer data through a common set of interfaces whether the transducers are connected to systems or networks via a wired or wireless means. Transducer electronic data sheet A transducer electronic data sheet (TEDS) is a standardized method of storing transducer (sensors or actuators) identification, calibration, correction data, and manufacturer-related information. TEDS formats are defined in the IEEE 1451 set of smart transducer interface standards developed by the IEEE Instrumentation and Measurement Society's Sensor Technology Technical Committee that describe a set of open, common, network-independent communication interfaces for connecting transducers to microprocessors, instrumentation systems, and control/field networks. One of the key elements of the IEEE 1451 standards is the definition of TEDS for each transducer. The TEDS can be implemented as a memory device attached to the transducer and containing information needed by a measurement instrument or control system to interface with a transducer. TEDS can, however, be implemented in two ways. 
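A quick numerical check of the matrix product definition above and of the failure of commutativity; the specific matrices are arbitrary.

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

# cij = sum over l of ail * blj
C = A @ B
print(C)                              # [[2 1], [4 3]]
print(B @ A)                          # [[3 4], [1 2]] -- AB != BA in general
print(np.array_equal(A @ B, B @ A))   # False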
First, the TEDS can reside in embedded memory, typically an EEPROM, within the transducer itself which is connected to the measurement instrume" https://en.wikipedia.org/wiki/Node%20%28networking%29,"In telecommunications networks, a node (, ‘knot’) is either a redistribution point or a communication endpoint. The definition of a node depends on the network and protocol layer referred to. A physical network node is an electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communication channel. A passive distribution point such as a distribution frame or patch panel is consequently not a node. Computer networks In data communication, a physical network node may either be data communication equipment (DCE) such as a modem, hub, bridge or switch; or data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer. If the network in question is a local area network (LAN) or wide area network (WAN), every LAN or WAN node that participates on the data link layer must have a network address, typically one for each network interface controller it possesses. Examples are computers, a DSL modem with Ethernet interface and wireless access point. Equipment, such as an Ethernet hub or modem with serial interface, that operates only below the data link layer does not require a network address. If the network in question is the Internet or an intranet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, some data-link-layer devices such as switches, bridges and wireless access points do not have an IP host address (except sometimes for administrative purposes), and are not considered to be Internet nodes or hosts, but are considered physical network nodes and LAN nodes. Telecommunications In the fixed telephone network, a node may be a public or private telephone exchange, a remote concentrator or a computer providing some intelligent network service. In cellular communication, switching points and databases such as the base station controller, home location registe" https://en.wikipedia.org/wiki/Xputer,"The Xputer is a design for a reconfigurable computer, proposed by computer scientist Reiner Hartenstein. Hartenstein uses various terms to describe the various innovations in the design, including config-ware, flow-ware, morph-ware, and ""anti-machine"". The Xputer represents a move away from the traditional Von Neumann computer architecture, to a coarse-grained ""soft Arithmetic logic unit (ALU)"" architecture. Parallelism is achieved by configurable elements known as reconfigurable datapath arrays (rDPA), organized in a two-dimensional array of ALU's similar to the KressArray. Architecture The Xputer architecture is data-stream-based, and is the counterpart of the instruction-based von Neumann computer architecture. The Xputer architecture was one of the first coarse-grained reconfigurable architectures, and consists of a reconfigurable datapath array (rDPA) organized as a two-dimensional array of ALUs (rDPU). The bus-width between ALU's were 32-bit in the first version of the Xputer. The ALUs (also known as rDPUs) are used for computing a single mathematical operation, such as addition, subtraction or multiplication, and can also be used purely for routing. ALUs are mesh-connected via three types of connections, and data-flow along these connections are managed by an address generation unit. 
Nearest neighbour (connections between neighbouring ALUs) Row/column back-buses Global bus (a single global bus for interconnection between further ALUs) Programs for the Xputer are written in the C language, and compiled for usage on the Xputer using the CoDeX compiler written by the author. The CoDeX compiler maps suitable portions of the C program onto the Xputer's rDPA fabric. The remainder of the program is executed on the host system, such as a personal computer. rDPA A reconfigurable datapath array (rDPA) is a semiconductor device containing reconfigurable data path units and programmable interconnects, first proposed by Rainer Kress in 1993, at the University of K" https://en.wikipedia.org/wiki/Cahen%27s%20constant,"In mathematics, Cahen's constant is defined as the value of an infinite series of unit fractions with alternating signs: Here denotes Sylvester's sequence, which is defined recursively by Combining these fractions in pairs leads to an alternative expansion of Cahen's constant as a series of positive unit fractions formed from the terms in even positions of Sylvester's sequence. This series for Cahen's constant forms its greedy Egyptian expansion: This constant is named after (also known for the Cahen–Mellin integral), who was the first to introduce it and prove its irrationality. Continued fraction expansion The majority of naturally occurring mathematical constants have no known simple patterns in their continued fraction expansions. Nevertheless, the complete continued fraction expansion of Cahen's constant is known: it is where the sequence of coefficients is defined by the recurrence relation All the partial quotients of this expansion are squares of integers. Davison and Shallit made use of the continued fraction expansion to prove that is transcendental. Alternatively, one may express the partial quotients in the continued fraction expansion of Cahen's constant through the terms of Sylvester's sequence: To see this, we prove by induction on that . Indeed, we have , and if holds for some , then where we used the recursion for in the first step respectively the recursion for in the final step. As a consequence, holds for every , from which it is easy to conclude that . Best approximation order Cahen's constant has best approximation order . That means, there exist constants such that the inequality has infinitely many solutions , while the inequality has at most finitely many solutions . This implies (but is not equivalent to) the fact that has irrationality measure 3, which was first observed by . To give a proof, denote by the sequence of convergents to Cahen's constant (that means, ). But now it follows from and the recursi" https://en.wikipedia.org/wiki/Mathematical%20Cranks,"Mathematical Cranks is a book on pseudomathematics and the cranks who create it, written by Underwood Dudley. It was published by the Mathematical Association of America in their MAA Spectrum book series in 1992 (). Topics Previously, Augustus De Morgan wrote in A Budget of Paradoxes about cranks in multiple subjects, and Dudley wrote a book about angle trisection. However, this is the first book to focus on mathematical crankery as a whole. The book consists of 57 essays, loosely organized by the most common topics in mathematics for cranks to focus their attention on. 
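For the Cahen's constant discussion above: with Sylvester's sequence 2, 3, 7, 43, 1807, ... (each term is the previous term squared, minus the previous term, plus one, a fact stated in the standard definition), Cahen's constant is the alternating series Σ (−1)^n / (s_n − 1). A short exact-arithmetic sketch; the number of terms kept is an arbitrary choice, and convergence is extremely fast.

from fractions import Fraction

def sylvester(k):
    # First k terms of Sylvester's sequence: 2, 3, 7, 43, 1807, ...
    terms, s = [], 2
    for _ in range(k):
        terms.append(s)
        s = s * s - s + 1
    return terms

# Cahen's constant as the alternating sum of 1/(s_n - 1).
c = sum(Fraction((-1) ** n, s - 1) for n, s in enumerate(sylvester(8)))
print(float(c))   # ≈ 0.6434105462846...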
The ""top ten"" of these topics, as listed by reviewer Ian Stewart, are, in order: squaring the circle, angle trisection, Fermat's Last Theorem, non-Euclidean geometry and the parallel postulate, the golden ratio, perfect numbers, the four color theorem, advocacy for duodecimal and other non-standard number systems, Cantor's diagonal argument for the uncountability of the real numbers, and doubling the cube. Other common topics for crankery, collected by Dudley, include calculations for the perimeter of an ellipse, roots of quintic equations, Fermat's little theorem, Gödel's incompleteness theorems, Goldbach's conjecture, magic squares, divisibility rules, constructible polygons, twin primes, set theory, statistics, and the Van der Pol oscillator. As David Singmaster writes, many of these topics are the subject of mainstream mathematics ""and only become crankery in extreme cases"". The book omits or passes lightly over other topics that apply mathematics to crankery in other areas, such as numerology and pyramidology. Its attitude towards the cranks it covers is one of ""sympathy and understanding"", and in order to keep the focus on their crankery it names them only by initials. The book also attempts to analyze the motivation and psychology behind crankery, and to provide advice to professional mathematicians on how to respond to cranks. Despite his work on the subject, which has ""become " https://en.wikipedia.org/wiki/Interdigitation,"Interdigitation is the interlinking of biological components that resembles the fingers of two hands being locked together. It can be a naturally occurring or man-made state. Examples Naturally occurring interdigitation includes skull sutures that develop during periods of brain growth, and which remain thin and straight, and later develop complex fractal interdigitations that provide interlocking strength. A layer of the retina where photoreception occurs is called the interdigitation zone. Adhesion or diffusive bonding occurs when sections of polymer chains from one surface interdigitate with those of an adjacent surface. In the dermis, dermal papillae (DP) (singular papilla, diminutive of Latin papula, 'pimple') are small, nipple-like extensions of the dermis into the epidermis, also known as interdigitations. The distal convoluted tubule (DCT), a portion of kidney nephron, can be recognized by several distinct features, including lateral membrane interdigitations with neighboring cells. Some hypotheses contend that crown shyness, the interdigitation of canopy branches, leads to ""reciprocal pruning"" of adjacent trees. Interdigitation is also found in biological research. Interdigitation fusion is a method of preparing calcium- and phosphate-loaded liposomes. Drugs inserted in the bilayer biomembrane may influence the lateral organization of the lipid membrane, with interdigitation of the membrane to fill volume voids. A similar interdigitation process involves investigating dissipative particle dynamics (DPD) simulations by adding alcohol molecules to the bilayers of double-tail lipids. Pressure-induced interdigitation is used to study hydrostatic pressure of bicellular dispersions containing anionic lipids." https://en.wikipedia.org/wiki/Phenomics,"Phenomics is the systematic study of traits that make up a phenotype. It was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. 
Phenomics is concerned with the measurement of the phenotype where a phenome is a set of traits (physical and biochemical traits) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organisms phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exist" https://en.wikipedia.org/wiki/Upper%20and%20lower%20bounds,"In mathematics, particularly in order theory, an upper bound or majorant of a subset of some preordered set is an element of that is greater than or equal to every element of . Dually, a lower bound or minorant of is defined to be an element of that is less than or equal to every element of . A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively bounded from below or minorized) by that bound. The terms bounded above (bounded below) are also used in the mathematical literature for sets that have upper (respectively lower) bounds. Examples For example, is a lower bound for the set (as a subset of the integers or of the real numbers, etc.), and so is . On the other hand, is not a lower bound for since it is not smaller than every element in . The set has as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for that . Every subset of the natural numbers has a lower bound since the natural numbers have a least element (0 or 1, depending on convention). An infinite subset of the natural numbers cannot be bounded from above. An infinite subset of the integers may be bounded from below or bounded from above, but not both. An infinite subset of the rational numbers may or may not be bounded from below, and may or may not be bounded from above. Every finite subset of a non-empty totally ordered set has both upper and lower bounds. Bounds of functions The definitions can be generalized to functions and even to sets of functions. Given a function with domain and a preordered set as codomain, an element of is an upper bound of if for each in . The upper bound is called sharp if equality holds for at least one value of . 
It indicates that the constraint is optimal, and thus cannot be further reduced without invalidating the inequality. Similarly, a function defined on domain and having the same codomain is an upper bound of , if for each in " https://en.wikipedia.org/wiki/Secure%20element,"A secure element (SE) is a secure operating system (OS) in a tamper-resistant processor chip or secure component. It can protect assets (root of trust, sensitive data, keys, certificates, applications) against high level software and hardware attacks. Applications that process this sensitive data on an SE are isolated and so operate within a controlled environment not impacted by software (including possible malware) found elsewhere on the OS. The hardware and embedded software meet the requirements of the Security IC Platform Protection Profile [PP 0084] including resistance to physical tampering scenarios described within it. More than 96 billion secure elements have been produced and shipped between 2010 and 2021. SEs exist in different form factors; as devices such as smart card, SIM/UICC, smart microSD, or as part of a larger device as an embedded or integrated SE. SEs are an evolution of the traditional chip that was powering smart cards, which have been adapted to suit the needs of numerous use cases, such as smartphones, tablets, set top boxes, wearables, connected cars, and other internet of things (IoT) devices. The technology is widely used by technology firms such as Oracle, Apple and Samsung. SEs provide secure isolation, storage and processing for applications (called applets) they host while being isolated from the external world (e.g. rich OS and application processor when embedded in a smartphone) and from other applications running on the SE. Java Card and MULTOS are the most deployed standardized multi-application operating systems currently used to develop applications running on SE. Since 1999, GlobalPlatform has been the body responsible for standardizing secure element technologies to support a dynamic model of application management in a multi actor model. GlobalPlatform also runs Functional and Security Certification programmes for secure elements, and hosts a list of Functional Certified and Security Certified products. GlobalPlatform t" https://en.wikipedia.org/wiki/Inequation,"In mathematics, an inequation is a statement that an inequality holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Some examples of inequations are: In some cases, the term ""inequation"" can be considered synonymous to the term ""inequality"", while in other cases, an inequation is reserved only for statements whose inequality relation is ""not equal to"" (≠). Chains of inequations A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain is shorthand for which also implies that and . In rare cases, chains without such implications about distant terms are used. For example is shorthand for , which does not imply Similarly, is shorthand for , which does not imply any order of and . Solving inequations Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. 
These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more in general, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions such that, when they are substituted for the unknowns, make the inequations true propositions. Often, an additional objective expression (i.e., an optimization equation) is given, that is to be minimized or maximized by an optimal solution. For example, is a conjunction of inequations, partly written as chains (where can be read as ""and""); the set of its solutions is shown in blue in the picture (the red, green, and orange line corre" https://en.wikipedia.org/wiki/Kernel%20principal%20component%20analysis,"In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space. Background: Linear PCA Recall that conventional PCA operates on zero-centered data; that is, , where is one of the multivariate observations. It operates by diagonalizing the covariance matrix, in other words, it gives an eigendecomposition of the covariance matrix: which can be rewritten as . (See also: Covariance matrix as a linear operator) Introduction of the Kernel to PCA To understand the utility of kernel PCA, particularly for clustering, observe that, while N points cannot, in general, be linearly separated in dimensions, they can almost always be linearly separated in dimensions. That is, given N points, , if we map them to an N-dimensional space with where , it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this creates linearly independent vectors, so there is no covariance on which to perform eigendecomposition explicitly as we would in linear PCA. Instead, in kernel PCA, a non-trivial, arbitrary function is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensional 's if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the -space, which we will call the 'feature space', we can create the N-by-N kernel which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in the -space (see Kernel trick). The N-elements in each column of K represent the dot product of one point of the tr" https://en.wikipedia.org/wiki/Footprinting,"Footprinting (also known as reconnaissance) is the technique used for gathering information about computer systems and the entities they belong to. To get this information, a hacker might use various tools and technologies. This information is very useful to a hacker who is trying to crack a whole system. When used in the computer security lexicon, ""Footprinting"" generally refers to one of the pre-attack phases; tasks performed before doing the actual attack. Some of the tools used for Footprinting are Sam Spade, nslookup, traceroute, Nmap and neotrace. 
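A minimal kernel-PCA sketch along the lines described above: form the N-by-N kernel matrix, center it in feature space, and project onto its leading eigenvectors. The Gaussian kernel, its width, and the toy data are arbitrary choices for illustration; a production implementation would normally use an existing library routine instead.

import numpy as np

def kernel_pca(X, gamma=1.0, n_components=2):
    # Project the rows of X onto the leading kernel principal components.
    # Gaussian (RBF) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2).
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # Center the kernel matrix (zero mean in feature space).
    N = K.shape[0]
    one = np.full((N, N), 1.0 / N)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecomposition; columns of `alphas` are the expansion coefficients.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas            # projections of the training points

X = np.random.default_rng(0).normal(size=(50, 3))
print(kernel_pca(X).shape)        # (50, 2)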
Techniques used DNS queries Network enumeration Network queries Operating system identification Software used Wireshark Uses It allows a hacker to gain information about the target system or network. This information can be used to carry out attacks on the system. That is the reason by which it may be named a Pre-Attack, since all the information is reviewed in order to get a complete and successful resolution of the attack. Footprinting is also used by ethical hackers and penetration testers to find security flaws and vulnerabilities within their own company's network before a malicious hacker does. Types There are two types of Footprinting that can be used: active Footprinting and passive Footprinting. Active Footprinting is the process of using tools and techniques, such as performing a ping sweep or using the traceroute command, to gather information on a target. Active Footprinting can trigger a target's Intrusion Detection System (IDS) and may be logged, and thus requires a level of stealth to successfully do. Passive Footprinting is the process of gathering information on a target by innocuous, or, passive, means. Browsing the target's website, visiting social media profiles of employees, searching for the website on WHOIS, and performing a Google search of the target are all ways of passive Footprinting. Passive Footprinting is the stealthier method since it will not trigger a target's IDS or otherwise " https://en.wikipedia.org/wiki/Data%20processing%20unit,"A data processing unit (DPU) is a programmable computer processor that tightly integrates a general-purpose CPU with network interface hardware. Sometimes they are called ""IPUs"" (for ""infrastructure processing unit"") or ""SmartNICs"". They can be used in place of traditional NICs to relieve the main CPU of complex networking responsibilities and other ""infrastructural"" duties; although their features vary, they may be used to perform encryption/decryption, serve as a firewall, handle TCP/IP, process HTTP requests, or even function as a hypervisor or storage controller. These devices can be attractive to cloud computing providers whose servers might otherwise spend a significant amount of CPU time on these tasks, cutting into the cycles they can provide to guests. See also Compute Express Link (CXL)" https://en.wikipedia.org/wiki/Touch%20%28American%20TV%20series%29,"Touch is an American drama television series that ran on Fox from January 25, 2012, to May 10, 2013. The series was created by Tim Kring and starred Kiefer Sutherland. During its first season the series aired regularly on Thursday nights beginning March 22, 2012. Thirteen episodes were ordered for the first season, with the two-episode season finale airing on Thursday, May 31, 2012. On May 9, 2012, Fox renewed the show for a second season. The second season was originally scheduled to begin Friday, October 26, 2012, but was pushed back to Friday, February 8, 2013. On May 9, 2013, Fox canceled the series after two seasons. Plot Touch centers on former reporter Martin Bohm (Kiefer Sutherland) and his 11-year-old son, Jake (David Mazouz), who has been diagnosed as autistic. Martin's wife died in the World Trade Center during the September 11 attacks, and he has been struggling to raise Jake since then, moving from job to job while tending to Jake's special needs. 
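As a benign illustration of the ""DNS queries"" technique listed under Footprinting above, the standard library alone can retrieve a host's published address records; the hostname is a placeholder, and such lookups should only be run against systems one is authorized to assess.

import socket

host = "example.com"   # placeholder target
name, aliases, addresses = socket.gethostbyname_ex(host)
print("canonical name:", name)
print("aliases       :", aliases)
print("IPv4 addresses:", addresses)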
Jake has never spoken a word, but is fascinated by numbers and patterns relating to numbers, spending much of his days writing them down in notebooks or his touch-screen tablet and sometimes using objects (for instance popcorn kernels). Season 1 Jake's repeated escapes from special schools put Martin's capacity to raise the child in question, and social worker Clea Hopkins (Gugu Mbatha-Raw) arrives to perform an evaluation of Jake's living conditions. Martin, worried that he might lose his son, attempts to communicate with him, but the boy only continues to write down a specific pattern of numbers. This leads Martin to discover Professor Arthur Teller (Danny Glover), who has seen and worked with cases like this before, claiming that Jake is one of the few who can see the ""pain of the universe"" through the numbers. Teller also alludes to the interconnectivity of humanity as envisioned by the Chinese legend of the red string of fate, whereby actions, seen and unseen, can change the fate of people across the " https://en.wikipedia.org/wiki/List%20of%20numerical%20computational%20geometry%20topics,"List of numerical computational geometry topics enumerates the topics of computational geometry that deals with geometric objects as continuous entities and applies methods and algorithms of nature characteristic to numerical analysis. This area is also called ""machine geometry"", computer-aided geometric design, and geometric modelling. See List of combinatorial computational geometry topics for another flavor of computational geometry that states problems in terms of geometric objects as discrete entities and hence the methods of their solution are mostly theories and algorithms of combinatorial character. Curves In the list of curves topics, the following ones are fundamental to geometric modelling. Parametric curve Bézier curve Spline Hermite spline Beta spline B-spline Higher-order spline NURBS Contour line Surfaces Bézier surface Isosurface Parametric surface Other Level-set method Computational topology Mathematics-related lists Geometric algorithms Geometry" https://en.wikipedia.org/wiki/Elliptic%20surface,"In mathematics, an elliptic surface is a surface that has an elliptic fibration, in other words a proper morphism with connected fibers to an algebraic curve such that almost all fibers are smooth curves of genus 1. (Over an algebraically closed field such as the complex numbers, these fibers are elliptic curves, perhaps without a chosen origin.) This is equivalent to the generic fiber being a smooth curve of genus one. This follows from proper base change. The surface and the base curve are assumed to be non-singular (complex manifolds or regular schemes, depending on the context). The fibers that are not elliptic curves are called the singular fibers and were classified by Kunihiko Kodaira. Both elliptic and singular fibers are important in string theory, especially in F-theory. Elliptic surfaces form a large class of surfaces that contains many of the interesting examples of surfaces, and are relatively well understood in the theories of complex manifolds and smooth 4-manifolds. They are similar to (have analogies with, that is), elliptic curves over number fields. Examples The product of any elliptic curve with any curve is an elliptic surface (with no singular fibers). All surfaces of Kodaira dimension 1 are elliptic surfaces. Every complex Enriques surface is elliptic, and has an elliptic fibration over the projective line. 
Kodaira surfaces Dolgachev surfaces Shioda modular surfaces Kodaira's table of singular fibers Most of the fibers of an elliptic fibration are (non-singular) elliptic curves. The remaining fibers are called singular fibers: there are a finite number of them, and each one consists of a union of rational curves, possibly with singularities or non-zero multiplicities (so the fibers may be non-reduced schemes). Kodaira and Néron independently classified the possible fibers, and Tate's algorithm can be used to find the type of the fibers of an elliptic curve over a number field. The following table lists the possible fibers of a minimal el" https://en.wikipedia.org/wiki/Reverberation%20mapping,"Reverberation mapping (or Echo mapping) is an astrophysical technique for measuring the structure of the broad-line region (BLR) around a supermassive black hole at the center of an active galaxy, and thus estimating the hole's mass. It is considered a ""primary"" mass estimation technique, i.e., the mass is measured directly from the motion that its gravitational force induces in the nearby gas. Newton's law of gravity defines a direct relation between the mass of a central object and the speed of a smaller object in orbit around the central mass. Thus, for matter orbiting a black hole, the black-hole mass is related by the formula M = f · RBLR · (ΔV)² / G to the RMS velocity ΔV of gas moving near the black hole in the broad emission-line region, measured from the Doppler broadening of the gaseous emission lines. In this formula, RBLR is the radius of the broad-line region; G is the constant of gravitation; and f is a poorly known ""form factor"" that depends on the shape of the BLR. While ΔV can be measured directly using spectroscopy, the necessary determination of RBLR is much less straightforward. This is where reverberation mapping comes into play. It utilizes the fact that the emission-line fluxes vary strongly in response to changes in the continuum, i.e., the light from the accretion disk near the black hole. Put simply, if the brightness of the accretion disk varies, the emission lines, which are excited in response to the accretion disk's light, will ""reverberate"", that is, vary in response. But it will take some time for light from the accretion disk to reach the broad-line region. Thus, the emission-line response is delayed with respect to changes in the continuum. Assuming that this delay is solely due to light travel times, the distance traveled by the light, corresponding to the radius of the broad emission-line region, can be measured. Only a small handful (less than 40) of active galactic nuclei have been accurately ""mapped"" in this way. 
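Plugging representative numbers into the virial relation above, M = f · RBLR · ΔV² / G, gives a feel for the scales involved; the lag, line width and form factor below are placeholder values chosen only for illustration.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_DAY = 2.59e13    # metres travelled by light in one day

def virial_mass(r_blr_light_days, delta_v_km_s, f=1.0):
    # Black-hole mass estimate M = f * R_BLR * dV^2 / G, in solar masses.
    r = r_blr_light_days * LIGHT_DAY   # time delay -> radius in metres
    dv = delta_v_km_s * 1e3            # km/s -> m/s
    return f * r * dv**2 / G / M_SUN

# e.g. a 20 light-day lag and 3000 km/s RMS line width (placeholder numbers)
print(f"{virial_mass(20, 3000):.2e} M_sun")   # ~3.5e+07 solar masses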
An alternative approach is to use " https://en.wikipedia.org/wiki/Lua,"Lua or LUA may refer to: Science and technology Lua (programming language) Latvia University of Agriculture Last universal ancestor, in evolution Ethnicity and language Lua people, of Laos Lawa people, of Thailand sometimes referred to as Lua Lua language (disambiguation), several languages (including Lua’) Luba-Kasai language, ISO 639 code Lai (surname) (賴), Chinese, sometimes romanised as Lua Places Tenzing-Hillary Airport (IATA code), in Lukla, Nepal One of the Duff Islands People Lua (goddess), a Roman goddess Saint Lua (died c 609) Lua Blanco (born 1987), Brazilian actress and singer Lua Getsinger (1871–1916) A member of Weki Meki band Other uses Lua (martial art), of Hawaii ""Lua"" (song), by Bright Eyes" https://en.wikipedia.org/wiki/Financial%20signal%20processing,"Financial signal processing is a branch of signal processing technologies which applies to signals within financial markets. They are often used by quantitative analysts to make best estimation of the movement of financial markets, such as stock prices, options prices, or other types of derivatives. History The modern start of financial signal processing is often credited to Claude Shannon. Shannon was the inventor of modern communication theory. He discovered the capacity of a communication channel by analyzing entropy of information. For a long time, financial signal processing technologies have been used by different hedge funds, such as Jim Simon's Renaissance Technologies. However, hedge funds usually do not reveal their trade secrets. Some early research results in this area are summarized by R.H. Tütüncü and M. Koenig and by T.M. Cover, J.A. Thomas. A.N. Akansu and M.U. Torun published the book in financial signal processing entitled A Primer for Financial Engineering: Financial Signal Processing and Electronic Trading. An edited volume on the subject with the title Financial Signal Processing and Machine Learning was also published. The first IEEE International Conference on Acoustics, Speech, and Signal Processing session on Financial Signal Processing was organized at ICASSP 2011 in Prague, Czech Republic. There were two special issues of IEEE Journal of Selected Topics in Signal Processing published on Signal Processing Methods in Finance and Electronic Trading in 2012, and on Financial Signal Processing and Machine Learning for Electronic Trading in 2016 in addition to the special section on Signal Processing for Financial Applications in IEEE Signal Processing Magazine appeared in 2011. Financial Signal Processing in Academia Recently, a new research group in Imperial College London has been formed which focuses on Financial Signal Processing as part of the Communication and Signal Processing Group of the Electrical and Electronic Engineering depa" https://en.wikipedia.org/wiki/Network%20domain,"A network domain is an administrative grouping of multiple private computer networks or local hosts within the same infrastructure. Domains can be identified using a domain name; domains which need to be accessible from the public Internet can be assigned a globally unique name within the Domain Name System (DNS). A domain controller is a server that automates the logins, user groups, and architecture of a domain, rather than manually coding this information on each host in the domain. It is common practice, but not required, to have the domain controller act as a DNS server. 
That is, it would assign names to hosts in the network based on their IP addresses. Example Half of the staff of Building A uses Network 1, . This network has the VLAN identifier of VLAN 10. The other half of the staff of Building A uses Network 2, . This network has the VLAN identifier of VLAN 20. All of the staff of Building B uses Network 3, . This has the VLAN identifier of VLAN 11. The router R1 serves as the gateway for all three networks, and the whole infrastructure is connected physically via ethernet. Network 2 and 3 are routed through R1 and have full access to each other. Network 1 is completely separate from the other two, and does not have access to either of them. Network 2 and 3 are therefore in the same network domain, while Network 1 is in its own network domain, albeit alone. A network administrator can then suitably name these network domains to match the infrastructure topology. Usage Use of the term network domain first appeared in 1965 and saw increasing usage beginning in 1985. It initially applied to the naming of radio stations based on broadcast frequency and geographic area. It entered its current usage by network theorists to describe solutions to the problems of subdividing a single homogeneous LAN and joining multiple networks, possibly constituted of different network architectures." https://en.wikipedia.org/wiki/SREC%20%28file%20format%29,"Motorola S-record is a file format, created by Motorola in the mid-1970s, that conveys binary information as hex values in ASCII text form. This file format may also be known as SRECORD, SREC, S19, S28, S37. It is commonly used for programming flash memory in microcontrollers, EPROMs, EEPROMs, and other types of programmable logic devices. In a typical application, a compiler or assembler converts a program's source code (such as C or assembly language) to machine code and outputs it into a HEX file. The HEX file is then imported by a programmer to ""burn"" the machine code into non-volatile memory, or is transferred to the target system for loading and execution. Overview History The S-record format was created in the mid-1970s for the Motorola 6800 processor. Software development tools for that and other embedded processors would make executable code and data in the S-record format. PROM programmers would then read the S-record format and ""burn"" the data into the PROMs or EPROMs used in the embedded system. Other hex formats There are other ASCII encoding with a similar purpose. BPNF, BHLF, and B10F were early binary formats, but they are neither compact nor flexible. Hexadecimal formats are more compact because they represent 4 bits rather than 1 bit per character. Many, such as S-record, are more flexible because they include address information so they can specify just a portion of a PROM. Intel HEX format was often used with Intel processors. TekHex is another hex format that can include a symbol table for debugging. Format Record structure An SREC format file consists of a series of ASCII text records. The records have the following structure from left to right: Record start - each record begins with an uppercase letter ""S"" character (ASCII 0x53) which stands for ""Start-of-Record"". Record type - single numeric digit ""0"" to ""9"" character (ASCII 0x30 to 0x39), defining the type of record. See table below. 
Byte count - two hex digits (""00"" to ""FF""), ind" https://en.wikipedia.org/wiki/List%20of%20probabilistic%20proofs%20of%20non-probabilistic%20theorems,"Probability theory routinely uses results from other fields of mathematics (mostly, analysis). The opposite cases, collected below, are relatively rare; however, probability theory is used systematically in combinatorics via the probabilistic method. They are particularly used for non-constructive proofs. Analysis Normal numbers exist. Moreover, computable normal numbers exist. These non-probabilistic existence theorems follow from probabilistic results: (a) a number chosen at random (uniformly on (0,1)) is normal almost surely (which follows easily from the strong law of large numbers); (b) some probabilistic inequalities behind the strong law. The existence of a normal number follows from (a) immediately. The proof of the existence of computable normal numbers, based on (b), involves additional arguments. All known proofs use probabilistic arguments. Dvoretzky's theorem which states that high-dimensional convex bodies have ball-like slices is proved probabilistically. No deterministic construction is known, even for many specific bodies. The diameter of the Banach–Mazur compactum was calculated using a probabilistic construction. No deterministic construction is known. The original proof that the Hausdorff–Young inequality cannot be extended to is probabilistic. The proof of the de Leeuw–Kahane–Katznelson theorem (which is a stronger claim) is partially probabilistic. The first construction of a Salem set was probabilistic. Only in 1981 did Kaufman give a deterministic construction. Every continuous function on a compact interval can be uniformly approximated by polynomials, which is the Weierstrass approximation theorem. A probabilistic proof uses the weak law of large numbers. Non-probabilistic proofs were available earlier. Existence of a nowhere differentiable continuous function follows easily from properties of Wiener process. A non-probabilistic proof was available earlier. Stirling's formula was first discovered by Abraham de Moivre in his `The D" https://en.wikipedia.org/wiki/Biosignal,"A biosignal is any signal in living beings that can be continually measured and monitored. The term biosignal is often used to refer to bioelectrical signals, but it may refer to both electrical and non-electrical signals. The usual understanding is to refer only to time-varying signals, although spatial parameter variations (e.g. the nucleotide sequence determining the genetic code) are sometimes subsumed as well. Electrical biosignals Electrical biosignals, or bioelectrical time signals, usually refers to the change in electric current produced by the sum of an electrical potential difference across a specialized tissue, organ or cell system like the nervous system. Thus, among the best-known bioelectrical signals are: Electroencephalogram (EEG) Electrocardiogram (ECG) Electromyogram (EMG) Electrooculogram (EOG) Electroretinogram (ERG) Electrogastrogram (EGG) Galvanic skin response (GSR) or electrodermal activity (EDA) EEG, ECG, EOG and EMG are measured with a differential amplifier which registers the difference between two electrodes attached to the skin. However, the galvanic skin response measures electrical resistance and the Magnetoencephalography (MEG) measures the magnetic field induced by electrical currents (electroencephalogram) of the brain. 
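A small sketch of the S-record structure described above: it builds an S1 record (16-bit address), then parses it back and verifies the checksum, computed here as the ones' complement of the least significant byte of the sum of the count, address and data bytes; the address and payload are arbitrary.

def make_s1(address, data):
    # Build an S1 record: byte count, 16-bit address, data bytes, checksum.
    body = bytes([len(data) + 3]) + address.to_bytes(2, "big") + bytes(data)
    checksum = 0xFF - (sum(body) & 0xFF)   # ones' complement of the byte sum
    return "S1" + (body + bytes([checksum])).hex().upper()

def parse_srec(record):
    # Return (type, address, data) after verifying the checksum.
    rtype, payload = record[:2], bytes.fromhex(record[2:])
    if 0xFF - (sum(payload[:-1]) & 0xFF) != payload[-1]:
        raise ValueError("bad checksum")
    count = payload[0]                     # bytes following the count field
    return rtype, int.from_bytes(payload[1:3], "big"), payload[3:count]

rec = make_s1(0x1000, b"HELLO")
print(rec)
print(parse_srec(rec))   # ('S1', 4096, b'HELLO')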
With the development of methods for remote measurement of electric fields using new sensor technology, electric biosignals such as EEG and ECG can be measured without electric contact with the skin. This can be applied, for example, for remote monitoring of brain waves and heart beat of patients who must not be touched, in particular patients with serious burns. Electrical currents and changes in electrical resistances across tissues can also be measured from plants. Biosignals may also refer to any non-electrical signal that is capable of being monitored from biological beings, such as mechanical signals (e.g. the mechanomyogram or MMG), acoustic signals (e.g. phonetic and non-phonetic utterances, bre" https://en.wikipedia.org/wiki/Ptolemaic%20graph,"In graph theory, a Ptolemaic graph is an undirected graph whose shortest path distances obey Ptolemy's inequality, which in turn was named after the Greek astronomer and mathematician Ptolemy. The Ptolemaic graphs are exactly the graphs that are both chordal and distance-hereditary; they include the block graphs and are a subclass of the perfect graphs. Characterization A graph is Ptolemaic if and only if it obeys any of the following equivalent conditions: The shortest path distances obey Ptolemy's inequality: for every four vertices , , , and , the inequality holds. For instance, the gem graph (3-fan) in the illustration is not Ptolemaic, because in this graph , greater than . For every two overlapping maximal cliques, the intersection of the two cliques is a separator that splits the differences of the two cliques. In the illustration of the gem graph, this is not true: cliques and are not separated by their intersection, , because there is an edge that connects the cliques but avoids the intersection. Every -vertex cycle has at least diagonals. The graph is both chordal (every cycle of length greater than three has a diagonal) and distance-hereditary (every connected induced subgraph has the same distances as the whole graph). The gem shown is chordal but not distance-hereditary: in the subgraph induced by , the distance from to is 3, greater than the distance between the same vertices in the whole graph. Because both chordal and distance-hereditary graphs are perfect graphs, so are the Ptolemaic graphs. The graph is chordal and does not contain an induced gem, a graph formed by adding two non-crossing diagonals to a pentagon. The graph is distance-hereditary and does not contain an induced 4-cycle. The graph can be constructed from a single vertex by a sequence of operations that add a new degree-one (pendant) vertex, or duplicate (twin) an existing vertex, with the exception that a twin operation in which the new duplicate vertex is not adjacent to its" https://en.wikipedia.org/wiki/Hardware%20compatibility%20list,"A hardware compatibility list (HCL) is a list of computer hardware (typically including many types of peripheral devices) that is compatible with a particular operating system or device management software. The list contains both whole computer systems and specific hardware elements including motherboards, sound cards, and video cards. In today's world, there is a vast amount of computer hardware in circulation, and many operating systems too. A hardware compatibility list is a database of hardware models and their compatibility with a certain operating system. HCLs can be centrally controlled (one person or team keeps the list of hardware maintained) or user-driven (users submit reviews on hardware they have used). 
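As a concrete companion to the Ptolemy's-inequality characterization above, the following Python sketch checks whether a graph's shortest-path metric is Ptolemaic by testing every quadruple of vertices; the adjacency-dict encoding of the gem graph (3-fan) is an assumption made for illustration.

```python
# Sketch: test Ptolemy's inequality on a graph's shortest-path distances.
from itertools import combinations
from collections import deque

def bfs_distances(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_ptolemaic_metric(adj):
    d = {v: bfs_distances(adj, v) for v in adj}
    for a, b, c, x in combinations(adj, 4):
        # the three products of "opposite" pairs in the quadruple
        p = sorted([d[a][b] * d[c][x], d[a][c] * d[b][x], d[a][x] * d[b][c]])
        if p[2] > p[0] + p[1]:    # the largest product must not exceed the sum of the other two
            return False
    return True

# Gem graph (3-fan): a path a-b-c-d plus a hub h adjacent to all four path vertices.
gem = {"a": {"b", "h"}, "b": {"a", "c", "h"}, "c": {"b", "d", "h"},
       "d": {"c", "h"}, "h": {"a", "b", "c", "d"}}
print(is_ptolemaic_metric(gem))   # False: the gem is not Ptolemaic
```

For the quadruple of the four path vertices of the gem, the largest of the three products is 2·2 = 4 while the other two sum to 1·1 + 2·1 = 3, so the inequality fails, matching the statement above.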
There are many HCLs. Usually, each operating system will have an official HCL on its website. See also System requirements" https://en.wikipedia.org/wiki/X2%20transceiver,"The X2 transceiver format is a 10 gigabit per second modular fiber optic interface intended for use in routers, switches and optical transport platforms. It is an early generation 10 gigabit interface related to the similar XENPAK and XPAK formats. X2 may be used with 10 Gigabit Ethernet or OC-192/STM-64 speed SDH/SONET equipment. X2 modules are smaller and consume less power than first generation XENPAK modules, but are larger and consume more energy than the newer XFP and SFP+ standards. As of 2016, this format is relatively uncommon and has been replaced by 10Gbit/s SFP+ in most new equipment." https://en.wikipedia.org/wiki/Common%20spatial%20pattern,"Common spatial pattern (CSP) is a mathematical procedure used in signal processing for separating a multivariate signal into additive subcomponents which have maximum differences in variance between two windows. Details Let of size and of size be two windows of a multivariate signal, where is the number of signals and and are the respective number of samples. The CSP algorithm determines the component such that the ratio of variance (or second-order moment) is maximized between the two windows: The solution is given by computing the two covariance matrices: Then, the simultaneous diagonalization of those two matrices (also called generalized eigenvalue decomposition) is realized. We find the matrix of eigenvectors and the diagonal matrix of eigenvalues sorted by decreasing order such that: and with the identity matrix. This is equivalent to the eigendecomposition of : will correspond to the first column of : Discussion Relation between variance ratio and eigenvalue The eigenvectors composing are components with variance ratio between the two windows equal to their corresponding eigenvalue: Other components The vectorial subspace generated by the first eigenvectors will be the subspace maximizing the variance ratio of all components belonging to it: In the same way, the vectorial subspace generated by the last eigenvectors will be the subspace minimizing the variance ratio of all components belonging to it: Variance or second-order moment CSP can be applied after a mean subtraction (a.k.a. ""mean centering"") on signals in order to realize a variance ratio optimization. Otherwise, CSP optimizes the ratio of second-order moments. Choice of windows X1 and X2 The standard use consists of choosing the windows to correspond to two periods of time with different activation of sources (e.g. during rest and during a specific task). It is also possible to choose the two windows to correspond to two different frequency bands in order t
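A minimal NumPy/SciPy sketch of the CSP procedure described above: estimate the two window covariance matrices and jointly diagonalize them with a generalized eigendecomposition. The channels-by-samples layout and the simulated data are illustrative assumptions; np.cov mean-centers the signals, which corresponds to the variance (rather than second-order moment) variant.

```python
import numpy as np
from scipy.linalg import eigh

def csp(X1, X2):
    """X1, X2: arrays of shape (n_channels, n_samples_i). Returns spatial filters
    sorted by decreasing variance ratio between window 1 and window 2."""
    C1 = np.cov(X1)            # covariance of window 1
    C2 = np.cov(X2)            # covariance of window 2
    # Solve C1 w = lambda * C2 w; eigenvectors satisfy W.T @ C2 @ W = I,
    # so each eigenvalue equals the variance ratio of its component.
    eigvals, W = eigh(C1, C2)
    order = np.argsort(eigvals)[::-1]            # sort by decreasing ratio
    return W[:, order], eigvals[order]

rng = np.random.default_rng(0)
X1 = rng.normal(size=(4, 1000))                                  # window 1 (e.g. task)
X2 = rng.normal(size=(4, 1200)) * np.array([[1.0], [1.0], [1.0], [3.0]])  # window 2, one noisier channel
W, ratios = csp(X1, X2)
print(ratios)   # the first filter maximizes var(w.T X1)/var(w.T X2), the last minimizes it
```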
Complex random vectors If and are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of and is defined by where denotes Hermitian transposition. Uncorrelatedness Two random vectors and are called uncorrelated if They are uncorrelated if and only if their cross-covariance matrix is zero. In the case of two complex random vectors and they are called uncorrelated if and Properties Relation to the cross-covariance matrix The cross-correlation is related to the cross-covariance matrix as follows: Respectively for complex random vectors: See also Autocorrelation Correlation does not imply causation Covariance function Pearson product-moment correlation coefficient Correlation function (astronomy) Correlation function (statistical mechanics) Correlation function (quantum field theory) Mutual information Rate distortion theory Radial distribution function" https://en.wikipedia.org/wiki/Information-centric%20networking,"Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information (or content or data). Some of the application areas of ICN are in web applications, multimedia streaming, the Internet of Things, Wireless Sensor Networks and Vehicular networks and emerging applications such as social networks, Industrial IoTs. In this paradigm, connectivity may well be intermittent, end-host and in-network storage can be capitalized upon transparently, as bits in the network and on data storage devices have exactly the same value, mobility and multi access are the norm and anycast, multicast, and broadcast are natively supported. Data becomes independent from location, application, storage, and means of transportation, enabling in-network caching and replication. The expected benefits are improved efficiency, better scalability with respect to information/bandwidth demand and better robustness in challenging communication scenarios. In information-centric networking the cache is a network level solution, and it has rapidly changing cache states, higher request arrival rates and smaller cache sizes. In particular, information-centric networking caching policies should be fast and lightweight. IRTF Working Group The Internet Research Task Force (IRTF) is sponsoring a research group on Information-Centric Networking Research, which serves as a forum for the exchange and analysis of ICN research ideas and proposals. Current and future work items and outputs are managed on the ICNRG wiki." https://en.wikipedia.org/wiki/List%20of%20impossible%20puzzles,"This is a list of puzzles that cannot be solved. An impossible puzzle is a puzzle that cannot be resolved, either due to lack of sufficient information, or any number of logical impossibilities. 15 puzzle – Slide fifteen numbered tiles into numerical order. Impossible for half of the starting positions. Five room puzzle – Cross each wall of a diagram exactly once with a continuous line. MU puzzle – Transform the string to according to a set of rules. Mutilated chessboard problem – Place 31 dominoes of size 2×1 on a chessboard with two opposite corners removed. Coloring the edges of the Petersen graph with three colors. Seven Bridges of Königsberg – Walk through a city while crossing each of seven bridges exactly once.
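For the real-valued case defined earlier, the cross-correlation matrix R_XY with entries E[X_i Y_j] can be estimated by averaging outer products over independent trials; the dimensions and simulated data below are illustrative assumptions.

```python
# Sketch: sample estimate of the cross-correlation matrix R_XY = E[X Y^T].
import numpy as np

rng = np.random.default_rng(1)
m, n, trials = 3, 2, 100_000
X = rng.normal(size=(trials, m))
Y = np.column_stack([X[:, 0] + 0.1 * rng.normal(size=trials),   # correlated with X[0]
                     rng.normal(size=trials)])                  # independent of X

# R_XY[i, j] = E[X_i * Y_j]; estimated by averaging outer products over trials.
R_xy = X.T @ Y / trials          # shape (m, n)
print(R_xy.round(2))             # entry (0, 0) is near 1, the other entries near 0
```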
Three cups problem – Turn three cups right-side up after starting with one wrong and turning two at a time. Three utilities problem – Connect three cottages to gas, water, and electricity without crossing lines. Thirty-six officers problem – Arrange six regiments consisting of six officers each of different ranks in a 6 × 6 square so that no rank or regiment is repeated in any row or column. See also Impossible Puzzle, or ""Sum and Product Puzzle"", which is not impossible -gry, a word puzzle List of undecidable problems, no algorithm can exist to answer a yes–no question about the input Puzzles Mathematics-related lists" https://en.wikipedia.org/wiki/Wigner%20quasiprobability%20distribution,"The Wigner quasiprobability distribution (also called the Wigner function or the Wigner–Ville distribution, after Eugene Wigner and Jean-André Ville) is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space. It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction . Thus, it maps on the quantum density matrix in the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, in a context related to representation theory in mathematics (see Weyl quantization). In effect, it is the Wigner–Weyl transform of the density matrix, so the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal, effectively a spectrogram. In 1949, José Enrique Moyal, who had derived it independently, recognized it as the quantum moment-generating functional, and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (see Phase-space formulation). It has applications in statistical mechanics, quantum chemistry, quantum optics, classical optics and signal analysis in diverse fields, such as electrical engineering, seismology, time–frequency analysis for music signals, spectrograms in biology and speech processing, and engine design. Relation to classical mechanics A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails for a quantum p" https://en.wikipedia.org/wiki/Sophomore%27s%20dream,"In mathematics, the sophomore's dream is the pair of identities (especially the first) discovered in 1697 by Johann Bernoulli. The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively. The name ""sophomore's dream"" is in contrast to the name ""freshman's dream"" which is given to the incorrect identity The sophomore's dream has a similar too-good-to-be-true feel, but is true. Proof The proofs of the two identities are completely analogous, so only the proof of the second is presented here. The key ingredients of the proof are: to write (using the notation for the natural logarithm and for the exponential function); to expand using the power series for ; and to integrate termwise, using integration by substitution. 
In details, can be expanded as Therefore, By uniform convergence of the power series, one may interchange summation and integration to yield To evaluate the above integrals, one may change the variable in the integral via the substitution With this substitution, the bounds of integration are transformed to giving the identity By Euler's integral identity for the Gamma function, one has so that Summing these (and changing indexing so it starts at instead of ) yields the formula. Historical proof The original proof, given in Bernoulli, and presented in modernized form in Dunham, differs from the one above in how the termwise integral is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms. The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing" https://en.wikipedia.org/wiki/Index%20of%20software%20engineering%20articles,"This is an alphabetical list of articles pertaining specifically to software engineering. 0–9 2D computer graphics — 3D computer graphics A Abstract syntax tree — Abstraction — Accounting software — Ada — Addressing mode — Agile software development — Algorithm — Anti-pattern — Application framework — Application software — Artificial intelligence — Artificial neural network — ASCII — Aspect-oriented programming — Assembler — Assembly language — Assertion — Automata theory — Automotive software — Avionics software B Backward compatibility — BASIC — BCPL — Berkeley Software Distribution — Beta test — Boolean logic — Business software C C — C++ — C# — CAD — Canonical model — Capability Maturity Model — Capability Maturity Model Integration — COBOL — Code coverage — Cohesion — Compilers — Complexity — Computation — Computational complexity theory — Computer — Computer-aided design — Computer-aided manufacturing — Computer architecture — Computer bug — Computer file — Computer graphics — Computer model — Computer multitasking — Computer programming — Computer science — Computer software — Computer term etymologies — Concurrent programming — Configuration management — Coupling — Cyclomatic complexity D Data structure — Data-structured language — Database — Dead code — Decision table — Declarative programming — Design pattern — Development stage — Device driver — Disassembler — Disk image — Domain-specific language E EEPROM — Electronic design automation — Embedded system — Engineering — Engineering model — EPROM — Even-odd rule — Expert system — Extreme programming F FIFO (computing and electronics) — File system — Filename extension — Finite-state machine — Firmware — Formal methods — Forth — Fortran — Forward compatibility — Functional decomposition — Functional design — Functional programming G Game development — Game programming — Game tester — GIMP Toolkit — Graphical user interface H Hierarchical database — High-level language — Hoare logic — Human–compute" https://en.wikipedia.org/wiki/Surface%20stress,"Surface stress was first defined by Josiah Willard Gibbs (1839-1903) as the amount of the reversible work per unit area needed to elastically stretch a pre-existing surface. 
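A quick numerical cross-check of the two sophomore's dream identities discussed above (not part of the original article), comparing quadrature estimates of the integrals over (0, 1] with partial sums of the corresponding series:

```python
# Numerical check of the sophomore's dream identities: the integrals of x^(-x)
# and x^x over (0, 1) against their series forms. quad is used purely for verification.
from scipy.integrate import quad

series_neg = sum(n ** -n for n in range(1, 60))                     # sum of n^(-n)
series_pos = sum((-1) ** (n + 1) * n ** -n for n in range(1, 60))   # alternating sum
int_neg, _ = quad(lambda x: x ** -x, 0, 1)
int_pos, _ = quad(lambda x: x ** x, 0, 1)

print(int_neg, series_neg)   # both approximately 1.291285997...
print(int_pos, series_pos)   # both approximately 0.7834305107...
```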
That is, surface stress is associated with the amount of reversible work per unit area needed to elastically stretch a pre-existing surface. A similar term called ""surface free energy"", which represents the excess free energy per unit area needed to create a new surface, is easily confused with ""surface stress"". Although surface stress and surface free energy of liquid–gas or liquid–liquid interfaces are the same, they are very different for solid–gas or solid–solid interfaces, which will be discussed in detail later. Since both terms represent a force per unit length, they have been referred to as ""surface tension"", which contributes further to the confusion in the literature. Thermodynamics of surface stress Surface free energy is defined as the amount of reversible work performed to create a new area of surface, expressed as: Gibbs was the first to define another surface quantity, different from the surface tension , that is associated with the reversible work per unit area needed to elastically stretch a pre-existing surface. Surface stress can be derived from surface free energy as follows: One can define a surface stress tensor that relates the work associated with the variation in , the total excess free energy of the surface, owing to the strain : Now consider the two reversible paths shown in figure 0. In the first path (clockwise), the solid object is cut into two identical pieces. Then both pieces are elastically strained. The work associated with the first step (unstrained) is , where and are the excess free energy and area of each of the new surfaces. For the second step, the work () equals the work needed to elastically deform the total bulk volume and the four (two original and two newly formed) surfaces. In the second path (counter" https://en.wikipedia.org/wiki/Potentiometric%20surface,"A potentiometric surface is the imaginary plane where a given reservoir of fluid will ""equalize out to"" if allowed to flow. A potentiometric surface is based on hydraulic principles. For example, two connected storage tanks with one full and one empty will gradually fill/drain to the same level. This is because of atmospheric pressure and gravity. This idea is heavily used in city water supplies - a tall water tower containing the water supply has a great enough potentiometric surface to provide flowing water at a decent pressure to the houses it supplies. For groundwater, ""potentiometric surface"" is a synonym of ""piezometric surface"", which is an imaginary surface that defines the level to which water in a confined aquifer would rise were it completely pierced with wells. If the potentiometric surface lies above the ground surface, a flowing artesian well results. Contour maps and profiles of the potentiometric surface can be prepared from the well data. See also Hydraulic head" https://en.wikipedia.org/wiki/Low-pass%20filter,"A low-pass filter is a filter that passes signals with a frequency lower than a selected cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter. In optics, high-pass and low-pass may have different meanings, depending on whether referring to the frequency or wavelength of light, since these variables are inversely related.
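For reference, the relation between surface stress and surface free energy alluded to above is commonly written as the Shuttleworth equation; the symbols below (surface stress tensor f_ij, surface free energy γ, elastic strain tensor ε_ij) follow the usual convention and are supplied here rather than taken from this text.

```latex
% Shuttleworth relation (standard form, supplied for reference)
f_{ij} = \gamma\,\delta_{ij} + \frac{\partial \gamma}{\partial \varepsilon_{ij}}, \qquad i, j \in \{1, 2\}.
```

For a liquid interface, γ does not change under strain, so the derivative term vanishes and the surface stress reduces to the (isotropic) surface free energy, consistent with the statement above that the two quantities coincide for liquid–gas and liquid–liquid interfaces.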
High-pass frequency filters would act as low-pass wavelength filters, and vice versa. For this reason, it is a good practice to refer to wavelength filters as short-pass and long-pass to avoid confusion, which would correspond to high-pass and low-pass frequencies. Low-pass filters exist in many different forms, including electronic circuits such as a hiss filter used in audio, anti-aliasing filters for conditioning signals before analog-to-digital conversion, digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The moving average operation used in fields such as finance is a particular kind of low-pass filter and can be analyzed with the same signal processing techniques as are used for other low-pass filters. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend. Filter designers will often use the low-pass form as a prototype filter. That is a filter with unity bandwidth and impedance. The desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform (that is, low-pass, high-pass, band-pass or band-stop). Examples Examples of low-pass filters occur in acoustics, optics and electronics. A stiff physical barrier tends to reflect higher sound frequencies, acting as an acoustic low-pass filter for" https://en.wikipedia.org/wiki/Time-invariant%20system,"In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends only indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a ""time-varying system"". Mathematically speaking, ""time-invariance"" of a system is the following property: Given a system with a time-dependent output function , and a time-dependent input function , the system will be considered time-invariant if a time-delay on the input directly equates to a time-delay of the output function. For example, if time is ""elapsed time"", then ""time-invariance"" implies that the relationship between the input function and the output function is constant with respect to time In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output. In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay. If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems. 
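Two simple digital examples in the spirit of the low-pass filters mentioned above: a moving average and a first-order filter modeled on an RC circuit. The cutoff frequency, sample rate, and test signal are arbitrary illustrative choices.

```python
import numpy as np

def moving_average(x, width):
    # Averaging over a window suppresses fluctuations faster than ~width samples.
    return np.convolve(x, np.ones(width) / width, mode="same")

def rc_lowpass(x, cutoff_hz, fs_hz):
    # Discretized first-order RC low-pass: y[n] = a*x[n] + (1-a)*y[n-1].
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    a = dt / (rc + dt)
    y = np.zeros_like(x)
    y[0] = a * x[0]
    for n in range(1, len(x)):
        y[n] = a * x[n] + (1 - a) * y[n - 1]
    return y

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 150 * t)   # slow trend plus "hiss"
smooth = rc_lowpass(signal, cutoff_hz=20.0, fs_hz=fs)   # keeps the 5 Hz component, attenuates 150 Hz
```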
Simple example To demonstrate how to determine if a syst" https://en.wikipedia.org/wiki/Superscalar%20processor,"A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit. While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases. The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU): Instructions are issued from a sequential instruction stream The CPU dynamically checks for data dependencies between instructions at run time (versus software checking at compile time) The CPU can execute multiple instructions per clock cycle History Seymour Cray's CDC 6600 from 1964 is often mentioned as the first superscalar design. The 1967 IBM System/360 Model 91 was another superscalar mainframe. The Intel i960CA (1989), the AMD 29000-series 29050 (1990), and the Motorola MC88110 (1991), microprocessors were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be us" https://en.wikipedia.org/wiki/Log%20Gabor%20filter,"In signal processing it is useful to simultaneously analyze the space and frequency characteristics of a signal. While the Fourier transform gives the frequency information of the signal, it is not localized. This means that we cannot determine which part of a (perhaps long) signal produced a particular frequency. It is possible to use a short time Fourier transform for this purpose, however the short time Fourier transform limits the basis functions to be sinusoidal. To provide a more flexible space-frequency signal decomposition several filters (including wavelets) have been proposed. The Log-Gabor filter is one such filter that is an improvement upon the original Gabor filter. The advantage of this filter over the many alternatives is that it better fits the statistics of natural images compared with Gabor filters and other wavelet filters. Applications The Log-Gabor filter is able to describe a signal in terms of the local frequency responses. Because this is a fundamental signal analysis technique, it has many applications in signal processing. Indeed, any application that uses Gabor filters, or other wavelet basis functions may benefit from the Log-Gabor filter. However, there may not be any benefit depending on the particulars of the design problem. 
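The delay-commutation property described above can be checked directly: delay the input and then apply the system, and compare with applying the system and then delaying the output. The two toy systems below are standard textbook examples, not drawn from this article.

```python
import numpy as np

def delay(x, k):
    return np.concatenate([np.zeros(k), x[:-k]]) if k else x

def check_time_invariance(system, x, k=3):
    a = system(delay(x, k))       # delay first, then apply the system
    b = delay(system(x), k)       # apply the system first, then delay
    return np.allclose(a, b)

x = np.random.default_rng(2).normal(size=50)
square = lambda x: x ** 2                          # y[n] = x[n]^2
ramp_gain = lambda x: np.arange(len(x)) * x        # y[n] = n * x[n]

print(check_time_invariance(square, x))      # True: no explicit dependence on n
print(check_time_invariance(ramp_gain, x))   # False: the gain depends directly on time
```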
Nevertheless, the Log-Gabor filter has been shown to be particularly useful in image processing applications, because it has been shown to better capture the statistics of natural images. In image processing, there are a few low-level examples of the use of Log-Gabor filters. Edge detection is one such primitive operation, where the edges of the image are labeled. Because edges appear in the frequency domain as high frequencies, it is natural to use a filter such as the Log-Gabor to pick out these edges. These detected edges can be used as the input to a segmentation algorithm or a recognition algorithm. A related problem is corner detection. In corner detection the goal is to find points in the image that are c" https://en.wikipedia.org/wiki/WSDMA,"WSDMA (Wideband Space Division Multiple Access) is a high bandwidth channel access method, developed for multi-transceiver systems such as active array antennas. WSDMA is a beamforming technique suitable for overlay on the latest air-interface protocols including WCDMA and OFDM. WSDMA enabled systems can determine the angle of arrival (AoA) of received signals to spatially divide a cell sector into many sub-sectors. This spatial awareness provides information necessary to maximise Carrier to Noise+Interference Ratio (CNIR) link budget, through a range of digital processing routines. WSDMA facilitates a flexible approach to how uplink and downlink beamforming is performed and is capable of spatial filtering known interference generating locations. Key features Transmit and receive beam shaping and steering Multiple sub-sector path processing Spatial interference filtering Sector activity scan Characteristics and principles of operation Active Panel Antenna Calibration Active Panel Antenna systems, comprising a planar array of micro-radios and associated antenna element, rely upon a comprehensive calibration scheme which is able to correct inter-path signal mismatches in phase, amplitude and latency. This facilitates precise control of the uplink and downlink RF beam pattern and avoids distortion effects that occur in the absence of calibration. Multiple Sub-Sector path processing By dividing the cell sector into a number of sub-sector beams, WSDMA provides the network with spatially filtered signals, maximising link budget through improved antenna gain and interference mitigation. This allows for mobile users in the cell to reduce their uplink power transmission, thereby further reducing interference and minimising both base station and UE power consumption. WSDMA provides simultaneous sector-wide and sub-sector beam processing to improve link performance in multipath environments. Sub-sector beam processing can optimise changing user demographics within th" https://en.wikipedia.org/wiki/Equals%20Pi,"Equals Pi is a painting created by American artist Jean-Michel Basquiat in 1982. The painting was published in GQ magazine in 1983 and W magazine in 2018. History Equals Pi was executed by Jean-Michel Basquiat in 1982, which is considered his most coveted year. The robin egg blue painting contains Basquiat's signature crown motif and a head alongside his characteristic scrawled text with phrases such as ""AMORITE,"" ""TEN YEN"" and ""DUNCE."" The title refers to the mathematical equations incorporated on the right side of the work. The cone refers to the pointed dunce caps depicted in the work. The painting was acquired in 1982 by Anne Dayton, who was the advertising manager of Artforum magazine. 
She purchased it for $7,000 from Basquiat's exhibition at the Fun Gallery in the East Village. At the time the painting was called Still Pi; however, when the work appeared in the March 1983 issue of GQ magazine, it was titled Knowledge of the Cone, which is written on the top of the painting. According to reports in August 2021, the luxury jewelry brand Tiffany & Co. had recently acquired the painting privately from the Sabbadini family, for a price in the range of $15 million to $20 million. The painting, which is the brand's signature blue color, is displayed in the Tiffany & Co. Landmark store on Fifth Avenue in New York City. Although initial reports claimed that the painting was never seen before, it was previously offered at auction twice and had appeared in magazines. The work was first offered at a Sotheby's sale in London in June 1990, where it went unsold. In December 1996, the Sabbadinis, a Milan-based clan behind the eponymous jewelry house, purchased it during a Sotheby's London auction for $253,000. Mother and daughter Stefania and Micól Sabbadini posed in front of the painting in their living room for a 2018 feature in W magazine. Stephen Torton, a former assistant of Basquiat’s, posted an Instagram statement saying, “I designed and built stretchers, painted ba" https://en.wikipedia.org/wiki/Dark%20current%20%28physics%29,"In physics and in electronic engineering, dark current is the relatively small electric current that flows through photosensitive devices such as a photomultiplier tube, photodiode, or charge-coupled device even when no photons enter the device; it consists of the charges generated in the detector when no outside radiation is entering the detector. It is referred to as reverse bias leakage current in non-optical devices and is present in all diodes. Physically, dark current is due to the random generation of electrons and holes within the depletion region of the device. The charge generation rate is related to specific crystallographic defects within the depletion region. Dark-current spectroscopy can be used to determine the defects present by monitoring the peaks in the dark current histogram's evolution with temperature. Dark current is one of the main sources of noise in image sensors such as charge-coupled devices. The pattern of different dark currents can result in a fixed-pattern noise; dark frame subtraction can remove an estimate of the mean fixed pattern, but there still remains a temporal noise, because the dark current itself has a shot noise. This dark current is the same as that studied in PN-junction studies." https://en.wikipedia.org/wiki/MCDRAM,"Multi-Channel DRAM or MCDRAM (pronounced em cee dee ram) is a 3D-stacked DRAM that is used in the Intel Xeon Phi processor codenamed Knights Landing. It is a version of Hybrid Memory Cube developed in partnership with Micron Technology, and a competitor to High Bandwidth Memory. The many cores in the Xeon Phi processors, along with their associated vector processing units, enable them to consume many more gigabytes per second than traditional DRAM DIMMs can supply. The ""Multi-channel"" part of the MCDRAM full name reflects the cores having many more channels available to access the MCDRAM than processors have to access their attached DIMMs. This high channel count leads to MCDRAM's high bandwidth, up to 400+ GB/s, although the latencies are similar to a DIMM access.
Its physical placement on the processor imposes some limits on capacity – up to 16 GB at launch, although speculated to go higher in the future. Programming The memory can be partitioned at boot time, with some used as cache for more distant DDR, and the remainder mapped into the physical address space. The application can request pages of virtual memory to be assigned either to the distant DDR directly, to the portion of DDR that is cached by the MCDRAM, or to the portion of the MCDRAM that is not being used as cache. One way to do this is via the memkind API. When used as cache, the latency of a miss accessing both the MCDRAM and DDR is slightly higher than going directly to DDR, and so applications may need to be tuned to avoid excessive cache misses." https://en.wikipedia.org/wiki/Gate%20array,"A gate array is an approach to the design and manufacture of application-specific integrated circuits (ASICs) using a prefabricated chip with components that are later interconnected into logic devices (e.g. NAND gates, flip-flops, etc.) according to custom order by adding metal interconnect layers in the factory. It was popular during the upheaval in the semiconductor industry in the 1980s, and its usage declined by the end of the 1990s. Similar technologies have also been employed to design and manufacture analog, analog-digital, and structured arrays, but, in general, these are not called gate arrays. Gate arrays have also been known as uncommitted logic arrays (ULAs), which also offered linear circuit functions, and semi-custom chips. History Development Gate arrays had several concurrent development paths. Ferranti in the UK pioneered commercializing bipolar ULA technology, offering circuits of ""100 to 10,000 gates and above"" by 1983. The company's early lead in semi-custom chips, with the initial application of a ULA integrated circuit involving a camera from Rollei in 1972, expanding to ""practically all European camera manufacturers"" as users of the technology, led to the company's dominance in this particular market throughout the 1970s. However, by 1982, as many as 30 companies had started to compete with Ferranti, reducing the company's market share to around 30 percent. Ferranti's ""major competitors"" were other British companies such as Marconi and Plessey, both of which had licensed technology from another British company, Micro Circuit Engineering. A contemporary initiative, UK5000, also sought to produce a CMOS gate array with ""5,000 usable gates"", with involvement from British Telecom and a number of other major British technology companies. IBM developed proprietary bipolar master slices that it used in mainframe manufacturing in the late 1970s and early 1980s, but never commercialized them externally. Fairchild Semiconductor also flirted brief" https://en.wikipedia.org/wiki/Harvard%20architecture,"The Harvard architecture is a computer architecture with separate storage and signal pathways for instructions and data. It is often contrasted with the von Neumann architecture, where program instructions and data share the same memory and pathways. The term is often stated as having originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself.
However, in the only peer-reviewed published paper on the topic - The Myth of the Harvard Architecture published in the IEEE Annals of the History of Computing - the author demonstrates that: - 'The term “Harvard architecture” was coined decades later, in the context of microcontroller design' and only 'retrospectively applied to the Harvard machines and subsequently applied to RISC microprocessors with separated caches' - 'The so-called “Harvard” and “von Neumann” architectures are often portrayed as a dichotomy, but the various devices labeled as the former have far more in common with the latter than they do with each other.' - 'In short [the Harvard architecture] isn't an architecture and didn't derive from work at Harvard.' Modern processors appear to the user to be systems with von Neumann architectures, with the program code stored in the same main memory as the data. For performance reasons, internally and largely invisible to the user, most designs have separate processor caches for the instructions and data, with separate pathways into the processor for each. This is one form of what is known as the modified Harvard architecture. Harvard architecture is historically, and traditionally, split into two address spaces, but having three, i.e. two extra (and all accessed in each cycle) " https://en.wikipedia.org/wiki/List%20of%20geodesic%20polyhedra%20and%20Goldberg%20polyhedra,"This is a list of selected geodesic polyhedra and Goldberg polyhedra, two infinite classes of polyhedra. Geodesic polyhedra and Goldberg polyhedra are duals of each other. The geodesic and Goldberg polyhedra are parameterized by integers m and n, with and . T is the triangulation number, which is equal to . Icosahedral Octahedral Tetrahedral" https://en.wikipedia.org/wiki/B5000%20instruction%20set,"The Burroughs B5000 was the first stack machine and also the first computer with a segmented virtual memory. The Burroughs B5000 instruction set includes the set of valid operations for the B5000, B5500 and B5700. It is not compatible with the B6500, B7500, B8500 or their successors. Instruction streams on a B5000 contain 12-bit syllables, four to a word. The architecture has two modes, Word Mode and Character Mode, and each has a separate repertoire of syllables. A processor may be either Control State or Normal State, and certain syllables are only permissible in Control State. The architecture does not provide for addressing registers or storage directly; all references are through the 1024 word Program Reference Table (PRT), current code segment, marked locations within the stack or to the A and B registers holding the top two locations on the stack. Burroughs numbers bits in a syllable from 0 (high bit) to 11 (low bit) and in a word from 0 (high bit) to 47 (low bit). Word Mode In Word Mode, there are four types of syllables. The interpretation of the 10-bit relative address in Operand Call and Descriptor Call depends on the setting of several processor flags. For main programs (SALF off) it is always an offset into the Program Reference Table (PRT). Character Mode" https://en.wikipedia.org/wiki/U.S.%20National%20Vegetation%20Classification,"The U.S. National Vegetation Classification (NVC or USNVC) is a scheme for classifying the natural and cultural vegetation communities of the United States. The purpose of this standardized vegetation classification system is to facilitate communication between land managers, scientists, and the public when managing, researching, and protecting plant communities. 
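For reference, the triangulation number of a geodesic or Goldberg polyhedron with parameters m and n is conventionally given by the formula below; this standard expression is supplied here for context, not quoted from the list itself.

```latex
% Triangulation number in terms of the parameters m and n (standard convention)
T = m^2 + mn + n^2 .
```

For example, the parameters (m, n) = (1, 1) give T = 3, and (2, 0) gives T = 4.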
The non-profit group NatureServe maintains the NVC for the U.S. government. See also British National Vegetation Classification Vegetation classification External links The U.S. National Vegetation Classification website ""National Vegetation Classification Standard, Version 2"" FGDC-STD-005-2008, Vegetation Subcommittee, Federal Geographic Data Committee, February 2008 U.S. Geological Survey page about the Vegetation Characterization Program Federal Geographic Data Committee page about the NVC Environment of the United States Flora of the United States NatureServe Biological classification" https://en.wikipedia.org/wiki/Metatheorem,"In logic, a metatheorem is a statement about a formal system proven in a metalanguage. Unlike theorems proved within a given formal system, a metatheorem is proved within a metatheory, and may reference concepts that are present in the metatheory but not the object theory. A formal system is determined by a formal language and a deductive system (axioms and rules of inference). The formal system can be used to prove particular sentences of the formal language with that system. Metatheorems, however, are proved externally to the system in question, in its metatheory. Common metatheories used in logic are set theory (especially in model theory) and primitive recursive arithmetic (especially in proof theory). Rather than demonstrating particular sentences to be provable, metatheorems may show that each of a broad class of sentences can be proved, or show that certain sentences cannot be proved. Examples Examples of metatheorems include: The deduction theorem for first-order logic says that a sentence of the form φ→ψ is provable from a set of axioms A if and only if the sentence ψ is provable from the system whose axioms consist of φ and all the axioms of A. The class existence theorem of von Neumann–Bernays–Gödel set theory states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. Consistency proofs of systems such as Peano arithmetic. See also Metamathematics Use–mention distinction" https://en.wikipedia.org/wiki/Modulo%20%28mathematics%29,"In mathematics, the term modulo (""with respect to a modulus of"", the Latin ablative of modulus which itself means ""a small measure"") is often used to assert that two distinct mathematical objects can be regarded as equivalent—if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings—some exact and some imprecise (such as equating ""modulo"" with ""except for""). For the most part, the term often occurs in statements of the form: A is the same as B modulo C which is often equivalent to ""A is the same as B up to C"", and means A and B are the same—except for differences accounted for or explained by C. History Modulo is a mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801. Given the integers a, b and n, the expression ""a ≡ b (mod n)"", pronounced ""a is congruent to b modulo n"", means that a − b is an integer multiple of n, or equivalently, a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means ""a small measure."" The term has gained many meanings over the years—some exact and some imprecise. 
The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb. More informally, the term is found in statements of the form: A is the same as B modulo C which means A and B are the same—except for differences accounted for or explained by C. Usage Original use Gauss originally intended to use ""modulo"" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced ""a is congruent to b modulo n"") means that a − b is an integer multiple of n, or equivalently, a and b both leave the same remainder when divided by n. For example: 13 is congruent to 63 modulo 10 means that 13 − 63 is a" https://en.wikipedia.org/wiki/Mathematica%3A%20A%20World%20of%20Numbers...%20and%20Beyond,"Mathematica: A World of Numbers... and Beyond is a kinetic and static exhibition of mathematical concepts designed by Charles and Ray Eames, originally debuted at the California Museum of Science and Industry in 1961. Duplicates have since been made, and they (as well as the original) have been moved to other institutions. History In March, 1961 a new science wing at the California Museum of Science and Industry in Los Angeles opened. The IBM Corporation had been asked by the Museum to make a contribution; IBM in turn asked the famous California designer team of Charles Eames and his wife Ray Eames to come up with a good proposal. The result was that the Eames Office was commissioned by IBM to design an interactive exhibition called Mathematica: A World of Numbers... and Beyond. This was the first of many exhibitions designed by the Eames Office. The exhibition stayed at the Museum until January 1998, making it the longest running of any corporate sponsored museum exhibition. Furthermore, it is the only one of the dozens of exhibitions designed by the Office of Charles and Ray Eames that is still extant. This original Mathematica exhibition was reassembled for display at the Alyce de Roulet Williamson Gallery at Art Center College of Design in Pasadena, California, July 30 through October 1, 2000. It is now owned by and on display at the New York Hall of Science, though it currently lacks the overhead plaques with quotations from mathematicians that were part of the original installation. Duplicates In November, 1961 an exact duplicate was made for Chicago's Museum of Science and Industry, where it was shown until late 1980. From there it was sold and relocated to the Museum of Science in Boston, Massachusetts, where it is permanently on display. The Boston installation bears the closest resemblance to the original Eames design, including numerous overhead plaques featuring historic quotations from famous mathematicians. As part of a refurbishment, a graphic p" https://en.wikipedia.org/wiki/Hann%20function,"The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing. The function, with length and amplitude is given by:   For digital signal processing, the function is sampled symmetrically (with spacing and amplitude ): which is a sequence of samples, and can be even or odd. (see ) It is also known as the raised cosine window, Hann filter, von Hann window, etc. 
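A tiny worked check of the congruence just defined, using the article's own example of 13 and 63 modulo 10:

```python
# a ≡ b (mod n) exactly when a - b is an integer multiple of n, or equivalently
# when a and b leave the same remainder on division by n.
def congruent(a, b, n):
    return (a - b) % n == 0

print(congruent(13, 63, 10))     # True: 13 - 63 = -50 = -5 * 10
print(13 % 10 == 63 % 10)        # True: both leave remainder 3
```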
Fourier transform The Fourier transform of is given by: Discrete transforms The Discrete-time Fourier transform (DTFT) of the length, time-shifted sequence is defined by a Fourier series, which also has a 3-term equivalent that is derived similarly to the Fourier transform derivation: The truncated sequence is a DFT-even (aka periodic) Hann window. Since the truncated sample has value zero, it is clear from the Fourier series definition that the DTFTs are equivalent. However, the approach followed above results in a significantly different-looking, but equivalent, 3-term expression: An N-length DFT of the window function samples the DTFT at frequencies for integer values of From the expression immediately above, it is easy to see that only 3 of the N DFT coefficients are non-zero. And from the other expression, it is apparent that all are real-valued. These properties are appealing for real-time applications that require both windowed and non-windowed (rectangularly windowed) transforms, because the windowed transforms can be efficiently derived from the non-windowed transforms by convolution. Name The function is named in honor of von Hann, who used the three-term weighted average smoothing technique on meteorological data. However, the term Hanning function is also conventionally used, derived from the paper in which the term hanning a signal was used to mean applying the Hann window to it. The confusion arose from the similar Hamming function, named after Richard Hamming. See also Window function Apod" https://en.wikipedia.org/wiki/Great%20Elephant%20Census,"The Great Elephant Census—the largest wildlife survey in history—was an African-wide census designed to provide accurate data about the number and distribution of African elephants by using standardized aerial surveys of hundreds of thousands of square miles of terrain in Africa. The census was completed and published in the online journal PeerJ on 31 August 2016 at a cost of US$7 million. History Scientists believe that there were as many as 20 million African elephants two centuries ago. By 1979, only 600,000 elephants remained on the continent. A pan-African elephant census had not been conducted since the 1970s. The idea of a modern census was devised by Elephants Without Borders and supported, both financially and logistically, by Paul G. Allen. It was also supported by other organizations and individuals, including African Parks, Frankfurt Zoological Society, Wildlife Conservation Society, The Nature Conservancy, IUCN African Elephant Specialist Group, Howard Frederick, Mike Norton-Griffith, Kevin Dunham, Chris Touless, and Curtice Griffin, with the report released in September 2016. Mike Chase, the founder of Elephants Without Borders, was the lead scientist of the census. Chase led a group of 90 scientists and 286 crew in 18 African countries for over two years to collect the data. During this time the team flew a distance of over , equivalent to flying to the moon and a quarter of the way back, in over 10,000 hours of collecting data. The area covered represents 93% of the elephants' known range. Forest elephants, which live in central and western Africa, were excluded from the survey. Report The final report was released on 31 August 2016 in Honolulu at the IUCN World Conservation Congress. Data collected showed a 30 percent decline in the population of African savanna elephants in 15 of the 18 countries surveyed. The reduction occurred between 2007 and 2014, representing a loss of approximately 144,000 elephants.
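A short sketch of the symmetric sampled Hann window in the form w[n] = 0.5 (1 - cos(2πn/N)) for n = 0..N, checked against NumPy's built-in implementation as a sanity test; the window length is an arbitrary choice.

```python
import numpy as np

def hann(N):
    n = np.arange(N + 1)                      # N+1 symmetric samples
    return 0.5 * (1 - np.cos(2 * np.pi * n / N))

w = hann(8)
print(np.allclose(w, np.hanning(9)))          # True: same symmetric window
```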
The total population of Africa's savan" https://en.wikipedia.org/wiki/Coherence%20%28signal%20processing%29,"In signal processing, the coherence is a statistic that can be used to examine the relation between two signals or data sets. It is commonly used to estimate the power transfer between input and output of a linear system. If the signals are ergodic, and the system function is linear, it can be used to estimate the causality between the input and output. Definition and formulation The coherence (sometimes called magnitude-squared coherence) between two signals x(t) and y(t) is a real-valued function that is defined as: where Gxy(f) is the Cross-spectral density between x and y, and Gxx(f) and Gyy(f) the auto spectral density of x and y respectively. The magnitude of the spectral density is denoted as |G|. Given the restrictions noted above (ergodicity, linearity) the coherence function estimates the extent to which y(t) may be predicted from x(t) by an optimum linear least squares function. Values of coherence will always satisfy . For an ideal constant parameter linear system with a single input x(t) and single output y(t), the coherence will be equal to one. To see this, consider a linear system with an impulse response h(t) defined as: , where denotes convolution. In the Fourier domain this equation becomes , where Y(f) is the Fourier transform of y(t) and H(f) is the linear system transfer function. Since, for an ideal linear system: and , and since is real, the following identity holds, . However, in the physical world an ideal linear system is rarely realized, noise is an inherent component of system measurement, and it is likely that a single input, single output linear system is insufficient to capture the complete system dynamics. In cases where the ideal linear system assumptions are insufficient, the Cauchy–Schwarz inequality guarantees a value of . If Cxy is less than one but greater than zero it is an indication that either: noise is entering the measurements, that the assumed function relating x(t) and y(t) is not linear, or that y(t) is produ" https://en.wikipedia.org/wiki/Food%20safety,"Food safety (or food hygiene) is used as a scientific method/discipline describing handling, preparation, and storage of food in ways that prevent foodborne illness. The occurrence of two or more cases of a similar illness resulting from the ingestion of a common food is known as a food-borne disease outbreak. This includes a number of routines that should be followed to avoid potential health hazards. In this way, food safety often overlaps with food defense to prevent harm to consumers. The tracks within this line of thought are safety between industry and the market and then between the market and the consumer. In considering industry-to-market practices, food safety considerations include the origins of food including the practices relating to food labeling, food hygiene, food additives and pesticide residues, as well as policies on biotechnology and food and guidelines for the management of governmental import and export inspection and certification systems for foods. In considering market-to-consumer practices, the usual thought is that food ought to be safe in the market and the concern is safe delivery and preparation of the food for the consumer. Food safety, nutrition and food security are closely related. Unhealthy food creates a cycle of disease and malnutrition that affects infants and adults as well. 
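A short sketch of estimating the magnitude-squared coherence defined above from data, using Welch-style spectral estimates from SciPy; the simulated linear system and noise level are assumptions chosen so the estimate falls strictly between 0 and 1.

```python
# Cxy(f) = |Gxy(f)|^2 / (Gxx(f) * Gyy(f)), estimated with scipy.signal.coherence.
import numpy as np
from scipy.signal import coherence, lfilter

rng = np.random.default_rng(3)
fs = 1024.0
x = rng.normal(size=8 * 1024)                    # input: white noise
y = lfilter([1.0], [1.0, -0.8], x)               # output of a simple linear system
y += 0.3 * rng.normal(size=x.size)               # measurement noise pushes Cxy below 1

f, Cxy = coherence(x, y, fs=fs, nperseg=256)
print(Cxy.min(), Cxy.max())                      # values stay within [0, 1]
```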
Food can transmit pathogens, which can result in the illness or death of the person or other animals. The main types of pathogens are bacteria, viruses, parasites, and fungus. The WHO Foodborne Disease Epidemiology Reference Group conducted the only study that solely and comprehensively focused on the global health burden of foodborne diseases. This study, which involved the work of over 60 experts for a decade, is the most comprehensive guide to the health burden of foodborne diseases. The first part of the study revealed that 31 foodborne hazards considered priority accounted for roughly 420,000 deaths in LMIC and posed a burden of about 33 million disa" https://en.wikipedia.org/wiki/Rolanet,"Rolanet (Robotron Local Area Network) was a networking standard, developed in the former German Democratic Republic (GDR) and introduced in 1987 by the computer manufacturer Robotron. It enabled computer networking over coax cable and glass fiber with a range of . Networking speed was 500 kBd, comparable to other standards of the day. A maximum of 253 computers could be connected using Rolanet. Two variants of Rolanet existed: Rolanet 1, introduced in 1987, saw limited deployment; Rolanet 2 was planned as a successor to Rolanet 1, but presumably never got beyond the prototype stage. A scaled-down version of Rolanet, BICNet, was used for educational purposes. It is no longer possible to assemble a functioning Rolanet system today, due to lack of software and working hardware. External links More information about Robotron networking technologies on Robotrontechnik.de Computer networking Science and technology in East Germany" https://en.wikipedia.org/wiki/Total%20variation%20denoising,"In signal processing, particularly image processing, total variation denoising, also known as total variation regularization or total variation filtering, is a noise removal process (filter). It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the image gradient magnitude is high. According to this principle, reducing the total variation of the signal—subject to it being a close match to the original signal—removes unwanted detail whilst preserving important details such as edges. The concept was pioneered by L. I. Rudin, S. Osher, and E. Fatemi in 1992 and so is today known as the ROF model. This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering which reduce noise but at the same time smooth away edges to a greater or lesser degree. By contrast, total variation denoising is a remarkably effective edge-preserving filter, i.e., simultaneously preserving edges whilst smoothing away noise in flat regions, even at low signal-to-noise ratios. 1D signal series For a digital signal , we can, for example, define the total variation as Given an input signal , the goal of total variation denoising is to find an approximation, call it , that has smaller total variation than but is ""close"" to . One measure of closeness is the sum of square errors: So the total-variation denoising problem amounts to minimizing the following discrete functional over the signal : By differentiating this functional with respect to , we can derive a corresponding Euler–Lagrange equation, that can be numerically integrated with the original signal as initial condition. This was the original approach. 
Alternatively, since this is a convex functional, techniques from convex optimization can be used to minimize it and find the solution . Regularization properties The regularization parameter plays a critical role in the denoising process. Wh" https://en.wikipedia.org/wiki/Vkernel,"A virtual kernel architecture (vkernel) is an operating system virtualisation paradigm where kernel code can be compiled to run in the user space, for example, to ease debugging of various kernel-level components, in addition to general-purpose virtualisation and compartmentalisation of system resources. It is used by DragonFly BSD in its vkernel implementation since DragonFly 1.7, having been first revealed in , and first released in the stable branch with DragonFly 1.8 in . The long-term goal, in addition to easing kernel development, is to make it easier to support internet-connected computer clusters without compromising local security. Similar concepts exist in other operating systems as well; in Linux, a similar virtualisation concept is known as user-mode Linux; whereas in NetBSD since the summer of 2007, it has been the initial focus of the rump kernel infrastructure. The virtual kernel concept is nearly the exact opposite of the unikernel concept — with vkernel, kernel components get to run in userspace to ease kernel development and debugging, supported by a regular operating system kernel; whereas with a unikernel, userspace-level components get to run directly in kernel space for extra performance, supported by baremetal hardware or a hardware virtualisation stack. However, both vkernels and unikernels can be used for similar tasks as well, for example, to self-contain software to a virtualised environment with low overhead. In fact, NetBSD's rump kernel, originally having a focus of running kernel components in userspace, has since shifted into the unikernel space as well (going after the anykernel moniker for supporting both paradigms). The vkernel concept is different from a FreeBSD jail in that a jail is only meant for resource isolation, and cannot be used to develop and test new kernel functionality in the userland, because each jail is sharing the same kernel. (DragonFly, however, still has FreeBSD jail support as well.) In DragonFly, the v" https://en.wikipedia.org/wiki/Beraha%20constants,"The Beraha constants are a series of mathematical constants by which the Beraha constant is given by Notable examples of Beraha constants include is , where is the golden ratio, is the silver constant (also known as the silver root), and . The following table summarizes the first ten Beraha constants. See also Chromatic polynomial Notes" https://en.wikipedia.org/wiki/Gardner%E2%80%93Salinas%20braille%20codes,"The Gardner–Salinas braille codes are a method of encoding mathematical and scientific notation linearly using braille cells for tactile reading by the visually impaired. The most common form of Gardner–Salinas braille is the 8-cell variety, commonly called GS8. There is also a corresponding 6-cell form called GS6. The codes were developed as a replacement for Nemeth Braille by John A. Gardner, a physicist at Oregon State University, and Norberto Salinas, an Argentinian mathematician. The Gardner–Salinas braille codes are an example of a compact human-readable markup language. The syntax is based on the LaTeX system for scientific typesetting. 
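The defining expression for the Beraha constants is elided in the entry above; assuming the standard closed form B(n) = 4·cos²(π/n), a short check of the values mentioned there (the golden-ratio case B(5) and the silver constant B(7)) might look like this.

```python
import math

def beraha(n):
    # Assumed standard formula (not reproduced in the text): B(n) = 4 * cos(pi/n)**2
    return 4 * math.cos(math.pi / n) ** 2

for n in range(1, 11):
    print(n, round(beraha(n), 6))

golden = (1 + math.sqrt(5)) / 2
print(abs(beraha(5) - (golden + 1)) < 1e-12)   # B(5) = 1 + golden ratio
print(round(beraha(7), 6))                      # ~3.246980, the silver constant
```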
Table of Gardner–Salinas 8-dot (GS8) braille The set of lower-case letters, the period, comma, semicolon, colon, exclamation mark, apostrophe, and opening and closing double quotes are the same as in Grade-2 English Braille. Digits Apart from 0, this is the same as the Antoine notation used in French and Luxembourgish Braille. Upper-case letters GS8 upper-case letters are indicated by the same cell as standard English braille (and GS8) lower-case letters, with dot #7 added. Compare Luxembourgish Braille. Greek letters Dot 8 is added to the letter forms of International Greek Braille to derive Greek letters: Characters differing from English Braille ASCII symbols and mathematical operators Text symbols Math and science symbols Markup * Encodes the fraction-slash for the single adjacent digits/letters as numerator and denominator. * Used for any > 1 digit radicand. ** Used for markup to represent inkprint text. Typeface indicators Shape symbols Set theory" https://en.wikipedia.org/wiki/Hostname,"In computer networking, a hostname (archaically nodename) is a label that is assigned to a device connected to a computer network and that is used to identify the device in various forms of electronic communication, such as the World Wide Web. Hostnames may be simple names consisting of a single word or phrase, or they may be structured. Each hostname usually has at least one numeric network address associated with it for routing packets for performance and other reasons. Internet hostnames may have appended the name of a Domain Name System (DNS) domain, separated from the host-specific label by a period (""dot""). In the latter form, a hostname is also called a domain name. If the domain name is completely specified, including a top-level domain of the Internet, then the hostname is said to be a fully qualified domain name (FQDN). Hostnames that include DNS domains are often stored in the Domain Name System together with the IP addresses of the host they represent for the purpose of mapping the hostname to an address, or the reverse process. Internet hostnames In the Internet, a hostname is a domain name assigned to a host computer. This is usually a combination of the host's local name with its parent domain's name. For example, en.wikipedia.org consists of a local hostname (en) and the domain name wikipedia.org. This kind of hostname is translated into an IP address via the local hosts file, or the Domain Name System (DNS) resolver. It is possible for a single host computer to have several hostnames; but generally the operating system of the host prefers to have one hostname that the host uses for itself. Any domain name can also be a hostname, as long as the restrictions mentioned below are followed. So, for example, both en.wikipedia.org and wikipedia.org are hostnames because they both have IP addresses assigned to them. A hostname may be a domain name, if it is properly organized into the domain name system. A domain name may be a hostname if it has been a" https://en.wikipedia.org/wiki/Recognition%20signal,"A recognition signal is a signal whereby a person, a ship, an airplane or something else is recognized. They can be used during war or can be used to help the police recognize each other during undercover operations. It can also be used in biology to signal that a molecule or chemical is to be bound to another molecule. War These signals are often used to recognize friends and enemies in a war. 
For military use these signals often use colored lights or the International marine signal flags. Police Other uses of the signal include the police who sometimes use a recognition signal so that officers in uniform can recognize officers in normal clothing (undercover). The NYPD often use headbands, wristbands or colored clothing as recognition signals which are known as the ""color of the day"". Biology A recognition signal is also a chemical signal used in biology to signal the end of a section of DNA or RNA during gene duplication in cells. See also Communication International Code of Signals Notes External links Signalman manual Communication Biological techniques and tools Military communications" https://en.wikipedia.org/wiki/Mathematics%20of%20apportionment,"Mathematics of apportionment describes mathematical principles and algorithms for fair allocation of identical items among parties with different entitlements. Such principles are used to apportion seats in parliaments among federal states or political parties. See apportionment (politics) for the more concrete principles and issues related to apportionment, and apportionment by country for practical methods used around the world. Mathematically, an apportionment method is just a method of rounding fractions to integers. As simple as it may sound, each and every method for rounding suffers from one or more paradoxes. The mathematical theory of apportionment aims to decide what paradoxes can be avoided, or in other words, what properties can be expected from an apportionment method. The mathematical theory of apportionment was studied as early as 1907 by the mathematician Agner Krarup Erlang. It was later developed to a great detail by the mathematician Michel Balinsky and the economist Peyton Young. Besides its application to political parties, it is also applicable to fair item allocation when agents have different entitlements. It is also relevant in manpower planning - where jobs should be allocated in proportion to characteristics of the labor pool, to statistics - where the reported rounded numbers of percentages should sum up to 100%, and to bankruptcy problems. Definitions Input The inputs to an apportionment method are: A positive integer representing the total number of items to allocate. It is also called the house size, since in many cases, the items to allocate are seats in a house of representatives. A positive integer representing the number of agents to which items should be allocated. For example, these can be federal states or political parties. A vector of numbers representing entitlements - represents the entitlement of agent , that is, the amount of items to which is entitled (out of the total of ). These entitlements are often norma" https://en.wikipedia.org/wiki/Constant%20amplitude%20zero%20autocorrelation%20waveform,"In signal processing, a Constant Amplitude Zero AutoCorrelation waveform (CAZAC) is a periodic complex-valued signal with modulus one and out-of-phase periodic (cyclic) autocorrelations equal to zero. CAZAC sequences find application in wireless communication systems, for example in 3GPP Long Term Evolution for synchronization of mobile phones with base stations. Zadoff–Chu sequences are well-known CAZAC sequences with special properties. Example CAZAC Sequence For a CAZAC sequence of length where is relatively prime to the th symbol is given by: Even N Odd N Power Spectrum of CAZAC Sequence The power spectrum of a CAZAC sequence is flat. 
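The even-N/odd-N formulas are not reproduced in the CAZAC entry above, so the sketch below uses the commonly cited Zadoff–Chu construction (an assumption of this sketch) and numerically checks the three properties discussed here: unit modulus, impulse-like cyclic autocorrelation, and a flat power spectrum.

```python
import numpy as np

def zadoff_chu(N, q):
    """Zadoff-Chu sequence of length N with root q, gcd(q, N) == 1 (assumed form)."""
    n = np.arange(N)
    if N % 2 == 0:
        return np.exp(-1j * np.pi * q * n ** 2 / N)
    return np.exp(-1j * np.pi * q * n * (n + 1) / N)

N, q = 13, 3
x = zadoff_chu(N, q)
autocorr = np.array([np.vdot(x, np.roll(x, k)) for k in range(N)])

print(np.allclose(np.abs(x), 1.0))                       # constant amplitude
print(np.allclose(autocorr[1:], 0.0, atol=1e-9))         # zero out-of-phase autocorrelation
print(np.allclose(np.abs(np.fft.fft(x)), np.sqrt(N)))    # flat power spectrum
```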
If we have a CAZAC sequence the time domain autocorrelation is an impulse The discrete fourier transform of the autocorrelation is flat Power spectrum is related to autocorrelation by As a result the power spectrum is also flat." https://en.wikipedia.org/wiki/Cambrian%20explosion,"The Cambrian explosion, Cambrian radiation, Cambrian diversification, or the Biological Big Bang refers to an interval of time approximately in the Cambrian Period of early Paleozoic when there was a sudden radiation of complex life and practically all major animal phyla started appearing in the fossil record. It lasted for about 13 – 25 million years and resulted in the divergence of most modern metazoan phyla. The event was accompanied by major diversification in other groups of organisms as well. Before early Cambrian diversification, most organisms were relatively simple, composed of individual cells, or small multicellular organisms, occasionally organized into colonies. As the rate of diversification subsequently accelerated, the variety of life became much more complex, and began to resemble that of today. Almost all present-day animal phyla appeared during this period, including the earliest chordates. A 2019 paper suggests that the timing should be expanded back to include the late Ediacaran, where another diverse soft-bodied biota existed and possibly persisted into the Cambrian, rather than just the narrower timeframe of the ""Cambrian Explosion"" event visible in the fossil record, based on analysis of chemicals that would have laid the building blocks for a progression of transitional radiations starting with the Ediacaran period and continuing at a similar rate into the Cambrian. History and significance The seemingly rapid appearance of fossils in the ""Primordial Strata"" was noted by William Buckland in the 1840s, and in his 1859 book On the Origin of Species, Charles Darwin discussed the then-inexplicable lack of earlier fossils as one of the main difficulties for his theory of descent with slow modification through natural selection. The long-running puzzlement about the seemingly-sudden appearance of the Cambrian fauna without evident precursor(s) centers on three key points: whether there really was a mass diversification of complex organisms " https://en.wikipedia.org/wiki/Wafer-scale%20integration,"Wafer-scale integration (WSI) is a rarely used system of building very-large integrated circuit (commonly called a ""chip"") networks from an entire silicon wafer to produce a single ""super-chip"". Combining large size and reduced packaging, WSI was expected to lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the state of the art when WSI was being developed. Overview In the normal integrated circuit manufacturing process, a single large cylindrical crystal (boule) of silicon is produced and then cut into disks known as wafers. The wafers are then cleaned and polished in preparation for the fabrication process. A photographic process is used to pattern the surface where material ought to be deposited on top of the wafer and where not to. The desired material is deposited and the photographic mask is removed for the next layer. From then on the wafer is repeatedly processed in this fashion, putting on layer after layer of circuitry on the surface. Multiple copies of these patterns are deposited on the wafer in a grid fashion across the surface of the wafer. 
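Returning to the mathematics-of-apportionment entry above: as a concrete instance of "rounding fractions to integers", here is a minimal sketch of one classical method (largest remainder, also called Hamilton's method). The choice of method and the toy numbers are this sketch's, not the article's.

```python
import math

def largest_remainder(house_size, entitlements):
    """Allocate house_size seats in proportion to the entitlement vector."""
    total = sum(entitlements)
    quotas = [house_size * t / total for t in entitlements]   # fractional fair shares
    seats = [math.floor(q) for q in quotas]                   # round down first
    leftover = house_size - sum(seats)
    by_remainder = sorted(range(len(quotas)),
                          key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in by_remainder[:leftover]:                         # give out what is left
        seats[i] += 1
    return seats

print(largest_remainder(10, [48, 33, 19]))   # -> [5, 3, 2]
```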
After all the possible locations are patterned, the wafer surface appears like a sheet of graph paper, with grid lines delineating the individual chips. Each of these grid locations is tested for manufacturing defects by automated equipment. Those locations that are found to be defective are recorded and marked with a dot of paint (this process is referred to as ""inking a die"" and more modern wafer fabrication techniques no longer require physical markings to identify defective die). The wafer is then sawed apart to cut out the individual chips. Those defective chips are thrown away, or recycled, while the working chips are placed into packaging and re-tested for any damage that might occur during the packaging process. Flaws on the surface of the wafers and problems during the layering/depositing process a" https://en.wikipedia.org/wiki/Joan%20Mott%20Prize%20Lecture,"The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture" https://en.wikipedia.org/wiki/Kernel-phase,"Kernel-phases are observable quantities used in high resolution astronomical imaging used for superresolution image creation. It can be seen as a generalization of closure phases for redundant arrays. For this reason, when the wavefront quality requirement are met, it is an alternative to aperture masking interferometry that can be executed without a mask while retaining phase error rejection properties. The observables are computed through linear algebra from the Fourier transform of direct images. They can then be used for statistical testing, model fitting, or image reconstruction. Prerequisites In order to extract kernel-phases from an image, some requirements must be met: Images are nyquist-sampled (at least 2 pixels per resolution element ()) Images are taken in near monochromatic light Exposure time is shorter than the timescale of aberrations Strehl ratio is high (good adaptive optics) Linearity of the pixel response (i.e. no saturation) Deviations from these requirements are known to be acceptable, but lead to observational bias that should be corrected by the observation of calibrators. Definition The method relies on a discrete model of the instrument's pupil plane and the corresponding list of baselines to provide corresponding vectors of pupil plane errors and of image plane Fourier Phases. When the wavefront error in the pupil plane is small enough (i.e. when the Strehl ratio of the imaging system is sufficiently high), the complex amplitude associated to the instrumental phase in one point of the pupil , can be approximated by . This permits the expression of the pupil-plane phase aberrations to the image plane Fourier phase as a linear transformation described by the matrix : Where is the theoretical Fourier phase vector of the object. In this formalism, singular value decomposition can be used to find a matrix satisfying . The rows of constitute a basis of the kernel of . The vector is called the kernel-phase vector of observab" https://en.wikipedia.org/wiki/MPLAB,"MPLAB is a proprietary freeware integrated development environment for the development of embedded applications on PIC and dsPIC microcontrollers, and is developed by Microchip Technology. MPLAB X is the latest edition of MPLAB, and is developed on the NetBeans platform. 
MPLAB and MPLAB X support project management, code editing, debugging and programming of Microchip 8-bit PIC and AVR (including ATMEGA) microcontrollers, 16-bit PIC24 and dsPIC microcontrollers, as well as 32-bit SAM (ARM) and PIC32 (MIPS) microcontrollers. MPLAB is designed to work with MPLAB-certified devices such as the MPLAB ICD 3 and MPLAB REAL ICE, for programming and debugging PIC microcontrollers using a personal computer. PICKit programmers are also supported by MPLAB. MPLAB X supports automatic code generation with the MPLAB Code Configurator and the MPLAB Harmony Configurator plugins. MPLAB X MPLAB X is the latest version of the MPLAB IDE built by Microchip Technology, and is based on the open-source NetBeans platform. MPLAB X supports editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. MPLAB X is the first version of the IDE to include cross-platform support for macOS and Linux operating systems, in addition to Microsoft Windows. MPLAB X supports the following compilers: MPLAB XC8 — C compiler for 8-bit PIC and AVR devices MPLAB XC16 — C compiler for 16-bit PIC devices MPLAB XC32 — C/C++ compiler for 32-bit MIPS-based PIC32 and ARM-based SAM devices HI-TECH C — C compiler for 8-bit PIC devices (discontinued) SDCC — open-source C compiler MPLAB 8.x MPLAB 8.x is the last version of the legacy MPLAB IDE technology, custom built by Microchip Technology in Microsoft Visual C++. MPLAB supports project management, editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. MPLAB only works on Microsoft Windows. MPLAB is still available from Microchip's archives, but is not recommended for new projects. MP" https://en.wikipedia.org/wiki/List%20of%20operator%20splitting%20topics,"This is a list of operator splitting topics. General Alternating direction implicit method — finite difference method for parabolic, hyperbolic, and elliptic partial differential equations GRADELA — simple gradient elasticity model Matrix splitting — general method of splitting a matrix operator into a sum or difference of matrices Paul Tseng — resolved question on convergence of matrix splitting algorithms PISO algorithm — pressure-velocity calculation for Navier-Stokes equations Projection method (fluid dynamics) — computational fluid dynamics method Reactive transport modeling in porous media — modeling of chemical reactions and fluid flow through the Earth's crust Richard S. Varga — developed matrix splitting Strang splitting — specific numerical method for solving differential equations using operator splitting Numerical analysis Mathematics-related lists Outlines of mathematics and logic Outlines" https://en.wikipedia.org/wiki/Conventionally%20grown,"Conventionally grown is an agriculture term referring to a method of growing edible plants (such as fruit and vegetables) and other products. It is opposite to organic growing methods which attempt to produce without synthetic chemicals (fertilizers, pesticides, antibiotics, hormones) or genetically modified organisms. Conventionally grown products, meanwhile, often use fertilizers and pesticides which allow for higher yield, out of season growth, greater resistance, greater longevity and a generally greater mass. Conventionally grown fruit: PLU code consists of 4 numbers (e.g. 4012). Organically grown fruit: PLU code consists of 5 numbers and begins with 9 (e.g. 94012) Genetically engineered fruit: PLU code consists of 5 numbers and begins with 8 (e.g. 84012). 
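A tiny, purely illustrative sketch of the PLU-prefix convention described above.

```python
def classify_plu(code):
    """Classify a produce PLU code by the length/prefix rules given above."""
    code = str(code)
    if len(code) == 4 and code.isdigit():
        return "conventionally grown"
    if len(code) == 5 and code.startswith("9"):
        return "organically grown"
    if len(code) == 5 and code.startswith("8"):
        return "genetically engineered"
    return "unrecognized"

print(classify_plu(4012))    # conventionally grown
print(classify_plu(94012))   # organically grown
print(classify_plu(84012))   # genetically engineered
```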
Food science" https://en.wikipedia.org/wiki/List%20of%20polyhedral%20stellations,"In the geometry of three dimensions, a stellation extends a polyhedron to form a new figure that is also a polyhedron. The following is a list of stellations of various polyhedra. See also List of Wenninger polyhedron models The Fifty-Nine Icosahedra Footnotes" https://en.wikipedia.org/wiki/Pole%E2%80%93zero%20plot,"In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as: Stability Causal system / anticausal system Region of convergence (ROC) Minimum phase / non minimum phase A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole-zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system: Continuous-time systems use the Laplace transform and are plotted in the s-plane: Real frequency components are along its vertical axis (the imaginary line where ) Discrete-time systems use the Z-transform and are plotted in the z-plane: Real frequency components are along its unit circle Continuous-time systems In general, a rational transfer function for a continuous-time LTI system has the form: where and are polynomials in , is the order of the numerator polynomial, is the coefficient of the numerator polynomial, is the order of the denominator polynomial, and is the coefficient of the denominator polynomial. Either or or both may be zero, but in real systems, it should be the case that ; otherwise the gain would be unbounded at high frequencies. Poles and zeros the zeros of the system are roots of the numerator polynomial: such that the poles of the system are roots of the denominator polynomial: such that Region of convergence The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC " https://en.wikipedia.org/wiki/List%20of%20order%20theory%20topics,"Order theory is a branch of mathematics that studies various kinds of objects (often binary relations) that capture the intuitive notion of ordering, providing a framework for saying when one thing is ""less than"" or ""precedes"" another. An alphabetical list of many notions of order theory can be found in the order theory glossary. See also inequality, extreme value and mathematical optimization. 
Overview Partially ordered set Preorder Totally ordered set Total preorder Chain Trichotomy Extended real number line Antichain Strict order Hasse diagram Directed acyclic graph Duality (order theory) Product order Distinguished elements of partial orders Greatest element (maximum, top, unit), Least element (minimum, bottom, zero) Maximal element, minimal element Upper bound Least upper bound (supremum, join) Greatest lower bound (infimum, meet) Limit superior and limit inferior Irreducible element Prime element Compact element Subsets of partial orders Cofinal and coinitial set, sometimes also called dense Meet-dense set and join-dense set Linked set (upwards and downwards) Directed set (upwards and downwards) centered and σ-centered set Net (mathematics) Upper set and lower set Ideal and filter Ultrafilter Special types of partial orders Completeness (order theory) Dense order Distributivity (order theory) modular lattice distributive lattice completely distributive lattice Ascending chain condition Infinite descending chain Countable chain condition, often abbreviated as ccc Knaster's condition, sometimes denoted property (K) Well-orders Well-founded relation Ordinal number Well-quasi-ordering Completeness properties Semilattice Lattice (Directed) complete partial order, (d)cpo Bounded complete Complete lattice Knaster–Tarski theorem Infinite divisibility Orders with further algebraic operations Heyting algebra Relatively complemented lattice Complete Heyting algebra Pointless topology MV-algebra Ockham algebras: Stone algebra De Morgan algebra Kleene alg" https://en.wikipedia.org/wiki/SEMAT,"SEMAT (Software Engineering Method and Theory) is an initiative to reshape software engineering such that software engineering qualifies as a rigorous discipline. The initiative was launched in December 2009 by Ivar Jacobson, Bertrand Meyer, and Richard Soley with a call for action statement and a vision statement. The initiative was envisioned as a multi-year effort for bridging the gap between the developer community and the academic community and for creating a community giving value to the whole software community. The work is now structured in four different but strongly related areas: Practice, Education, Theory, and Community. The Practice area primarily addresses practices. The Education area is concerned with all issues related to training for both the developers and the academics including students. The Theory area is primarily addressing the search for a General Theory in Software Engineering. Finally, the Community area works with setting up legal entities, creating websites and community growth. It was expected that the Practice area, the Education area and the Theory area would at some point in time integrate in a way of value to all of them: the Practice area would be a ""customer"" of the Theory area, and direct the research to useful results for the developer community. The Theory area would give a solid and practical platform for the Practice area. And, the Education area would communicate the results in proper ways. Practice area The first step was here to develop a common ground or a kernel including the essence of software engineering – things we always have, always do, always produce when developing software. The second step was envisioned to add value on top of this kernel in the form of a library of practices to be composed to become specific methods, specific for all kinds of reasons such as the preferences of the team using it, kind of software being built, etc. 
The first step is as of this writing just about to be concluded. The res" https://en.wikipedia.org/wiki/Bridging%20fault,"In electronic engineering, a bridging fault consists of two signals that are connected when they should not be. Depending on the logic circuitry employed, this may result in a wired-OR or wired-AND logic function. Since there are O(n^2) potential bridging faults, they are normally restricted to signals that are physically adjacent in the design. Modeling bridge fault Bridging to VDD or Vss is equivalent to stuck at fault model. Traditionally bridged signals were modeled with logic AND or OR of signals. If one driver dominates the other driver in a bridging situation, the dominant driver forces the logic to the other one, in such case a dominant bridging fault is used. To better reflect the reality of CMOS VLSI devices, a dominant AND or dominant OR bridging fault model is used where dominant driver keeps its value, while the other signal value is the result of AND (or OR) of its own value with the dominant driver." https://en.wikipedia.org/wiki/Catalan%27s%20constant,"In mathematics, Catalan's constant , is defined by where is the Dirichlet beta function. Its numerical value is approximately It is not known whether is irrational, let alone transcendental. has been called ""arguably the most basic constant whose irrationality and transcendence (though strongly suspected) remain unproven"". Catalan's constant was named after Eugène Charles Catalan, who found quickly-converging series for its calculation and published a memoir on it in 1865. Uses In low-dimensional topology, Catalan's constant is 1/4 of the volume of an ideal hyperbolic octahedron, and therefore 1/4 of the hyperbolic volume of the complement of the Whitehead link. It is 1/8 of the volume of the complement of the Borromean rings. In combinatorics and statistical mechanics, it arises in connection with counting domino tilings, spanning trees, and Hamiltonian cycles of grid graphs. In number theory, Catalan's constant appears in a conjectured formula for the asymptotic number of primes of the form according to Hardy and Littlewood's Conjecture F. However, it is an unsolved problem (one of Landau's problems) whether there are even infinitely many primes of this form. Catalan's constant also appears in the calculation of the mass distribution of spiral galaxies. Known digits The number of known digits of Catalan's constant has increased dramatically during the last decades. This is due both to the increase of performance of computers as well as to algorithmic improvements. Integral identities As Seán Stewart writes, ""There is a rich and seemingly endless source of definite integrals that can be equated to or expressed in terms of Catalan's constant."" Some of these expressions include: where the last three formulas are related to Malmsten's integrals. If is the complete elliptic integral of the first kind, as a function of the elliptic modulus , then If is the complete elliptic integral of the second kind, as a function of the elliptic modulus , th" https://en.wikipedia.org/wiki/List%20of%20cohomology%20theories,"This is a list of some of the ordinary and generalized (or extraordinary) homology and cohomology theories in algebraic topology that are defined on the categories of CW complexes or spectra. For other sorts of homology theories see the links at the end of this article. Notation S = π = S0 is the sphere spectrum. 
Sn is the spectrum of the n-dimensional sphere SnY = Sn∧Y is the nth suspension of a spectrum Y. [X,Y] is the abelian group of morphisms from the spectrum X to the spectrum Y, given (roughly) as homotopy classes of maps. [X,Y]n = [SnX,Y] [X,Y]* is the graded abelian group given as the sum of the groups [X,Y]n. πn(X) = [Sn, X] = [S, X]n is the nth stable homotopy group of X. π*(X) is the sum of the groups πn(X), and is called the coefficient ring of X when X is a ring spectrum. X∧Y is the smash product of two spectra. If X is a spectrum, then it defines generalized homology and cohomology theories on the category of spectra as follows. Xn(Y) = [S, X∧Y]n = [Sn, X∧Y] is the generalized homology of Y, Xn(Y) = [Y, X]−n = [S−nY, X] is the generalized cohomology of Y Ordinary homology theories These are the theories satisfying the ""dimension axiom"" of the Eilenberg–Steenrod axioms that the homology of a point vanishes in dimension other than 0. They are determined by an abelian coefficient group G, and denoted by H(X, G) (where G is sometimes omitted, especially if it is Z). Usually G is the integers, the rationals, the reals, the complex numbers, or the integers mod a prime p. The cohomology functors of ordinary cohomology theories are represented by Eilenberg–MacLane spaces. On simplicial complexes, these theories coincide with singular homology and cohomology. Homology and cohomology with integer coefficients. Spectrum: H (Eilenberg–MacLane spectrum of the integers.) Coefficient ring: πn(H) = Z if n = 0, 0 otherwise. The original homology theory. Homology and cohomology with rational (or real or complex) coefficients. Spectrum: HQ (Eilenberg–Mac " https://en.wikipedia.org/wiki/Arithmetic%20and%20geometric%20Frobenius,"In mathematics, the Frobenius endomorphism is defined in any commutative ring R that has characteristic p, where p is a prime number. Namely, the mapping φ that takes r in R to rp is a ring endomorphism of R. The image of φ is then Rp, the subring of R consisting of p-th powers. In some important cases, for example finite fields, φ is surjective. Otherwise φ is an endomorphism but not a ring automorphism. The terminology of geometric Frobenius arises by applying the spectrum of a ring construction to φ. This gives a mapping φ*: Spec(Rp) → Spec(R) of affine schemes. Even in cases where Rp = R this is not the identity, unless R is the prime field. Mappings created by fibre product with φ*, i.e. base changes, tend in scheme theory to be called geometric Frobenius. The reason for a careful terminology is that the Frobenius automorphism in Galois groups, or defined by transport of structure, is often the inverse mapping of the geometric Frobenius. As in the case of a cyclic group in which a generator is also the inverse of a generator, there are in many situations two possible definitions of Frobenius, and without a consistent convention some problem of a minus sign may appear." https://en.wikipedia.org/wiki/Gating%20signal,"Signal gating is a concept commonly used in the field of electronics and signal processing. It refers to the process of controlling the flow of signals based on certain conditions or criteria. The goal of signal gating is to selectively allow or block the transmission of signals through a circuit or system. In signal gating, a gating signal is used to modulate the passage of the main signal. The gating signal acts as a control mechanism, determining when the main signal can pass through the gate and when it is blocked. 
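Stepping back to the Catalan's-constant entry above, its quoted numerical value can be checked directly from the Dirichlet beta series β(2) = Σ (−1)ⁿ/(2n+1)²; the term count below is an arbitrary choice for this sketch.

```python
# Direct partial sum of the defining series for Catalan's constant.
terms = 100_000
G = sum((-1) ** n / (2 * n + 1) ** 2 for n in range(terms))
print(G)   # ~0.9159655941..., error bounded by the first omitted term (~2.5e-11)
```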
The gating signal can be generated by various means, such as an external trigger, a specific voltage level, or a specific frequency range. Signal gating is often employed in applications where precise control over the transmission of signals is required. Here are a few examples of how signal gating is used in different fields: 1. Telecommunications: In telecommunications systems, signal gating is used to regulate the flow of data packets. By opening and closing the gate based on specific criteria, such as error detection or network congestion, signal gating helps ensure that the data is transmitted efficiently and reliably. 2. Audio processing: In audio applications, signal gating is used to reduce background noise or eliminate unwanted sounds. For example, in live sound reinforcement, a noise gate is often employed to mute or attenuate the microphone signal when the sound level falls below a certain threshold. This helps minimize the pickup of ambient noise and unwanted signals. 3. Radar systems: Signal gating plays a crucial role in radar systems, particularly in pulse-Doppler radar. Gating is used to control the transmission and reception of radar pulses, allowing the system to focus on specific ranges or angles of interest while ignoring other signals. This helps improve target detection and reduces interference from unwanted reflections. 4. Medical imaging: Signal gating is utilized in medical imaging techniques like computed tomography (CT" https://en.wikipedia.org/wiki/Retrogradation%20%28starch%29,"Retrogradation is a reaction that takes place when the amylose and amylopectin chains in cooked, gelatinized starch realign themselves as the cooked starch cools. When native starch is heated and dissolved in water, the crystalline structure of amylose and amylopectin molecules is lost and they hydrate to form a viscous solution. If the viscous solution is cooled or left at lower temperature for a long enough period, the linear molecules, amylose, and linear parts of amylopectin molecules retrograde and rearrange themselves again to a more crystalline structure. The linear chains place themselves parallel and form hydrogen bridges. In viscous solutions the viscosity increases to form a gel. At temperatures between and , the aging process is enhanced drastically. Amylose crystallization occurs much faster than crystallization of the amylopectin. The crystal melting temperature of amylose is much higher (about ) than amylopectin (about ). The temperature range between cooking starch and storing in room temperature is optimum for amylose crystallization, and therefore amylose crystallization is responsible for the development of initial hardness of the starch gel. On the other hand, amylopectin has a narrower temperature range for crystallization as crystallization does not occur at a temperature higher than its melting temperature. Therefore, amylopectin is responsible for development of the long-term crystallinity and gel structure. Retrogradation can expel water from the polymer network. This process is known as syneresis. A small amount of water can be seen on top of the gel. Retrogradation is directly related to the staling or aging of bread. Retrograded starch is less digestible (see resistant starch). Chemical modification of starches can reduce or enhance the retrogradation. Waxy, high amylopectin, starches also have much less of a tendency to retrogradate. Additives such as fat, glucose, sodium nitrate and emulsifier can reduce retrogradation of starch. 
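A deliberately simplified sketch of the audio noise-gate idea mentioned in the signal-gating entry above: samples whose level falls below a threshold are attenuated. Real gates add envelope detection and attack/release smoothing; those refinements are omitted here.

```python
import numpy as np

def noise_gate(signal, threshold, floor_gain=0.0):
    """Pass samples at or above the threshold; attenuate the rest."""
    envelope = np.abs(signal)
    return np.where(envelope >= threshold, signal, floor_gain * signal)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t) * (t > 0.5)    # silence, then a tone
noisy = tone + 0.01 * rng.standard_normal(t.size)       # low-level background noise
gated = noise_gate(noisy, threshold=0.05)               # noise muted, tone kept
```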
" https://en.wikipedia.org/wiki/Diagnostic%20board,"In electronic systems a diagnostic board is a specialized device with diagnostic circuitry on a printed circuit board that connects to a computer or other electronic equipment replacing an existing module, or plugging into an expansion card slot. A multi-board electronic system such as a computer comprises multiple printed circuit boards or cards connected via connectors. When a fault occurs in the system, it is sometimes possible to isolate or identify the fault by replacing one of the boards with a diagnostic board. A diagnostic board can range from extremely simple to extremely sophisticated. Simple standard diagnostic plug-in boards for computers are available that display numeric codes to assist in identifying issues detected during the power-on self-test executed automatically during system startup. Dummy board A dummy board provides a minimal interface. This type of diagnostic board in intended to confirm that the interface is correctly implemented. For example, a PC motherboard manufacturer can test PCI functionality of a PC motherboard by connecting a dummy PCI board into each PCI slot on the motherboard Extender board An extender board (or board extender, card extender, extender card) is a simple circuit board that interposes between a card cage backplane and the circuit board of interest to physically 'extend' the circuit board of interest out from the card cage allowing access to both sides of the circuit board to connect diagnostic equipment such as an oscilloscope or systems analyzer. For example, a PCI extender board can be plugged into a PCI slot on a computer motherboard, and then a PCI card connected to the extender board to 'extend' the board into free space for access. This approach was common in the 1970s and 1980s particularly on S-100 bus systems. The concept can become unworkable when signal timing is affected by the length of the signal paths on the diagnostic board, as well as introducing Radio Frequency Interference (RFI) into the ci" https://en.wikipedia.org/wiki/System%20context%20diagram,"A system context diagram in engineering is a diagram that defines the boundary between the system, or part of a system, and its environment, showing the entities that interact with it. This diagram is a high level view of a system. It is similar to a block diagram. Overview System context diagrams show a system, as a whole and its inputs and outputs from/to external factors. According to Kossiakoff and Sweet (2011): System context diagrams are used early in a project to get agreement on the scope under investigation. Context diagrams are typically included in a requirements document. These diagrams must be read by all project stakeholders and thus should be written in plain language, so the stakeholders can understand items within the document. Building blocks Context diagrams can be developed with the use of two types of building blocks: Entities (Actors): labeled boxes; one in the center representing the system, and around it multiple boxes for each external actor Relationships: labeled lines between the entities and system For example, ""customer places order."" Context diagrams can also use many different drawing types to represent external entities. They can use ovals, stick figures, pictures, clip art or any other representation to convey meaning. Decision trees and data storage are represented in system flow diagrams. 
A context diagram can also list the classifications of the external entities as one of a set of simple categories (Examples:), which add clarity to the level of involvement of the entity with regards to the system. These categories include: Active: Dynamic to achieve some goal or purpose (Examples: ""Article readers"" or ""customers""). Passive: Static external entities which infrequently interact with the system (Examples: ""Article editors"" or ""database administrator""). Cooperative: Predictable external entities which are used by the system to bring about some desired outcome (Examples: ""Internet service providers"" or ""shipping companie" https://en.wikipedia.org/wiki/Dn42,"dn42 is a decentralized peer-to-peer network built using VPNs and software/hardware BGP routers. While other darknets try to establish anonymity for their participants, that is not what dn42 aims for. It is a network to explore routing technologies used in the Internet and tries to establish direct non-NAT-ed connections between the members. The network is not fully meshed. dn42 uses mostly tunnels instead of physical links between the individual networks. Each participant is connected to one or more other participants. Over the VPN or the physical links, BGP is used for inter AS routing. While OSPF is the most commonly used protocol for intra AS routing, each participant is free to choose any other IGP, like Babel, inside their AS. History The DN42 project grew out of the popular PeerIX project started by HardForum members in mid-2009. The PeerIX project, while small in initial numbers grew to over 50 active members with a backlog of 100 requests to join the network. Ultimately the project was unable to meet the demand of user scale and eventually deprecated (though many of the core member team still have their networks online.) The founding members of the DN42 project tried to unsuccessfully rekindle the PeerIX project(through the private google group) and instead formed their own IPv6 only network, successfully scaling it to the size it is today. Technical setup Address space Network address space for IPv4 consists of private subnets: 172.20.0.0/14 is the main subnet. Note that other private address ranges may also be announced in dn42, as the network is interconnected with other similar projects. Most notably, ChaosVPN uses 172.31.0.0/16 and parts of 10.0.0.0/8, Freifunk ICVPN uses 10.0.0.0/8 and NeoNetwork uses 10.127.0.0/16. For IPv6, Unique Local Address (ULA, the IPv6 equivalent of private address range) (fd00::/8) are used. Please note that other network use IPv6 addresses in this range as well, including NeoNetwork's use of fd10:127::/32. AS nu" https://en.wikipedia.org/wiki/Microlithography,"Microlithography is a general name for any manufacturing process that can create a minutely patterned thin film of protective materials over a substrate, such as a silicon wafer, in order to protect selected areas of it during subsequent etching, deposition, or implantation operations. The term is normally used for processes that can reliably produce features of microscopic size, such as 10 micrometres or less. The term nanolithography may be used to designate processes that can produce nanoscale features, such as less than 100 nanometres. Microlithography is a microfabrication process that is extensively used in the semiconductor industry and also manufacture microelectromechanical systems. 
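A small helper following the dn42 entry above, testing whether an address sits in the project's main IPv4 range (172.20.0.0/14) or in the IPv6 ULA block (fd00::/8); the other interconnected ranges listed there are ignored for brevity.

```python
import ipaddress

DN42_V4 = ipaddress.ip_network("172.20.0.0/14")   # main dn42 IPv4 subnet
ULA_V6 = ipaddress.ip_network("fd00::/8")         # IPv6 Unique Local Addresses

def looks_like_dn42(addr):
    a = ipaddress.ip_address(addr)
    return a in (DN42_V4 if a.version == 4 else ULA_V6)

print(looks_like_dn42("172.22.10.1"))        # True  (inside 172.20.0.0/14)
print(looks_like_dn42("192.168.1.1"))        # False
print(looks_like_dn42("fd42:dead:beef::1"))  # True  (ULA space)
```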
Processes Specific microlithography processes include: Photolithography using light projected on a photosensitive material film (photoresist). Electron beam lithography, using a steerable electron beam. Nanoimprinting Interference lithography Magnetolithography Scanning probe lithography Surface-charge lithography Diffraction lithography These processes differ in speed and cost, as well as in the material they can be applied to and the range of feature sizes they can produce. For instance, while the size of features achievable with photolithography is limited by the wavelength of the light used, the technique is considerably faster and simpler than electron beam lithography, which can achieve much smaller features. Applications The main application for microlithography is fabrication of integrated circuits (""electronic chips""), such as solid-state memories and microprocessors. They can also be used to create diffraction gratings, microscope calibration grids, and other flat structures with microscopic details. See also Printed circuit board" https://en.wikipedia.org/wiki/Process%20gain,"In a spread-spectrum system, the process gain (or ""processing gain"") is the ratio of the spread (or RF) bandwidth to the unspread (or baseband) bandwidth. It is usually expressed in decibels (dB). For example, if a 1 kHz signal is spread to 100 kHz, the process gain expressed as a numerical ratio would be 100 kHz / 1 kHz = 100. Or in decibels, 10 log10(100) = 20 dB. Note that process gain does not reduce the effects of wideband thermal noise. It can be shown that a direct-sequence spread-spectrum (DSSS) system has exactly the same bit error behavior as a non-spread-spectrum system with the same modulation format. Thus, on an additive white Gaussian noise (AWGN) channel without interference, a spread system requires the same transmitter power as an unspread system, all other things being equal. Unlike a conventional communication system, however, a DSSS system does have a certain resistance against narrowband interference, as the interference is not subject to the process gain of the DSSS signal, and hence the signal-to-interference ratio is improved. In frequency modulation (FM), the processing gain can be expressed as where: Gp is the processing gain, Bn is the noise bandwidth, Δf is the peak frequency deviation, W is the sinusoidal modulating frequency. Signal processing" https://en.wikipedia.org/wiki/Pulse%20width,"The pulse width is a measure of the elapsed time between the leading and trailing edges of a single pulse of energy. The measure is typically used with electrical signals and is widely used in the fields of radar and power supplies. There are two closely related measures. The pulse repetition interval measures the time between the leading edges of two pulses but is normally expressed as the pulse repetition frequency (PRF), the number of pulses in a given time, typically a second. The duty cycle expresses the pulse width as a fraction or percentage of one complete cycle. Pulse width is an important measure in radar systems. Radars transmit pulses of radio frequency energy out of an antenna and then listen for their reflection off of target objects. The amount of energy that is returned to the radar receiver is a function of the peak energy of the pulse, the pulse width, and the pulse repetition frequency. Increasing the pulse width increases the amount of energy reflected off the target and thereby increases the range at which an object can be detected.
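The process-gain definition above reduces to a one-line calculation; the function below simply restates the bandwidth ratio in decibels and reproduces the 1 kHz to 100 kHz example from that entry.

```python
import math

def process_gain_db(spread_bandwidth_hz, baseband_bandwidth_hz):
    """Spread-spectrum process gain: spread/unspread bandwidth ratio, in dB."""
    return 10 * math.log10(spread_bandwidth_hz / baseband_bandwidth_hz)

print(process_gain_db(100e3, 1e3))   # 20.0 dB, i.e. a ratio of 100
```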
Radars measure range based on the time between transmission and reception, and the resolution of that measurement is a function of the length of the received pulse. This leads to the basic outcome that increasing the pulse width allows the radar to detect objects at longer range but at the cost of decreasing the accuracy of that range measurement. This can be addressed by encoding the pulse with additional information, as is the case in pulse compression systems. In modern switched-mode power supplies, the voltage of the output electrical power is controlled by rapidly switching a fixed-voltage source on and off and then smoothing the resulting stepped waveform. Increasing the pulse width increases the output voltage. This allows complex output waveforms to be constructed by rapidly changing the pulse width to produce the desired signal, a concept known as pulse-width modulation." https://en.wikipedia.org/wiki/Passivation%20%28chemistry%29,"In physical chemistry and engineering, passivation is coating a material so that it becomes ""passive"", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called ""fouling"", increases the circuit resistance so it interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis. When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the ""native oxide layer"") or a nitride, that serves as a passivation layer. In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. (In contrast, metals such as iron oxidize readily to form a rough porous coating of rust that adheres loosely and sloughs off readily, allowing further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years. In the context of the semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to e" https://en.wikipedia.org/wiki/Holborn%209100,"The Holborn 9100 was a personal computer introduced in 1981 by a small Dutch company called Holborn, designed by H.A. Polak. Very few of these devices were sold with Holborn going into bankruptcy on the 27 April 1983. The 9100 base module is a server, and 9120 is a terminal. 
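As a first-order illustration of the switched-mode relationship described above (output rises with pulse width), the smoothed output of an ideal on/off switch is the supply voltage scaled by the duty cycle; the numbers below are arbitrary.

```python
def smoothed_output_voltage(v_supply, pulse_width_s, period_s):
    """Ideal average output of an on/off switch: duty cycle times supply voltage."""
    duty_cycle = pulse_width_s / period_s
    return duty_cycle * v_supply

print(smoothed_output_voltage(12.0, 25e-6, 100e-6))   # 3.0 V at 25% duty cycle
```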
Peripherals 30MB Hard Disk drive Light pen" https://en.wikipedia.org/wiki/Datakit,"Datakit is a virtual circuit switch which was developed by Sandy Fraser at Bell Labs for both local-area and wide-area networks, and in widespread deployment by the Regional Bell Operating Companies (RBOCs). Datakit uses a cell relay protocol similar to Asynchronous Transfer Mode. Datakit is a connection-oriented switch, with all packets for a particular call traveling through the network over the same virtual circuit. Datakit networks are still in widespread use by the major telephone companies in the United States. Interfaces to these networks include TCP/IP and UDP, X.25, asynchronous protocols and several synchronous protocols, such as SDLC, HDLC, Bisync and others. These networks support host to terminal traffic and vice versa, host-to-host traffic, file transfers, remote login, remote printing, and remote command execution. At the physical layer, it can operate over multiple media, from slow speed EIA-232 to 500Mbit fiber optic links including 10/100 Megabit ethernet links. Most of Bell Laboratories was trunked together via Datakit networking. On top of Datakit transport service, several operating systems (including UNIX) implemented UUCP for electronic mail and dkcu for remote login. Datakit uses an adaptation protocol called Universal Receiver Protocol (URP) that spreads PDU overhead across multiple cells and performs immediate packet processing. URP assumes that cells arrive in order and may force retransmissions if not. The Information Systems Network (ISN) was the pre-version of Datakit that was supported by the former AT&T Information Systems. The ISN was a packet switching network that was built similar to digital System 75 platform. LAN and WAN applications with the use of what was referred to as a Concentrator that was connected via fiber optics up to 15 miles away from the main ISN. The speeds of these connections were very slow to today's standards, from 1200 to 5600 baud with most connections / end users on dumb terminals. The main support fo" https://en.wikipedia.org/wiki/Upload,"Uploading refers to transmitting data from one computer system to another through means of a network. Common methods of uploading include: uploading via web browsers, FTP clients], and terminals (SCP/SFTP). Uploading can be used in the context of (potentially many) clients that send files to a central server. While uploading can also be defined in the context of sending files between distributed clients, such as with a peer-to-peer (P2P) file-sharing protocol like BitTorrent, the term file sharing is more often used in this case. Moving files within a computer system, as opposed to over a network, is called file copying. Uploading directly contrasts with downloading, where data is received over a network. In the case of users uploading files over the internet, uploading is often slower than downloading as many internet service providers (ISPs) offer asymmetric connections, which offer more network bandwidth for downloading than uploading. Definition To transfer something (such as data or files), from a computer or other digital device to the memory of another device (such as a larger or remote computer) especially via the internet. Historical development Remote file sharing first came into fruition in January 1978, when Ward Christensen and Randy Suess, who were members of the Chicago Area Computer Hobbyists' Exchange (CACHE), created the Computerized Bulletin Board System (CBBS). 
This used an early file transfer protocol (MODEM, later XMODEM) to send binary files via a hardware modem, accessible by another modem via a telephone number. In the following years, new protocols such as Kermit were released, until the File Transfer Protocol (FTP) was standardized 1985 (). FTP is based on TCP/IP and gave rise to many FTP clients, which, in turn, gave users all around the world access to the same standard network protocol to transfer data between devices. The transfer of data saw a significant increase in popularity after the release of the World Wide Web in 1991, wh" https://en.wikipedia.org/wiki/Seven%20Solutions,"Seven Solutions is a Spanish hardware technology company headquartered in Granada, Spain, that developed the first white rabbit element on The White Rabbit Project which it was the White Rabbit Switch to use the Precision Time Protocol (PTP) in real application as networking. Seven Solutions got involved on it with the design, manufacture, testing and support. This project was financed by The government of Spain and CERN. Through this project Seven Solution demonstrated a high performance enhanced PTP switch with sub-ns accuracy." https://en.wikipedia.org/wiki/Inductive%20coupling,"In electrical engineering, two conductors are said to be inductively coupled or magnetically coupled when they are configured in a way such that change in current through one wire induces a voltage across the ends of the other wire through electromagnetic induction. A changing current through the first wire creates a changing magnetic field around it by Ampere's circuital law. The changing magnetic field induces an electromotive force (EMF) voltage in the second wire by Faraday's law of induction. The amount of inductive coupling between two conductors is measured by their mutual inductance. The coupling between two wires can be increased by winding them into coils and placing them close together on a common axis, so the magnetic field of one coil passes through the other coil. Coupling can also be increased by a magnetic core of a ferromagnetic material like iron or ferrite in the coils, which increases the magnetic flux. The two coils may be physically contained in a single unit, as in the primary and secondary windings of a transformer, or may be separated. Coupling may be intentional or unintentional. Unintentional inductive coupling can cause signals from one circuit to be induced into a nearby circuit, this is called cross-talk, and is a form of electromagnetic interference. An inductively coupled transponder consists of a solid state transceiver chip connected to a large coil that functions as an antenna. When brought within the oscillating magnetic field of a reader unit, the transceiver is powered up by energy inductively coupled into its antenna and transfers data back to the reader unit inductively. Magnetic coupling between two magnets can also be used to mechanically transfer power without contact, as in the magnetic gear. Uses Inductive coupling is widely used throughout electrical technology; examples include: Electric motors and generators Inductive charging products Induction cookers and induction heating systems Induction loop " https://en.wikipedia.org/wiki/Gravitational%20contact%20terms,"In quantum field theory, a contact term is a radiatively induced point-like interaction. These typically occur when the vertex for the emission of a massless particle such as a photon, a graviton, or a gluon, is proportional to (the invariant momentum of the radiated particle). 
This factor cancels the 1/q² of the Feynman propagator, and causes the exchange of the massless particle to produce a point-like δ-function effective interaction, rather than the usual long-range potential. A notable example occurs in the weak interactions, where a W-boson radiative correction to a gluon vertex produces a q² term, leading to what is known as a ""penguin"" interaction. The contact term then generates a correction to the full action of the theory. Contact terms occur in gravity when there are non-minimal interactions, in which a field couples directly to the Ricci scalar R, as in Brans-Dicke theory, where a scalar field multiplies R in the action. The non-minimal couplings are quantum equivalent to an ""Einstein frame,"" with a pure Einstein-Hilbert action, owing to gravitational contact terms. These arise classically from graviton exchange interactions. The contact terms are an essential, yet hidden, part of the action and, if they are ignored, the Feynman diagram loops in different frames yield different results. At leading order, including the contact terms is equivalent to performing a Weyl transformation to remove the non-minimal couplings and taking the theory to the Einstein-Hilbert form. In this sense, the Einstein-Hilbert form of the action is unique and ""frame ambiguities"" in loop calculations do not exist." https://en.wikipedia.org/wiki/Ethernet%20over%20USB,"Ethernet over USB is the use of a USB link as a part of an Ethernet network, resulting in an Ethernet connection over USB (instead of e.g. PCI or PCIe). USB over Ethernet (also called USB over Network or USB over IP) is a system to share USB-based devices over Ethernet, Wi-Fi, or the Internet, allowing access to devices over a network. It can be done across multiple network devices by using USB over Ethernet Hubs. Protocols There are numerous protocols for Ethernet-style networking over USB. These protocols allow application-independent exchange of data with USB devices, in place of specialized protocols such as video or MTP (Media Transfer Protocol). Even though USB is not a physical Ethernet, the networking stacks of all major operating systems are set up to transport IEEE 802.3 frames, without needing a particular underlying transport. The main industry protocols are (in chronological order): Remote NDIS (RNDIS, a Microsoft vendor protocol), Ethernet Control Model (ECM), Ethernet Emulation Model (EEM), and Network Control Model (NCM). The latter three are part of the larger Communications Device Class (CDC) group of protocols of the USB Implementers Forum (USB-IF). They are available for download from the USB-IF (see below). The RNDIS specification is available from Microsoft's web site. Regarding de facto standards, some standards, such as ECM, specify use of USB resources that early systems did not have. However, minor modifications of the standard subsets make practical implementations possible on such platforms. Remarkably, even some of the most modern platforms need minor accommodations and therefore support for these subsets is still needed. Of these protocols, ECM could be classified as the simplest—frames are simply sent and received without modification one at a time. This was a satisfactory strategy for USB 1.1 systems (current when the protocol was issued) with 64 byte packets but not for USB 2.0 systems which use 512 byte packet" https://en.wikipedia.org/wiki/List%20of%20types%20of%20interferometers,"An interferometer is a device for extracting information from the superposition of multiple waves.
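In the simplest two-beam case, the quantity such instruments read out is the intensity of the superposed waves; as orientation for the list that follows (a generic textbook expression, not specific to any entry below):

\[
I = I_{1} + I_{2} + 2\sqrt{I_{1} I_{2}}\,\cos\Delta\varphi,
\]

where \Delta\varphi is the phase difference between the two interfering beams.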
Field and linear interferometers Air-wedge shearing interferometer Astronomical interferometer / Michelson stellar interferometer Classical interference microscopy Bath interferometer (common path) Cyclic interferometer Diffraction-grating interferometer (white light) Double-slit interferometer Dual-polarization interferometry Fabry–Pérot interferometer Fizeau interferometer Fourier-transform interferometer Fresnel interferometer (e.g. Fresnel biprism, Fresnel mirror or Lloyd's mirror) Fringes of Equal Chromatic Order interferometer (FECO) Gabor hologram Gires–Tournois etalon Heterodyne interferometer (see heterodyne) Holographic interferometer Jamin interferometer Laser Doppler vibrometer Linnik interferometer (microscopy) LUPI variant of Michelson Lummer–Gehrcke interferometer Mach–Zehnder interferometer Martin–Puplett interferometer Michelson interferometer Mirau interferometer (also known as a Mirau objective) (microscopy) Moiré interferometer (see moiré pattern) Multi-beam interferometer (microscopy) Near-field interferometer Newton interferometer (see Newton's rings) Nomarski interferometer Nonlinear Michelson interferometer / Step-phase Michelson interferometer N-slit interferometer Phase-shifting interferometer Planar lightwave circuit interferometer (PLC) Photon Doppler velocimeter interferometer (PDV) Polarization interferometer (see also Babinet–Soleil compensator) Point diffraction interferometer Rayleigh interferometer Sagnac interferometer Schlieren interferometer (phase-shifting) Shearing interferometer (lateral and radial) Twyman–Green interferometer Talbot–Lau interferometer Watson interferometer (microscopy) White-light interferometer (see also Optical coherence tomography, White light interferometry, and Coherence Scanning Interferometry) White-light scatterplate interferometer (white-light) (microscopy) Young's double-slit interferometer Zernik" https://en.wikipedia.org/wiki/First-order%20hold,"First-order hold (FOH) is a mathematical model of the practical reconstruction of sampled signals that could be done by a conventional digital-to-analog converter (DAC) and an analog circuit called an integrator. For FOH, the signal is reconstructed as a piecewise linear approximation to the original signal that was sampled. A mathematical model such as FOH (or, more commonly, the zero-order hold) is necessary because, in the sampling and reconstruction theorem, a sequence of Dirac impulses, xs(t), representing the discrete samples, x(nT), is low-pass filtered to recover the original signal that was sampled, x(t). However, outputting a sequence of Dirac impulses is impractical. Devices can be implemented, using a conventional DAC and some linear analog circuitry, to reconstruct the piecewise linear output for either predictive or delayed FOH. Even though this is not what is physically done, an identical output can be generated by applying the hypothetical sequence of Dirac impulses, xs(t), to a linear time-invariant system, otherwise known as a linear filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct piecewise linear function in the output. Basic first-order hold First-order hold is the hypothetical filter or LTI system that converts the ideally sampled signal {| |- | | |- | | |} to the piecewise linear signal resulting in an effective impulse response of where is the triangular function. 
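Numerically, the first-order-hold reconstruction described above amounts to drawing straight-line segments between successive samples. A minimal sketch, assuming NumPy; the function name and test signal are illustrative:

import numpy as np

def foh_reconstruct(samples, T, t):
    # Piecewise-linear (first-order-hold) reconstruction: np.interp joins the
    # points (nT, x(nT)) with straight lines and evaluates them at times t.
    n = np.arange(len(samples))
    return np.interp(t, n * T, samples)

# Example: reconstruct a sampled sine wave on a finer time grid
T = 0.1
x = np.sin(2 * np.pi * np.arange(0, 1, T))   # samples x(nT)
t_fine = np.linspace(0, 0.9, 200)
x_hat = foh_reconstruct(x, T, t_fine)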
The effective frequency response is the continuous Fourier transform of the impulse response. {| |- | | |- | | |- | | |} where is the normalized sinc function. The Laplace transform transfer function of FOH is found by substituting s = i 2 π f: {| |- | | |- | | |} This is an acausal system in that the linear interpolation function moves toward the value of the next sample before such sample is applied to the hypothetical FOH filter. Delayed first-order ho" https://en.wikipedia.org/wiki/Magnetofection,"Magnetofection is a transfection method that uses magnetic fields to concentrate particles containing vectors to target cells in the body. Magnetofection has been adapted to a variety of vectors, including nucleic acids, non-viral transfection systems, and viruses. This method offers advantages such as high transfection efficiency and biocompatibility which are balanced with limitations. Mechanism Principle The term magnetofection, currently trademarked by the company OZ Biosciences, combines the words magnetic and transfection. Magnetofection uses nucleic acids associated with magnetic nanoparticles. These molecular complexes are then concentrated and transported into cells using an applied magnetic field. Synthesis The magnetic nanoparticles are typically made from iron oxide, which is fully biodegradable, using methods such as coprecipitation or microemulsion. The nanoparticles are then combined with gene vectors (DNA, siRNA, ODN, virus, etc.). One method involves linking viral particles to magnetic particles using an avidin-biotin interaction. Viruses can also bind to the nanoparticles via hydrophobic interaction. Another synthesis method involves coating magnetic nanoparticles with cationic lipids or polymers via salt-induced aggregation. For example, nanoparticles may be conjugated with the polyethylenimine (PEI), a positively charged polymer used commonly as a transfection agent. The PEI solution must have a high pH during synthesis to encourage high gene expression. The positively charged nanoparticles can then associate with negatively charged nucleic acids via electrostatic interaction. Cellular uptake Magnetic particles loaded with vectors are concentrated on the target cells by the influence of an external magnetic field. The cells then take up genetic material naturally via endocytosis and pinocytosis. Consequently, membrane architecture and structure stays intact, in contrast to other physical transfection methods such as electroporation or ge" https://en.wikipedia.org/wiki/Rolled%20oats,"Rolled oats are a type of lightly processed whole-grain food. They are made from oat groats that have been dehusked and steamed, before being rolled into flat flakes under heavy rollers and then stabilized by being lightly toasted. Thick-rolled oats usually remain unbroken during processing, while thin-rolled oats often become fragmented. Rolled whole oats, without further processing, can be cooked into a porridge and eaten as old-fashioned oats or Scottish oats; when the oats are rolled thinner and steam-cooked more in the factory, they will later absorb water much more easily and cook faster into a porridge, and when processed this way are sometimes called ""quick"" or ""instant"" oats. Rolled oats are most often the main ingredient in granola and muesli. They can be further processed into a coarse powder, which breaks down to nearly a liquid consistency when boiled. Cooked oatmeal powder is often used as baby food. 
Process The oat, like other cereals, has a hard, inedible outer husk that must be removed before the grain can be eaten. After the outer husk (or chaff) has been removed from the still bran-covered oat grains, the remainder is called oat groats. Since the bran layer, though nutritious, makes the grains tougher to chew and contains an enzyme that can cause the oats to go rancid, raw oat groats are often further steam-treated to soften them for a quicker cooking time and to denature the enzymes for a longer shelf life. Steel-cut or pinhead oats Steel-cut oats (sometimes called ""pinhead oats"", especially if cut small) are oat groats that have been chopped by a sharp-bladed machine before any steaming, and thus retain bits of the bran layer. Preparation Rolled oats can be eaten without further heating or cooking, if they are soaked for 1–6 hours in water-based liquid, such as water, milk, or plant-based dairy substitutes. The required soaking duration depends on shape, size and pre-processing technique. Whole oat groats can be cooked as a breakfast ce" https://en.wikipedia.org/wiki/Mathematics%20Genealogy%20Project,"The Mathematics Genealogy Project (MGP) is a web-based database for the academic genealogy of mathematicians. it contained information on 274,575 mathematical scientists who contributed to research-level mathematics. For a typical mathematician, the project entry includes graduation year, thesis title (in its Mathematics Subject Classification), alma mater, doctoral advisor, and doctoral students. Origin of the database The project grew out of founder Harry Coonce's desire to know the name of his advisor's advisor. Coonce was Professor of Mathematics at Minnesota State University, Mankato, at the time of the project's founding, and the project went online there in fall 1997. Coonce retired from Mankato in 1999, and in fall 2002 the university decided that it would no longer support the project. The project relocated at that time to North Dakota State University. Since 2003, the project has also operated under the auspices of the American Mathematical Society and in 2005 it received a grant from the Clay Mathematics Institute. Harry Coonce has been assisted by Mitchel T. Keller, Assistant Professor at Morningside College. Keller is currently the Managing Director of the project. Mission and scope The Mathematics Genealogy Mission statement: ""Throughout this project when we use the word 'mathematics' or 'mathematician' we mean that word in a very inclusive sense. Thus, all relevant data from statistics, computer science, philosophy or operations research is welcome."" Scope The genealogy information is obtained from sources such as Dissertation Abstracts International and Notices of the American Mathematical Society, but may be supplied by anyone via the project's website. The searchable database contains the name of the mathematician, university which awarded the degree, year when the degree was awarded, title of the dissertation, names of the advisor and second advisor, a flag of the country where the degree was awarded, a listing of doctoral students, and a cou" https://en.wikipedia.org/wiki/Dissection,"Dissection (from Latin ""to cut to pieces""; also called anatomization) is the dismembering of the body of a deceased animal or plant to study its anatomical structure. Autopsy is used in pathology and forensic medicine to determine the cause of death in humans. 
Less extensive dissection of plants and smaller animals preserved in a formaldehyde solution is typically carried out or demonstrated in biology and natural science classes in middle school and high school, while extensive dissections of cadavers of adults and children, both fresh and preserved are carried out by medical students in medical schools as a part of the teaching in subjects such as anatomy, pathology and forensic medicine. Consequently, dissection is typically conducted in a morgue or in an anatomy lab. Dissection has been used for centuries to explore anatomy. Objections to the use of cadavers have led to the use of alternatives including virtual dissection of computer models. In the field of surgery, the term ""dissection"" or ""dissecting"" means more specifically to the practice of separating an anatomical structure (an organ, nerve or blood vessel) from its surrounding connective tissue in order to minimize unwanted damage during a surgical procedure. Overview Plant and animal bodies are dissected to analyze the structure and function of its components. Dissection is practised by students in courses of biology, botany, zoology, and veterinary science, and sometimes in arts studies. In medical schools, students dissect human cadavers to learn anatomy. Zoötomy is sometimes used to describe ""dissection of an animal"". Human dissection A key principle in the dissection of human cadavers (sometimes called androtomy) is the prevention of human disease to the dissector. Prevention of transmission includes the wearing of protective gear, ensuring the environment is clean, dissection technique and pre-dissection tests to specimens for the presence of HIV and hepatitis viruses. Specimens are dissected" https://en.wikipedia.org/wiki/Data%20acquisition,"Data acquisition is the process of sampling signals that measure real-world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems, abbreviated by the acronyms DAS, DAQ, or DAU, typically convert analog waveforms into digital values for processing. The components of data acquisition systems include: Sensors, to convert physical parameters to electrical signals. Signal conditioning circuitry, to convert sensor signals into a form that can be converted to digital values. Analog-to-digital converters, to convert conditioned sensor signals to digital values. Data acquisition applications are usually controlled by software programs developed using various general purpose programming languages such as Assembly, BASIC, C, C++, C#, Fortran, Java, LabVIEW, Lisp, Pascal, etc. Stand-alone data acquisition systems are often called data loggers. There are also open-source software packages providing all the necessary tools to acquire data from different, typically specific, hardware equipment. These tools come from the scientific community where complex experiment requires fast, flexible, and adaptable software. Those packages are usually custom-fit but more general DAQ packages like the Maximum Integrated Data Acquisition System can be easily tailored and are used in several physics experiments. History In 1963, IBM produced computers that specialized in data acquisition. These include the IBM 7700 Data Acquisition System, and its successor, the IBM 1800 Data Acquisition and Control System. These expensive specialized systems were surpassed in 1974 by general-purpose S-100 computers and data acquisition cards produced by Tecmar/Scientific Solutions Inc. 
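Whatever the era of the hardware, the chain itself keeps the shape listed above: a sensor produces an electrical signal, signal conditioning scales it, and an analog-to-digital converter turns it into a numeric value. A schematic Python sketch with purely illustrative names and values:

def condition(voltage, gain=10.0, offset=0.0):
    # Signal conditioning: scale and shift the raw sensor voltage into the ADC range
    return gain * voltage + offset

def adc(voltage, v_ref=5.0, bits=12):
    # Ideal analog-to-digital converter: quantize a 0..v_ref voltage to an integer code
    code = round(voltage / v_ref * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))

raw = 0.231                     # e.g. a sensor output in volts (illustrative)
sample = adc(condition(raw))    # digital value that software can process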
In 1981 IBM introduced the IBM Personal Computer and Scientific Solutions introduced the first PC data acquisition products. Methodology Sources and systems Data acquisition begins with the physical phenomenon or physical prop" https://en.wikipedia.org/wiki/Service%20Data%20Objects,"Service Data Objects is a technology that allows heterogeneous data to be accessed in a uniform way. The SDO specification was originally developed in 2004 as a joint collaboration between Oracle (BEA) and IBM and approved by the Java Community Process in JSR 235. Version 2.0 of the specification was introduced in November 2005 as a key part of the Service Component Architecture. Relation to other technologies Originally, the technology was known as Web Data Objects, or WDO, and was shipped in IBM WebSphere Application Server 5.1 and IBM WebSphere Studio Application Developer 5.1.2. Other similar technologies are JDO, EMF, JAXB and ADO.NET. Design Service Data Objects denote the use of language-agnostic data structures that facilitate communication between structural tiers and various service-providing entities. They require the use of a tree structure with a root node and provide traversal mechanisms (breadth/depth-first) that allow client programs to navigate the elements. Objects can be static (fixed number of fields) or dynamic with a map-like structure allowing for unlimited fields. The specification defines meta-data for all fields and each object graph can also be provided with change summaries that can allow receiving programs to act more efficiently on them. Developers The specification is now being developed by IBM, Rogue Wave, Oracle, SAP, Siebel, Sybase, Xcalia, Software AG within the OASIS Member Section Open CSA since April 2007. Collaborative work and materials remain on the collaboration platform of Open SOA, an informal group of actors of the industry. Implementations The following SDO products are available: Rogue Wave Software HydraSDO Xcalia (for Java and .Net) Oracle (Data Service Integrator) IBM (Virtual XML Garden) IBM (WebSphere Process Server) There are open source implementations of SDO from: The Eclipse Persistence Services Project (EclipseLink) The Apache Tuscany project for Java and C++ The fcl-sdo library included with " https://en.wikipedia.org/wiki/Mathematics%20and%20architecture,"Mathematics and architecture are related, since, as with other arts, architects use mathematics for several reasons. Apart from the mathematics needed when engineering buildings, architects use geometry: to define the spatial form of a building; from the Pythagoreans of the sixth century BC onwards, to create forms considered harmonious, and thus to lay out buildings and their surroundings according to mathematical, aesthetic and sometimes religious principles; to decorate buildings with mathematical objects such as tessellations; and to meet environmental goals, such as to minimise wind speeds around the bases of tall buildings. In ancient Egypt, ancient Greece, India, and the Islamic world, buildings including pyramids, temples, mosques, palaces and mausoleums were laid out with specific proportions for religious reasons. In Islamic architecture, geometric shapes and geometric tiling patterns are used to decorate buildings, both inside and outside. Some Hindu temples have a fractal-like structure where parts resemble the whole, conveying a message about the infinite in Hindu cosmology. In Chinese architecture, the tulou of Fujian province are circular, communal defensive structures. 
In the twenty-first century, mathematical ornamentation is again being used to cover public buildings. In Renaissance architecture, symmetry and proportion were deliberately emphasized by architects such as Leon Battista Alberti, Sebastiano Serlio and Andrea Palladio, influenced by Vitruvius's De architectura from ancient Rome and the arithmetic of the Pythagoreans from ancient Greece. At the end of the nineteenth century, Vladimir Shukhov in Russia and Antoni Gaudí in Barcelona pioneered the use of hyperboloid structures; in the Sagrada Família, Gaudí also incorporated hyperbolic paraboloids, tessellations, catenary arches, catenoids, helicoids, and ruled surfaces. In the twentieth century, styles such as modern architecture and Deconstructivism explored different geometries to achi" https://en.wikipedia.org/wiki/Gracility,"Gracility is slenderness, the condition of being gracile, which means slender. It derives from the Latin adjective gracilis (masculine or feminine), or gracile (neuter), which in either form means slender, and when transferred for example to discourse takes the sense of ""without ornament"", ""simple"" or various similar connotations. In Glossary of Botanic Terms, B. D. Jackson speaks dismissively of an entry in earlier dictionary of A. A. Crozier as follows: ""Gracilis (Lat.), slender. Crozier has the needless word 'gracile'"". However, his objection would be hard to sustain in current usage; apart from the fact that gracile is a natural and convenient term, it is hardly a neologism. The Shorter Oxford English Dictionary gives the source date for that usage as 1623 and indicates the word is misused (through association with grace) for ""gracefully slender"". This misuse is unfortunate at least, because the terms gracile and grace are unrelated: the etymological root of grace is the Latin word gratia from gratus, meaning 'pleasing', and has nothing to do with slenderness or thinness. In biology In biology, the term is in common use, whether as English or Latin: The term gracile—and its opposite, robust—occur in discussion of the morphology of various hominids for example. The gracile fasciculus is a particular bundle of axon fibres in the spinal cord The gracile nucleus is a particular structure of neurons in the medulla oblongata ""GRACILE syndrome"", is associated with a BCS1L mutation In biological taxonomy, gracile is the specific name or specific epithet for various species. Where the gender is appropriate, the form is gracilis. Examples include: Campylobacter gracilis, a species of bacterium implicated in foodborne disease Ctenochasma gracile, a late Jurassic pterosaur Eriophorum gracile, a species of sedge, Cyperaceae Euglena gracilis, a unicellular flagellate protist Hydrophis gracilis, a species of sea snakes Melampodium gracile, a flowering plant s" https://en.wikipedia.org/wiki/List%20of%20laser%20articles,"This is a list of laser topics. 
A 3D printing, additive manufacturing Abnormal reflection Above-threshold ionization Absorption spectroscopy Accelerator physics Acoustic microscopy Acousto-optic deflector Acousto-optic modulator Acousto-optical spectrometer Acousto-optics Active laser medium Active optics Advanced Precision Kill Weapon System Advanced Tactical Laser Afocal system Airborne laser Airborne wind turbine Airy beam ALKA All gas-phase iodine laser Ambient ionization Amplified spontaneous emission Analytical chemistry Aneutronic fusion Antiproton Decelerator Apache Arrowhead Apache Point Observatory Lunar Laser-ranging Operation Arago spot Argon fluoride laser Argus laser Asterix IV laser Astrophysical maser Atmospheric-pressure laser ionization Atom interferometer Atom laser Atom probe Atomic clock Atomic coherence Atomic fountain Atomic line filter Atomic ratio Atomic spectroscopy Atomic vapor laser isotope separation Audience scanning Autler–Townes effect Autologous patient-specific tumor antigen response Automated guided vehicle Autonomous cruise control system Avalanche photodiode Axicon B Babinet's principle Ballistic photon Bandwidth-limited pulse Bandwidth (signal processing) Barcode reader Basir Beam-powered propulsion Beam diameter Beam dump Beam expander Beam homogenizer Beam parameter product Beamz Big Bang Observer Biophotonics Biosensor Black silicon Blood irradiation therapy Blu-ray Disc Blue laser Boeing Laser Avenger Boeing NC-135 Boeing YAL-1 Bubblegram C CLidar CALIPSO, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations Calligraphic projection Calutron Carbon dioxide laser Carrier generation and recombination Catastrophic optical damage Cauterization Cavity ring-down laser absorption spectroscopy Ceilometer Chaos in optical systems Chemical laser Chemical oxygen iodine laser Chirped mirror Chirped pulse amplification Clementine (sp" https://en.wikipedia.org/wiki/Memory%20disambiguation,"Memory disambiguation is a set of techniques employed by high-performance out-of-order execution microprocessors that execute memory access instructions (loads and stores) out of program order. The mechanisms for performing memory disambiguation, implemented using digital logic inside the microprocessor core, detect true dependencies between memory operations at execution time and allow the processor to recover when a dependence has been violated. They also eliminate spurious memory dependencies and allow for greater instruction-level parallelism by allowing safe out-of-order execution of loads and stores. Background Dependencies When attempting to execute instructions out of order, a microprocessor must respect true dependencies between instructions. For example, consider a simple true dependence: 1: add $1, $2, $3 # R1 <= R2 + R3 2: add $5, $1, $4 # R5 <= R1 + R4 (dependent on 1) In this example, the add instruction on line 2 is dependent on the add instruction on line 1 because the register R1 is a source operand of the addition operation on line 2. The add on line 2 cannot execute until the add on line 1 completes. In this case, the dependence is static and easily determined by a microprocessor, because the sources and destinations are registers. The destination register of the add instruction on line 1 (R1) is part of the instruction encoding, and so can be determined by the microprocessor early on, during the decode stage of the pipeline. Similarly, the source registers of the add instruction on line 2 (R1 and R4) are also encoded into the instruction itself and are determined in decode. 
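By contrast, dependencies that pass through memory are generally not visible at decode time. In the same pseudo-assembly as the example above (a hypothetical illustration, not from the original text):

1: store $1, 0($2)  # Mem[R2 + 0] <= R1
2: load  $3, 0($4)  # R3 <= Mem[R4 + 0]  (depends on 1 only if R2 + 0 == R4 + 0)

Whether the load on line 2 depends on the store on line 1 is known only once the address registers R2 and R4 have been computed at execution time, unlike the register dependence above, which is fully determined at decode.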
To respect this true dependence, the microprocessor's scheduler logic will issue these instructions in the correct order (instruction 1 first, followed by instruction 2) so that the results of 1 are available when instruction 2 needs them. Complications arise when the dependence is not statically determinable. Such non-static dependencies arise with mem" https://en.wikipedia.org/wiki/Sister%20group,"In phylogenetics, a sister group or sister taxon, also called an adelphotaxon, comprises the closest relative(s) of another given unit in an evolutionary tree. Definition The expression is most easily illustrated by a cladogram: Taxon A and taxon B are sister groups to each other. Taxa A and B, together with any other extant or extinct descendants of their most recent common ancestor (MRCA), form a monophyletic group, the clade AB. Clade AB and taxon C are also sister groups. Taxa A, B, and C, together with all other descendants of their MRCA form the clade ABC. The whole clade ABC is itself a subtree of a larger tree which offers yet more sister group relationships, both among the leaves and among larger, more deeply rooted clades. The tree structure shown connects through its root to the rest of the universal tree of life. In cladistic standards, taxa A, B, and C may represent specimens, species, genera, or any other taxonomic units. If A and B are at the same taxonomic level, terminology such as sister species or sister genera can be used. Example The term sister group is used in phylogenetic analysis, however, only groups identified in the analysis are labeled as ""sister groups"". An example is birds, whose commonly cited living sister group is the crocodiles, but that is true only when discussing extant organisms; when other, extinct groups are considered, the relationship between birds and crocodiles appears distant. Although the bird family tree is rooted in the dinosaurs, there were a number of other, earlier groups, such as the pterosaurs, that branched off of the line leading to the dinosaurs after the last common ancestor of birds and crocodiles. The term sister group must thus be seen as a relative term, with the caveat that the sister group is only the closest relative among the groups/species/specimens that are included in the analysis. Notes" https://en.wikipedia.org/wiki/List%20of%20Fourier-related%20transforms,"This is a list of linear transformations of functions related to Fourier analysis. Such transformations map a function to a set of coefficients of basis functions, where the basis functions are sinusoidal and are therefore strongly localized in the frequency spectrum. (These transforms are generally designed to be invertible.) In the case of the Fourier transform, each basis function corresponds to a single frequency component. Continuous transforms Applied to functions of continuous arguments, Fourier-related transforms include: Two-sided Laplace transform Mellin transform, another closely related integral transform Laplace transform Fourier transform, with special cases: Fourier series When the input function/waveform is periodic, the Fourier transform output is a Dirac comb function, modulated by a discrete sequence of finite-valued coefficients that are complex-valued in general. These are called Fourier series coefficients. The term Fourier series actually refers to the inverse Fourier transform, which is a sum of sinusoids at discrete frequencies, weighted by the Fourier series coefficients. 
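In symbols, for a waveform x(t) with period P (one standard convention; normalizations vary between authors):

\[
x(t) = \sum_{k=-\infty}^{\infty} c_{k}\, e^{i 2\pi k t / P},
\qquad
c_{k} = \frac{1}{P} \int_{P} x(t)\, e^{-i 2\pi k t / P}\, dt,
\]

with the c_k being the Fourier series coefficients just mentioned.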
When the non-zero portion of the input function has finite duration, the Fourier transform is continuous and finite-valued. But a discrete subset of its values is sufficient to reconstruct/represent the portion that was analyzed. The same discrete set is obtained by treating the duration of the segment as one period of a periodic function and computing the Fourier series coefficients. Sine and cosine transforms: When the input function has odd or even symmetry around the origin, the Fourier transform reduces to a sine or cosine transform. Hartley transform Short-time Fourier transform (or short-term Fourier transform) (STFT) Rectangular mask short-time Fourier transform Chirplet transform Fractional Fourier transform (FRFT) Hankel transform: related to the Fourier Transform of radial functions. Fourier–Bros–Iagolnitzer transform Linear canonical t" https://en.wikipedia.org/wiki/N-philes,"N-philes are group of radical molecules which are specifically attracted to the C=N bonds, defying often the selectivity rules of electrophilic attack. N-philes can often masquerade as electrophiles, where acyl radicals are excellent examples which interact with pi electrons of aryl groups." https://en.wikipedia.org/wiki/Trustworthy%20computing,"The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available, and reliable. It is particularly associated with the Microsoft initiative of the same name, launched in 2002. History Until 1995, there were restrictions on commercial traffic over the Internet. On, May 26, 1995, Bill Gates sent the ""Internet Tidal Wave"" memorandum to Microsoft executives assigning ""...the Internet this highest level of importance..."" but Microsoft's Windows 95 was released without a web browser as Microsoft had not yet developed one. The success of the web had caught them by surprise but by mid 1995, they were testing their own web server, and on August 24, 1995, launched a major online service, MSN. The National Research Council recognized that the rise of the Internet simultaneously increased societal reliance on computer systems while increasing the vulnerability of such systems to failure and produced an important report in 1999, ""Trust in Cyberspace"". This report reviews the cost of un-trustworthy systems and identifies actions required for improvement. Microsoft and Trustworthy Computing Bill Gates launched Microsoft's ""Trustworthy Computing"" initiative with a January 15, 2002 memo, referencing an internal whitepaper by Microsoft CTO and Senior Vice President Craig Mundie. The move was reportedly prompted by the fact that they ""...had been under fire from some of its larger customers–government agencies, financial companies and others–about the security problems in Windows, issues that were being brought front and center by a series of self-replicating worms and embarrassing attacks."" such as Code Red, Nimda, Klez and Slammer. Four areas were identified as the initiative's key areas: Security, Privacy, Reliability, and Business Integrity, and despite some initial scepticism, at its 10-year anniversary it was generally accepted as having ""...made a positive impact on the industry..."". The Trustworthy Computing campaign was t" https://en.wikipedia.org/wiki/Logic%20optimization,"Logic optimization is a process of finding an equivalent representation of the specified logic circuit under one or more specified constraints. 
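A minimal example of such an equivalent but smaller representation, using a standard Boolean identity:

\[
A B + A\bar{B} + \bar{A} B = A(B + \bar{B}) + \bar{A} B = A + \bar{A} B = A + B,
\]

so a circuit built from three AND gates, two inverters and an OR gate can be replaced by a single OR gate computing the same function.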
This process is a part of a logic synthesis applied in digital electronics and integrated circuit design. Generally, the circuit is constrained to a minimum chip area meeting a predefined response delay. The goal of logic optimization of a given circuit is to obtain the smallest logic circuit that evaluates to the same values as the original one. Usually, the smaller circuit with the same function is cheaper, takes less space, consumes less power, has shorter latency, and minimizes risks of unexpected cross-talk, hazard of delayed signal processing, and other issues present at the nano-scale level of metallic structures on an integrated circuit. In terms of Boolean algebra, the optimization of a complex boolean expression is a process of finding a simpler one, which would upon evaluation ultimately produce the same results as the original one. Motivation The problem with having a complicated circuit (i.e. one with many elements, such as logic gates) is that each element takes up physical space and costs time and money to produce. Circuit minimization may be one form of logic optimization used to reduce the area of complex logic in integrated circuits. With the advent of logic synthesis, one of the biggest challenges faced by the electronic design automation (EDA) industry was to find the most simple circuit representation of the given design description. While two-level logic optimization had long existed in the form of the Quine–McCluskey algorithm, later followed by the Espresso heuristic logic minimizer, the rapidly improving chip densities, and the wide adoption of Hardware description languages for circuit description, formalized the logic optimization domain as it exists today, including Logic Friday (graphical interface), Minilog, and ESPRESSO-IISOJS (many-valued logic). Methods The methods of logic circuit sim" https://en.wikipedia.org/wiki/List%20of%20spherical%20symmetry%20groups,"Finite spherical symmetry groups are also called point groups in three dimensions. There are five fundamental symmetry classes which have triangular fundamental domains: dihedral, cyclic, tetrahedral, octahedral, and icosahedral symmetry. This article lists the groups by Schoenflies notation, Coxeter notation, orbifold notation, and order. John Conway uses a variation of the Schoenflies notation, based on the groups' quaternion algebraic structure, labeled by one or two upper case letters, and whole number subscripts. The group order is defined as the subscript, unless the order is doubled for symbols with a plus or minus, ""±"", prefix, which implies a central inversion. Hermann–Mauguin notation (International notation) is also given. The crystallography groups, 32 in total, are a subset with element orders 2, 3, 4 and 6. Involutional symmetry There are four involutional groups: no symmetry (C1), reflection symmetry (Cs), 2-fold rotational symmetry (C2), and central point symmetry (Ci). Cyclic symmetry There are four infinite cyclic symmetry families, with n = 2 or higher. (n may be 1 as a special case as no symmetry) Dihedral symmetry There are three infinite dihedral symmetry families, with n = 2 or higher (n may be 1 as a special case). Polyhedral symmetry There are three types of polyhedral symmetry: tetrahedral symmetry, octahedral symmetry, and icosahedral symmetry, named after the triangle-faced regular polyhedra with these symmetries. Continuous symmetries All of the discrete point symmetries are subgroups of certain continuous symmetries. 
They can be classified as products of orthogonal groups O(n) or special orthogonal groups SO(n). O(1) is a single orthogonal reflection, dihedral symmetry order 2, Dih1. SO(1) is just the identity. Half turns, C2, are needed to complete. See also Crystallographic point group Triangle group List of planar symmetry groups Point groups in two dimensions" https://en.wikipedia.org/wiki/ONAP,"ONAP (Open Network Automation Platform), is an open-source, orchestration and automation framework. It is hosted by The Linux Foundation. History On February 23, 2017, ONAP was announced as a result of a merger of the OpenECOMP and Open-Orchestrator (Open-O) projects. The goal of the project is to develop a widely used platform for orchestrating and automating physical and virtual network elements, with full lifecycle management. ONAP was formed as a merger of OpenECOMP, the open source version of AT&T's ECOMP project, and the Open-Orchestrator project, a project begun under the aegis of the Linux Foundation with China Mobile, Huawei and ZTE as lead contributors. The merger brought together both sets of source code and their developer communities, who then elaborated a common architecture for the new project. The first release of the combined ONAP architecture, code named ""Amsterdam"", was announced on November 20, 2017. The next release (""Beijing"") was released on June 12, 2018. As of January, 2018, ONAP became a project within the LF Networking Fund, which consolidated membership across multiple projects into a common governance structure. Most ONAP members became members of the new LF Networking fund. Overview ONAP provides a platform for real-time, policy-driven orchestration and automation of physical and virtual network functions that will enable software, network, IT and cloud providers and developers to rapidly automate new services and support complete lifecycle management. ONAP incorporates or collaborates with other open-source projects, including OpenDaylight, FD.io, OPNFV and others. Contributing organizations include AT&T, Samsung, Nokia, Ericsson, Orange, Huawei, Intel, IBM and more. Architecture" https://en.wikipedia.org/wiki/Symbolic%20language%20%28programming%29,"In computer science, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations and the entities (or operands) on which these operations are performed. Modern programming languages use symbols to represent concepts and/or data and are therefore, examples of symbolic languages. Some programming languages (such as Lisp and Mathematica) make it easy to represent higher-level abstractions as expressions in the language, enabling symbolic programming., See also Mathematical notation Notation (general) Programming language specification Symbol table Symbolic language (other)" https://en.wikipedia.org/wiki/Cross-recurrence%20quantification,"Cross-recurrence quantification (CRQ) is a non-linear method that quantifies how similarly two observed data series unfold over time. CRQ produces measures reflecting coordination, such as how often two data series have similar values or reflect similar system states (called percentage recurrence, or %REC), among other measures." https://en.wikipedia.org/wiki/Signal%20analyzer,"A signal analyzer is an instrument that measures the magnitude and phase of the input signal at a single frequency within the IF bandwidth of the instrument. It employs digital techniques to extract useful information that is carried by an electrical signal. 
In common usage the term is related to both spectrum analyzers and vector signal analyzers. While spectrum analyzers measure the amplitude or magnitude of signals, a signal analyzer with appropriate software or programming can measure any aspect of the signal such as modulation. Today’s high-frequency signal analyzers achieve good performance by optimizing both the analog front end and the digital back end. Theory of operation Modern signal analyzers use a superheterodyne receiver to downconvert a portion of the signal spectrum for analysis. As shown in the figure to the right, the signal is first converted to an intermediate frequency and then filtered in order to band-limit the signal and prevent aliasing. The downconversion can operate in a swept-tuned mode similar to a traditional spectrum analyzer, or in a fixed-tuned mode. In the fixed-tuned mode the range of frequencies downconverted does not change and the downconverter output is then digitized for further analysis. The digitizing process typically involves in-phase/quadrature (I/Q) or complex sampling so that all characteristics of the signal are preserved, as opposed to the magnitude-only processing of a spectrum analyzer. The sampling rate of the digitizing process may be varied in relation to the frequency span under consideration or (more typically) the signal may be digitally resampled. Typical usage Signal analyzers can perform the operations of both spectrum analyzers and vector signal analyzers. A signal analyzer can be viewed as a measurement platform, with operations such as spectrum analysis (including phase noise, power, and distortion) and vector signal analysis (including demodulation or modulation quality analysis) performed as m" https://en.wikipedia.org/wiki/Ambiguity%20function,"In pulsed radar and sonar signal processing, an ambiguity function is a two-dimensional function of propagation delay and Doppler frequency , . It represents the distortion of a returned pulse due to the receiver matched filter (commonly, but not exclusively, used in pulse compression radar) of the return from a moving target. The ambiguity function is defined by the properties of the pulse and of the filter, and not any particular target scenario. Many definitions of the ambiguity function exist; some are restricted to narrowband signals and others are suitable to describe the delay and Doppler relationship of wideband signals. Often the definition of the ambiguity function is given as the magnitude squared of other definitions (Weiss). For a given complex baseband pulse , the narrowband ambiguity function is given by where denotes the complex conjugate and is the imaginary unit. Note that for zero Doppler shift (), this reduces to the autocorrelation of . A more concise way of representing the ambiguity function consists of examining the one-dimensional zero-delay and zero-Doppler ""cuts""; that is, and , respectively. The matched filter output as a function of time (the signal one would observe in a radar system) is a Doppler cut, with the constant frequency given by the target's Doppler shift: . Background and motivation Pulse-Doppler radar equipment sends out a series of radio frequency pulses. Each pulse has a certain shape (waveform)—how long the pulse is, what its frequency is, whether the frequency changes during the pulse, and so on. 
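For a complex baseband pulse s(t), the narrowband ambiguity function referred to above is commonly written as (one common convention; the signs of the delay and Doppler variables differ between authors):

\[
\chi(\tau, f) = \int_{-\infty}^{\infty} s(t)\, s^{*}(t - \tau)\, e^{i 2\pi f t}\, dt,
\]

which at zero Doppler shift (f = 0) reduces to the autocorrelation of s, as noted earlier.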
If the waves reflect off a single object, the detector will see a signal which, in the simplest case, is a copy of the original pulse but delayed by a certain time —related to the object's distance—and shifted by a certain frequency —related to the object's velocity (Doppler shift). If the original emitted pulse waveform is , then the detected signal (neglecting noise, attenuation, and distortion, and wideband correctio" https://en.wikipedia.org/wiki/User-in-the-loop,"User-in-the-Loop (UIL) refers to the notion that a technology (e.g., network) can improve a performance objective by engaging its human users (Layer 8). The idea can be applied in various technological fields. UIL assumes that human users of a network are among the smartest but also most unpredictable units of that network. Furthermore, human users often have a certain set of (input) values that they sense (more or less observe, but also acoustic or haptic feedback is imaginable: imagine a gas pedal in a car giving some resistance, like for a speedomat). Both elements of smart decision-making and observed values can help towards improving the bigger objective. The input values are meant to encourage/discourage human users to behave in certain ways that improve the overall performance of the system. One example of a historic implementation related to UIL has appeared in electric power networks where a price chart is introduced to users of electrical power. This price chart differentiates the values of electricity based on off-peak, mid-peak and on-peak periods, for instance. Faced with a non-homogenous pattern of pricing, human users respond by changing their power consumption accordingly that eventually leads to the overall improvement of access to electrical power (reduce peak hour consumption). Recently, UIL has been also introduced for wireless telecommunications (cellular networks). Wireless resources including the bandwidth (frequency) are an increasingly scarce resource and the while current demand on wireless network is below the supply in most of the times (potentials capacity of the wireless links based on technology limitations), the rapid and exponential increase in demand will render wireless access an increasingly expensive resource in a matter of few years. While usual technological responses to this perspective such as innovative new generations of cellular systems, more efficient resource allocations, cognitive radio and machine learning are certa" https://en.wikipedia.org/wiki/List%20of%20uniform%20polyhedra,"In geometry, a uniform polyhedron is a polyhedron which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the polyhedron has a high degree of reflectional and rotational symmetry. Uniform polyhedra can be divided between convex forms with convex regular polygon faces and star forms. Star forms have either regular star polygon faces or vertex figures or both. This list includes these: all 75 nonprismatic uniform polyhedra; a few representatives of the infinite sets of prisms and antiprisms; one degenerate polyhedron, Skilling's figure with overlapping edges. It was proven in that there are only 75 uniform polyhedra other than the infinite families of prisms and antiprisms. John Skilling discovered an overlooked degenerate example, by relaxing the condition that only two faces may meet at an edge. 
This is a degenerate uniform polyhedron rather than a uniform polyhedron, because some pairs of edges coincide. Not included are: The uniform polyhedron compounds. 40 potential uniform polyhedra with degenerate vertex figures which have overlapping edges (not counted by Coxeter); The uniform tilings (infinite polyhedra) 11 Euclidean convex uniform tilings; 28 Euclidean nonconvex or apeirogonal uniform tilings; Infinite number of uniform tilings in hyperbolic plane. Any polygons or 4-polytopes Indexing Four numbering schemes for the uniform polyhedra are in common use, distinguished by letters: [C] Coxeter et al., 1954, showed the convex forms as figures 15 through 32; three prismatic forms, figures 33–35; and the nonconvex forms, figures 36–92. [W] Wenninger, 1974, has 119 figures: 1–5 for the Platonic solids, 6–18 for the Archimedean solids, 19–66 for stellated forms including the 4 regular nonconvex polyhedra, and ended with 67–119 for the nonconvex uniform polyhedra. [K] Kaleido, 1993: The 80 figure" https://en.wikipedia.org/wiki/Abstract%20nonsense,"In mathematics, abstract nonsense, general abstract nonsense, generalized abstract nonsense, and general nonsense are nonderogatory terms used by mathematicians to describe long, theoretical parts of a proof they skip over when readers are expected to be familiar with them. These terms are mainly used for abstract methods related to category theory and homological algebra. More generally, ""abstract nonsense"" may refer to a proof that relies on category-theoretic methods, or even to the study of category theory itself. Background Roughly speaking, category theory is the study of the general form, that is, categories of mathematical theories, without regard to their content. As a result, mathematical proofs that rely on category-theoretic ideas often seem out-of-context, somewhat akin to a non sequitur. Authors sometimes dub these proofs ""abstract nonsense"" as a light-hearted way of alerting readers to their abstract nature. Labeling an argument ""abstract nonsense"" is usually not intended to be derogatory, and is instead used jokingly, in a self-deprecating way, affectionately, or even as a compliment to the generality of the argument. Certain ideas and constructions in mathematics share a uniformity throughout many domains, unified by category theory. Typical methods include the use of classifying spaces and universal properties, use of the Yoneda lemma, natural transformations between functors, and diagram chasing. When an audience can be assumed to be familiar with the general form of such arguments, mathematicians will use the expression ""Such and such is true by abstract nonsense"" rather than provide an elaborate explanation of particulars. For example, one might say that ""By abstract nonsense, products are unique up to isomorphism when they exist"", instead of arguing about how these isomorphisms can be derived from the universal property that defines the product. This allows one to skip proof details that can be considered trivial or not providing much insi" https://en.wikipedia.org/wiki/PC/104,"PC/104 (or PC104) is a family of embedded computer standards which define both form factors and computer buses by the PC/104 Consortium. Its name derives from the 104 pins on the interboard connector (ISA) in the original PC/104 specification and has been retained in subsequent revisions, despite changes to connectors. PC/104 is intended for specialized environments where a small, rugged computer system is required. 
The standard is modular, and allows consumers to stack together boards from a variety of COTS manufacturers to produce a customized embedded system. The original PC/104 form factor is somewhat smaller than a desktop PC motherboard at . Unlike other popular computer form factors such as ATX, which rely on a motherboard or backplane, PC/104 boards are stacked on top of each other like building blocks. The PC/104 specification defines four mounting holes at the corners of each module, which allow the boards to be fastened to each other using standoffs. The stackable bus connectors and use of standoffs provides a more rugged mounting than slot boards found in desktop PCs. The compact board size further contributes to the ruggedness of the form factor by reducing the possibility of PCB flexing under shock and vibration. A typical PC/104 system (commonly referred to as a ""stack"") will include a CPU board, power supply board, and one or more peripheral boards, such as a data acquisition module, GPS receiver, or Wireless LAN controller. A wide array of peripheral boards are available from various vendors. Users may design a stack that incorporates boards from multiple vendors. The overall height, weight, and power consumption of the stack can vary depending on the number of boards that are used. PC/104 is sometimes referred to as a ""stackable PC"", as most of the architecture derives from the desktop PC. The majority of PC/104 CPU boards are x86 compatible and include standard PC interfaces such as Serial Ports, USB, Ethernet, and VGA. A x86 PC/104 s" https://en.wikipedia.org/wiki/Error%20concealment,"Error concealment is a technique used in signal processing that aims to minimize the deterioration of signals caused by missing data, called packet loss. A signal is a message sent from a transmitter to a receiver in multiple small packets. Packet loss occurs when these packets are misdirected, delayed, resequenced, or corrupted. Receiver-Based Techniques When error recovery occurs at the receiving end of the signal, it is receiver-based. These techniques focus on correcting corrupted or missing data. Waveform substitution Preliminary attempts at receiver-based error concealment involved packet repetition, replacing lost packets with copies of previously received packets. This function is computationally simple and is performed by a device on the receiver end called a ""drop-out compensator"". Zero Insertion When this technique is used, if a packet is lost, its entries are replaced with 0s. Interpolation Interpolation involves making educated guesses about the nature of a missing packet. For example, by following speech patterns in audio or faces in video. Buffer Data buffers are used for temporarily storing data while waiting for delayed packets to arrive. They are common in internet browser loading bars and video applications, like YouTube. Transmitter-Based Techniques Rather than attempting to recover lost packets, other techniques involve anticipating data loss, manipulating the data prior to transmission. Retransmission The simplest transmitter-based technique is retransmission, sending the message multiple times. Although this idea is simple, because of the extra time required to send multiple signals, this technique is incapable of supporting real-time applications. Packet Repetition Packet repetition, also called forward error correction (FEC), adds redundant data, which the receiver can use to recover lost packets. This minimizes loss, but increases the size of the packet. 
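As a minimal sketch of this kind of redundancy, one simple scheme (purely illustrative, not a specific standard) sends an extra parity packet that is the byte-wise XOR of a group of equal-length packets; the receiver can then rebuild any single lost packet:

def make_parity(packets):
    # Parity packet: byte-wise XOR of every packet in the group
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover_lost(received, parity):
    # XOR the parity with the packets that did arrive to rebuild the missing one
    missing = parity
    for p in received:
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing

group = [b"abcd", b"efgh", b"ijkl"]
parity = make_parity(group)
restored = recover_lost([group[0], group[2]], parity)   # packet 1 was lost
assert restored == group[1]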
Interleaving Interleaving involves scrambling the data before transmi" https://en.wikipedia.org/wiki/List%20of%20differential%20geometry%20topics,"This is a list of differential geometry topics. See also glossary of differential and metric geometry and list of Lie group topics. Differential geometry of curves and surfaces Differential geometry of curves List of curves topics Frenet–Serret formulas Curves in differential geometry Line element Curvature Radius of curvature Osculating circle Curve Fenchel's theorem Differential geometry of surfaces Theorema egregium Gauss–Bonnet theorem First fundamental form Second fundamental form Gauss–Codazzi–Mainardi equations Dupin indicatrix Asymptotic curve Curvature Principal curvatures Mean curvature Gauss curvature Elliptic point Types of surfaces Minimal surface Ruled surface Conical surface Developable surface Nadirashvili surface Foundations Calculus on manifolds See also multivariable calculus, list of multivariable calculus topics Manifold Differentiable manifold Smooth manifold Banach manifold Fréchet manifold Tensor analysis Tangent vector Tangent space Tangent bundle Cotangent space Cotangent bundle Tensor Tensor bundle Vector field Tensor field Differential form Exterior derivative Lie derivative pullback (differential geometry) pushforward (differential) jet (mathematics) Contact (mathematics) jet bundle Frobenius theorem (differential topology) Integral curve Differential topology Diffeomorphism Large diffeomorphism Orientability characteristic class Chern class Pontrjagin class spin structure differentiable map submersion immersion Embedding Whitney embedding theorem Critical value Sard's theorem Saddle point Morse theory Lie derivative Hairy ball theorem Poincaré–Hopf theorem Stokes' theorem De Rham cohomology Sphere eversion Frobenius theorem (differential topology) Distribution (differential geometry) integral curve foliation integrability conditions for differential systems Fiber bundles Fiber bundle Principal bundle Frame bundle Hopf bundle Associated bundle Vector bundle Tangent bundle Cotangent bundle Line bundle Jet bundle Fundamental st" https://en.wikipedia.org/wiki/Hierarchical%20internetworking%20model,"The Hierarchical internetworking model is a three-layer model for network design first proposed by Cisco. It divides enterprise networks into three layers: core, distribution, and access layer. Access layer End-stations and servers connect to the enterprise at the access layer. Access layer devices are usually commodity switching platforms, and may or may not provide layer 3 switching services. The traditional focus at the access layer is minimizing ""cost-per-port"": the amount of investment the enterprise must make for each provisioned Ethernet port. This layer is also called the desktop layer because it focuses on connecting client nodes, such as workstations to the network. Distribution layer The distribution layer is the smart layer in the three-layer model. Routing, filtering, and QoS policies are managed at the distribution layer. Distribution layer devices also often manage individual branch-office WAN connections. This layer is also called the Workgroup layer. Core layer The core is the backbone of a network, where the internet(internetwork) gateway are located. The core network provides high-speed, highly redundant forwarding services to move packets between distribution-layer devices in different regions of the network. 
Core switches and routers are usually the most powerful, in terms of raw forwarding power, in the enterprise; core network devices manage the highest-speed connections, such as 10 Gigabit Ethernet or 100 Gigabit Ethernet. See also Multi-tier architecture Service layer" https://en.wikipedia.org/wiki/Standard%20Test%20and%20Programming%20Language,"JAM / STAPL (""Standard Test and Programming Language"") is an Altera-developed standard for JTAG in-circuit programming of programmable logic devices which is defined by JEDEC standard JESD-71. STAPL defines a standard .jam file format which supports in-system programmability or configuration of programmable devices. A JTAG device programmer implements a JAM player which reads the file as a set of instructions directing it to program a PLD. The standard is supported by multiple PLD and device programmer manufacturers." https://en.wikipedia.org/wiki/Detection%20theory,"Detection theory or signal detection theory is a means to measure the ability to differentiate between information-bearing patterns (called stimulus in living organisms, signal in machines) and random patterns that distract from the information (called noise, consisting of background stimuli and random activity of the detection machine and of the nervous system of the operator). In the field of electronics, signal recovery is the separation of such patterns from a disguising background. According to the theory, there are a number of determiners of how a detecting system will detect a signal, and where its threshold levels will be. The theory can explain how changing the threshold will affect the ability to discern, often exposing how adapted the system is to the task, purpose or goal at which it is aimed. When the detecting system is a human being, characteristics such as experience, expectations, physiological state (e.g., fatigue) and other factors can affect the threshold applied. For instance, a sentry in wartime might be likely to detect fainter stimuli than the same sentry in peacetime due to a lower criterion, however they might also be more likely to treat innocuous stimuli as a threat. Much of the early work in detection theory was done by radar researchers. By 1954, the theory was fully developed on the theoretical side as described by Peterson, Birdsall and Fox and the foundation for the psychological theory was made by Wilson P. Tanner, David M. Green, and John A. Swets, also in 1954. Detection theory was used in 1966 by John A. Swets and David M. Green for psychophysics. Green and Swets criticized the traditional methods of psychophysics for their inability to discriminate between the real sensitivity of subjects and their (potential) response biases. Detection theory has applications in many fields such as diagnostics of any kind, quality control, telecommunications, and psychology. The concept is similar to the signal-to-noise ratio used in the" https://en.wikipedia.org/wiki/Global%20Digital%20Mathematics%20Library,"The Global Digital Mathematics Library (GDML) is a project organized under the auspices of the International Mathematical Union (IMU) to establish a digital library focused on mathematics. A working group was convened in September 2014, following the 2014 International Congress of Mathematicians, by former IMU President Ingrid Daubechies and Chair Peter J. Olver of the IMU’s Committee on Electronic Information and Communication (CEIC). 
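The detection-theory excerpt above describes separating a detector's real sensitivity from its response bias (the sentry's "criterion"). One standard way to quantify this, under the equal-variance Gaussian model, is the sensitivity index d' = z(hit rate) - z(false-alarm rate) and the criterion c = -(z(H) + z(FA))/2; these formulas are textbook material rather than part of the excerpt, so treat the sketch below as an assumption-laden illustration.

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse of the standard normal CDF


def dprime(hit_rate: float, fa_rate: float) -> float:
    """Sensitivity: separation of signal and noise distributions, in SD units."""
    return _z(hit_rate) - _z(fa_rate)


def criterion(hit_rate: float, fa_rate: float) -> float:
    """Response bias: negative = liberal observer, positive = conservative."""
    return -(_z(hit_rate) + _z(fa_rate)) / 2


if __name__ == "__main__":
    # Wartime sentry: many hits but also many false alarms (lower criterion).
    print(round(dprime(0.90, 0.30), 2), round(criterion(0.90, 0.30), 2))  # 1.81 -0.38
    # Peacetime sentry: same sensitivity, stricter criterion, fewer of both.
    print(round(dprime(0.70, 0.10), 2), round(criterion(0.70, 0.10), 2))  # 1.81 0.38
```

The two observers have identical d' but opposite criteria, which is exactly the sensitivity-versus-bias distinction Green and Swets emphasised.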
Currently the working group has eight members, namely: Thierry Bouche, Institut Fourier & Cellule MathDoc, Grenoble, France Bruno Buchberger, RISC, Hagenberg/Linz, Austria Patrick Ion, Mathematical Reviews/AMS, Ann Arbor, MI, US Michael Kohlhase, Jacobs University, Bremen, Germany Jim Pitman, University of California, Berkeley, CA, US Olaf Teschke, zbMATH/FIZ, Berlin, Germany Stephen M. Watt, University of Waterloo, Waterloo, ON, Canada Eric Weisstein, Wolfram Research, McAllen, TX, US Background In the spring of 2014, the Committee on Planning a Global Library of the Mathematical Sciences released a comprehensive study entitled “Developing a 21st Century Global Library for Mathematics Research.” This report states in its Strategic Plan section, “There is a compelling argument that through a combination of machine learning methods and editorial effort by both paid and volunteer editors, a significant portion of the information and knowledge in the global mathematical corpus could be made available to researchers as linked open data through the GDML."" Workshop A workshop titled ""Semantic Representation of Mathematical Knowledge"" was held at the Fields Institute in Toronto during February 3–5, 2016. The goal of the workshop was to lay down the foundations of a prototype semantic representation language for the GDML. The workshop's organizers recognized that the extremely wide scope of mathematics as a whole made it unrealistic to map out the detailed concepts, structures, and operations needed and used in individual mathema" https://en.wikipedia.org/wiki/University%20of%20Chicago%20School%20Mathematics%20Project,"The University of Chicago School Mathematics Project (UCSMP) is a multi-faceted project of the University of Chicago in the United States, intended to improve competency in mathematics in the United States by elevating educational standards for children in elementary and secondary schools. Overview The UCSMP supports educators by supplying training materials to them and offering a comprehensive mathematics curriculum at all levels of primary and secondary education. It seeks to bring international strengths into the United States, translating non-English math textbooks for English students and sponsoring international conferences on the subject of math education. Launched in 1983 with the aid of a six-year grant from Amoco, the UCSMP is used throughout the United States. UCSMP developed Everyday Mathematics, a pre-K and elementary school mathematics curriculum. UCSMP publishers Wright Group-McGraw-Hill (K-6 Materials) Wright Group-McGraw-Hill (6-12 Materials) American Mathematical Society (Translations of Foreign Texts) See also Zalman Usiskin" https://en.wikipedia.org/wiki/Compilospecies,"A compilospecies is a genetically aggressive species which acquires the heredities of a closely related sympatric species by means of hybridisation and comprehensive introgression. The target species may be incorporated to the point of despeciation, rendering it extinct. This type of genetic aggression is associated with species in newly disturbed habitats (such as pioneering species), weed species and domestication. They can be diploid or polyploid, as well as sexual or primarily asexual. The term compilospecies derives from the Latin word compilo, which means to seize, to collect, to rob or to plunder. A proposed explanation for the existence of such a species with weak reproductive barriers and frequent introgression is that it allows for genetic variation. 
An increase in the gene pool through viable hybrids can facilitate new phenotypes and the colonisation of novel habitats. The concept of compilospecies is not frequent in scientific literature and may not be fully regarded by the biological community as a true evolutionary concept, especially due to low supporting evidence. History Compilospecies were first described by Harlan and de Wet in 1962, who examined a wide range of grasses and other species such as Bothriochloa intermedia, otherwise known as Australian bluestem grass. B. intermedia was found to introgress heavily with neighboring sympatric grass species and even genera, particularly in geographically restricted areas. The species itself is of hybrid origin, containing genetic material from five or more different grass species. Harlan and de Wet examined the interactions between the genera Bothriochloa, Dichanthium and Capillipedium - an apomictic complex of grasses from the tribe Andropogoneae - and used the cytogenetic model of these as a basis for the compilospecies concept. Species within these genera exhibit both sexual and asexual reproduction, high heterozygosity, ploidies from 2x to 6x, and gene flow between bordering populations as evid" https://en.wikipedia.org/wiki/List%20of%20superlative%20trees,"The world's superlative trees can be ranked by any factor. Records have been kept for trees with superlative height, trunk diameter or girth, canopy coverage, airspace volume, wood volume, estimated mass, and age. Tallest The heights of the tallest trees in the world have been the subject of considerable dispute and much exaggeration. Modern verified measurements with laser rangefinders or with tape drop measurements made by tree climbers (such as those carried out by canopy researchers), have shown that some older tree height measurement methods are often unreliable, sometimes producing exaggerations of 5% to 15% or more above the real height. Historical claims of trees growing to , and even , are now largely disregarded as unreliable, and attributed to human error. The following are the tallest reliably measured specimens from the top 10 species. This table shows only currently standing specimens: Tallest historically Despite the high heights attained by trees nowadays, records exist of much greater heights in the past, before widespread logging took place. Some, if not most, of these records are without a doubt greatly exaggerated, but some have been reportedly measured with semi-reliable instruments when cut down and on the ground. Some of the heights recorded in this way exceed the maximum possible height of a tree as calculated by theorists, lending some limited credibility to speculation that some superlative trees are able to 'reverse' transpiration streams and absorb water through needles in foggy environments. All three of the tallest tree species continue to be Coast redwoods, Douglas fir and Giant mountain ash. Stoutest The girth of a tree is usually much easier to measure than the height, as it is a simple matter of stretching a tape round the trunk, and pulling it taut to find the circumference. Despite this, UK tree author Alan Mitchell made the following comment about measurements of yew trees: As a general standard, tree girth is taken at ""b" https://en.wikipedia.org/wiki/Alert%20correlation,"Alert correlation is a type of log analysis. It focuses on the process of clustering alerts (events), generated by NIDS and HIDS computer systems, to form higher-level pieces of information. 
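The alert-correlation excerpt above describes clustering low-level NIDS/HIDS alerts into higher-level information, and the example that follows in the article groups invalid login attempts per host into a single incident. A toy sketch of that grouping step is below; the alert fields and the reporting threshold are assumptions for illustration only.

```python
from collections import Counter


def correlate(alerts, threshold=3):
    """Cluster low-level alerts by (event type, host) and emit one summary
    incident per cluster that crosses the threshold."""
    counts = Counter((a["event"], a["host"]) for a in alerts)
    return [
        f"{n} {event} alerts on host {host}"
        for (event, host), n in counts.items()
        if n >= threshold
    ]


if __name__ == "__main__":
    alerts = [{"event": "invalid login", "host": "X"} for _ in range(10000)]
    alerts.append({"event": "port scan", "host": "Y"})  # below threshold, not reported
    print(correlate(alerts))  # ['10000 invalid login alerts on host X']
```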
Example of simple alert correlation is grouping invalid login attempts to report single incident like ""10000 invalid login attempts on host X"". See also ACARM ACARM-ng OSSIM Prelude Hybrid IDS Snort Computer systems Computer-aided engineering" https://en.wikipedia.org/wiki/Proof%20by%20intimidation,"Proof by intimidation (or argumentum verbosum) is a jocular phrase used mainly in mathematics to refer to a specific form of hand-waving, whereby one attempts to advance an argument by marking it as obvious or trivial, or by giving an argument loaded with jargon and obscure results. It attempts to intimidate the audience into simply accepting the result without evidence, by appealing to their ignorance and lack of understanding. The phrase is often used when the author is an authority in their field, presenting their proof to people who respect a priori the author's insistence of the validity of the proof, while in other cases, the author might simply claim that their statement is true because it is trivial or because they say so. Usage of this phrase is for the most part in good humour, though it can also appear in serious criticism. A proof by intimidation is often associated with phrases such as: ""Clearly..."" ""It is self-evident that..."" ""It can be easily shown that..."" ""... does not warrant a proof."" ""The proof is left as an exercise for the reader."" Outside mathematics, ""proof by intimidation"" is also cited by critics of junk science, to describe cases in which scientific evidence is thrown aside in favour of dubious arguments—such as those presented to the public by articulate advocates who pose as experts in their field. Proof by intimidation may also back valid assertions. Ronald A. Fisher claimed in the book credited with the new evolutionary synthesis, ""...by the analogy of compound interest the present value of the future offspring of persons aged x is easily seen to be..."", thence presenting a novel integral-laden definition of reproductive value. At this, Hal Caswell remarked, ""With all due respect to Fisher, I have yet to meet anyone who finds this equation 'easily seen.'"" Valid proofs were provided by subsequent researchers such as Leo A. Goodman (1968). In a memoir, Gian-Carlo Rota claimed that the expression ""proof by intimidation"" was coi" https://en.wikipedia.org/wiki/Ap%C3%A9ry%27s%20constant,"In mathematics, Apéry's constant is the sum of the reciprocals of the positive cubes. That is, it is defined as the number where is the Riemann zeta function. It has an approximate value of . The constant is named after Roger Apéry. It arises naturally in a number of physical problems, including in the second- and third-order terms of the electron's gyromagnetic ratio using quantum electrodynamics. It also arises in the analysis of random minimum spanning trees and in conjunction with the gamma function when solving certain integrals involving exponential functions in a quotient, which appear occasionally in physics, for instance, when evaluating the two-dimensional case of the Debye model and the Stefan–Boltzmann law. Irrational number was named Apéry's constant after the French mathematician Roger Apéry, who proved in 1978 that it is an irrational number. This result is known as Apéry's theorem. The original proof is complex and hard to grasp, and simpler proofs were found later. Beukers's simplified irrationality proof involves approximating the integrand of the known triple integral for , by the Legendre polynomials. 
In particular, van der Poorten's article chronicles this approach by noting that where , are the Legendre polynomials, and the subsequences are integers or almost integers. It is still not known whether Apéry's constant is transcendental. Series representations Classical In addition to the fundamental series: Leonhard Euler gave the series representation: in 1772, which was subsequently rediscovered several times. Fast convergence Since the 19th century, a number of mathematicians have found convergence acceleration series for calculating decimal places of . Since the 1990s, this search has focused on computationally efficient series with fast convergence rates (see section ""Known digits""). The following series representation was found by A. A. Markov in 1890, rediscovered by Hjortnaes in 1953, and rediscovered once more" https://en.wikipedia.org/wiki/No-analog%20%28ecology%29,"In paleoecology and ecological forecasting, a no-analog community or climate is one that is compositionally different from a (typically modern) baseline for measurement. Alternative naming conventions to describe no-analog communities and climates may include novel, emerging, mosaic, disharmonious and intermingled. Modern climates, communities and ecosystems are often studied in an attempt to understand no-analogs that have happened in the past and those that may occur in the future. This use of a modern analog to study the past draws on the concept of uniformitarianism. Along with the use of these modern analogs, actualistic studies and taphonomy are additional tools that are used in understanding no-analogs. Statistical tools are also used to identify no-analogs and their baselines, often through the use of dissimilarity analyses or analog matching Study of no-analog fossil remains are often carefully evaluated as to rule out mixing of fossils in an assemblage due to erosion, animal activity or other processes. No-analog climates Conditions that are considered no-analog climates are those that have no modern analog, such as the climate during the last glaciation. Glacial climates varied from current climates in seasonality and temperature, having an overall more steady climate without as many extreme temperatures as today's climate. Climates with no modern analog may be used to infer species range shifts, biodiversity changes, ecosystem arrangements and help in understanding species fundamental niche space. Past climates are often studied to understand how changes in a species' fundamental niche may lead to the formation of no analog communities. Seasonality and temperatures that are outside the climates at present provide opportunity for no-analog communities to arise, as is seen in the late Holocene plant communities. Evidence of deglacial temperature controls having significant effects on the formation of no-analog communities in the midwestern United State" https://en.wikipedia.org/wiki/Devicetree,"In computing, a devicetree (also written device tree) is a data structure describing the hardware components of a particular computer so that the operating system's kernel can use and manage those components, including the CPU or CPUs, the memory, the buses and the integrated peripherals. The device tree was derived from SPARC-based computers via the Open Firmware project. The current Devicetree specification is targeted at smaller systems, but is still used with some server-class systems (for instance, those described by the Power Architecture Platform Reference). 
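To make the "fast convergence" discussion in the Apéry's-constant excerpt concrete, the sketch below compares direct summation of the defining series, the sum of the reciprocals of the positive cubes, with the alternating central-binomial series usually attributed to Markov (1890), Hjortnaes and Apéry: zeta(3) = (5/2) * sum over n >= 1 of (-1)^(n-1) / (n^3 * C(2n, n)). The excerpt does not reproduce that formula, so it is quoted here from standard references as an assumption, and the truncation points are arbitrary.

```python
from math import comb

ZETA3 = 1.2020569031595943  # reference value of Apery's constant


def zeta3_direct(terms: int) -> float:
    """Defining series: sum of reciprocals of the positive cubes."""
    return sum(1.0 / n**3 for n in range(1, terms + 1))


def zeta3_accelerated(terms: int) -> float:
    """Alternating central-binomial series; each extra term gains roughly
    0.6 decimal digits (successive terms shrink by about a factor of 4)."""
    return 2.5 * sum((-1) ** (n - 1) / (n**3 * comb(2 * n, n))
                     for n in range(1, terms + 1))


if __name__ == "__main__":
    print(abs(zeta3_direct(100) - ZETA3))      # ~5e-5 after 100 terms
    print(abs(zeta3_accelerated(10) - ZETA3))  # < 1e-8 after only 10 terms
```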
Personal computers with the x86 architecture generally do not use device trees, relying instead on various auto configuration protocols (e.g. ACPI) to discover hardware. Systems which use device trees usually pass a static device tree (perhaps stored in EEPROM, or stored in NAND device like eUFS) to the operating system, but can also generate a device tree in the early stages of booting. As an example, Das U-Boot and kexec can pass a device tree when launching a new operating system. On systems with a boot loader that does not support device trees, a static device tree may be installed along with the operating system; the Linux kernel supports this approach. The Devicetree specification is currently managed by a community named devicetree.org, which is associated with, among others, Linaro and Arm. Formats A device tree can hold any kind of data as internally it is a tree of named nodes and properties. Nodes contain properties and child nodes, while properties are name–value pairs. Device trees have both a binary format for operating systems to use and a textual format for convenient editing and management. Usage Linux Given the correct device tree, the same compiled kernel can support different hardware configurations within a wider architecture family. The Linux kernel for the ARC, ARM, C6x, H8/300, MicroBlaze, MIPS, NDS32, Nios II, OpenRISC, PowerPC, RISC-V, SuperH, and Xtensa architectures rea" https://en.wikipedia.org/wiki/De%20Bruijn%E2%80%93Newman%20constant,"The de Bruijn–Newman constant, denoted by Λ and named after Nicolaas Govert de Bruijn and Charles Michael Newman, is a mathematical constant defined via the zeros of a certain function H(λ,z), where λ is a real parameter and z is a complex variable. More precisely, , where is the super-exponentially decaying function and Λ is the unique real number with the property that H has only real zeros if and only if λ≥Λ. The constant is closely connected with Riemann's hypothesis concerning the zeros of the Riemann zeta-function: since the Riemann hypothesis is equivalent to the claim that all the zeroes of H(0, z) are real, the Riemann hypothesis is equivalent to the conjecture that Λ≤0. Brad Rodgers and Terence Tao proved that Λ<0 cannot be true, so Riemann's hypothesis is equivalent to Λ = 0. A simplified proof of the Rodgers–Tao result was later given by Alexander Dobner. History De Bruijn showed in 1950 that H has only real zeros if λ ≥ 1/2, and moreover, that if H has only real zeros for some λ, H also has only real zeros if λ is replaced by any larger value. Newman proved in 1976 the existence of a constant Λ for which the ""if and only if"" claim holds; and this then implies that Λ is unique. Newman also conjectured that Λ ≥ 0, which was then proven by Brad Rodgers and Terence Tao in 2018. Upper bounds De Bruijn's upper bound of was not improved until 2008, when Ki, Kim and Lee proved , making the inequality strict. In December 2018, the 15th Polymath project improved the bound to . A manuscript of the Polymath work was submitted to arXiv in late April 2019, and was published in the journal Research In the Mathematical Sciences in August 2019. This bound was further slightly improved in April 2020 by Platt and Trudgian to . Historical bounds" https://en.wikipedia.org/wiki/Computer%20science%20and%20engineering,"Computer science and engineering (CSE) is an academic program at many universities which comprises computer science classes (e.g. 
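The "Formats" paragraph in the devicetree excerpt above describes the underlying data model: a tree of named nodes, where each node holds name-value properties and child nodes. A minimal sketch of that model follows; the node names, property names, and the `find` helper are invented placeholders for illustration, not taken from any real devicetree source or tooling.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class Node:
    """A devicetree-style node: a name, name-value properties, child nodes."""
    name: str
    properties: Dict[str, Any] = field(default_factory=dict)
    children: List["Node"] = field(default_factory=list)

    def find(self, path: str) -> "Node":
        """Walk a '/'-separated path of child-node names from this node."""
        node = self
        for part in filter(None, path.split("/")):
            node = next(c for c in node.children if c.name == part)
        return node


if __name__ == "__main__":
    # A toy tree loosely shaped like the textual format described above.
    root = Node("/", {"compatible": "example,board"}, [
        Node("cpus", children=[Node("cpu@0", {"device_type": "cpu"})]),
        Node("memory@80000000", {"reg": (0x80000000, 0x10000000)}),
    ])
    print(root.find("memory@80000000").properties["reg"])  # (2147483648, 268435456)
```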
data structures and algorithms) and computer engineering classes (e.g computer architecture). There is no clear division in computing between science and engineering, just like in the field of materials science and engineering. CSE is also a term often used in Europe to translate the name of engineering informatics academic programs. It is offered in both undergraduate as well postgraduate with specializations. Academic courses Academic programs vary between colleges, but typically include a combination of topics in computer science, computer engineering, and electrical engineering. Undergraduate courses usually include programming, algorithms and data structures, computer architecture, operating systems, computer networks, parallel computing, embedded systems, algorithms design, circuit analysis and electronics, digital logic and processor design, computer graphics, scientific computing, software engineering, database systems, digital signal processing, virtualization, computer simulations and games programming. CSE programs also include core subjects of theoretical computer science such as theory of computation, numerical methods, machine learning, programming theory and paradigms. Modern academic programs also cover emerging computing fields like image processing, data science, robotics, bio-inspired computing, computational biology, autonomic computing and artificial intelligence. Most CSE programs require introductory mathematical knowledge, hence the first year of study is dominated by mathematical courses, primarily discrete mathematics, mathematical analysis, linear algebra, probability, and statistics, as well as the basics of electrical and electronic engineering, physics, and electromagnetism. Example universities with CSE majors and departments APJ Abdul Kalam Technological University American International University-B" https://en.wikipedia.org/wiki/Eventually%20%28mathematics%29,"In the mathematical areas of number theory and analysis, an infinite sequence or a function is said to eventually have a certain property, if it doesn't have the said property across all its ordered instances, but will after some instances have passed. The use of the term ""eventually"" can be often rephrased as ""for sufficiently large numbers"", and can be also extended to the class of properties that apply to elements of any ordered set (such as sequences and subsets of ). Notation The general form where the phrase eventually (or sufficiently large) is found appears as follows: is eventually true for ( is true for sufficiently large ), where and are the universal and existential quantifiers, which is actually a shorthand for: such that is true or somewhat more formally: This does not necessarily mean that any particular value for is known, but only that such an exists. The phrase ""sufficiently large"" should not be confused with the phrases ""arbitrarily large"" or ""infinitely large"". For more, see Arbitrarily large#Arbitrarily large vs. sufficiently large vs. infinitely large. Motivation and definition For an infinite sequence, one is often more interested in the long-term behaviors of the sequence than the behaviors it exhibits early on. In which case, one way to formally capture this concept is to say that the sequence possesses a certain property eventually, or equivalently, that the property is satisfied by one of its subsequences , for some . 
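The Notation paragraph in the "Eventually (mathematics)" excerpt above lost its inline formulas during extraction. As a hedged reconstruction based only on the surrounding prose (not a quotation of the original article's markup), the general form and its expansion read:

```latex
% "P(x) is eventually true for x"  (P(x) is true for sufficiently large x)
% is shorthand for:
\exists a \;\; \forall x \ge a : \; P(x)
% i.e. some threshold a exists beyond which P holds; no particular value of a
% needs to be known, only that one exists.
```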
For example, the definition of a sequence of real numbers converging to some limit is: For each positive number , there exists a natural number such that for all , . When the term ""eventually"" is used as a shorthand for ""there exists a natural number such that for all "", the convergence definition can be restated more simply as: For each positive number , eventually . Here, notice that the set of natural numbers that do not satisfy this property is a finite set; that is, the set is empty or has " https://en.wikipedia.org/wiki/CEN/XFS,"CEN/XFS or XFS (extensions for financial services) provides a client-server architecture for financial applications on the Microsoft Windows platform, especially peripheral devices such as EFTPOS terminals and ATMs which are unique to the financial industry. It is an international standard promoted by the European Committee for Standardization (known by the acronym CEN, hence CEN/XFS). The standard is based on the WOSA Extensions for Financial Services or WOSA/XFS developed by Microsoft. With the move to a more standardized software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. XFS provides a common API for accessing and manipulating various financial services devices regardless of the manufacturer. History Chronology: 1991 - Microsoft forms ""Banking Solutions Vendor Council"" 1995 - WOSA/XFS 1.11 released 1997 - WOSA/XFS 2.0 released - additional support for 24 hours-a-day unattended operation 1998 - adopted by European Committee for Standardization as an international standard. 2000 - XFS 3.0 released by CEN 2008 - XFS 3.10 released by CEN 2011 - XFS 3.20 released by CEN 2015 - XFS 3.30 released by CEN 2020 - XFS 3.40 released by CEN WOSA/XFS changed name to simply XFS when the standard was adopted by the international CEN/ISSS standards body. However, it is most commonly called CEN/XFS by the industry participants. XFS middleware While the perceived benefit of XFS is similar to Java's ""write once, run anywhere"" mantra, often different hardware vendors have different interpretations of the XFS standard. The result of these differences in interpretation means that applications typically use a middleware to even out the differences between various platforms implementation of XFS. Notable XFS middleware platforms include: F1 Solutions - F1 TPS (multi-vendor ATM & POS solution) Serquo - Dwide (REST API middleware for XFS) Nexus Software LLC - Nexu" https://en.wikipedia.org/wiki/Human%E2%80%93robot%20interaction,"Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, and psychology. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems. Origins Human–robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because much of active HRI development depends on natural language processing, many aspects of HRI are continuations of human communications, a field of research which is much older than robotics. The origin of HRI as a discrete problem was stated by 20th-century author Isaac Asimov in 1941, in his novel I, Robot. 
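Likewise, the convergence example above dropped its symbols. Under the usual epsilon-N definition, and writing the sequence as (a_n) with limit L (a reconstruction, not the article's own notation), the two formulations being contrasted are:

```latex
% Plain form of "a_n converges to L":
\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n \ge N : \; |a_n - L| < \varepsilon
% Restated with the shorthand:
\forall \varepsilon > 0 : \text{ eventually } \; |a_n - L| < \varepsilon
```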
Asimov coined Three Laws of Robotics, namely: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. These three laws provide an overview of the goals engineers and researchers hold for safety in the HRI field, although the fields of robot ethics and machine ethics are more complex than these three principles. However, generally human–robot interaction prioritizes the safety of humans that interact with potentially dangerous robotics equipment. Solutions to this problem range from the philosophical approach of treating robots as ethical agents (individuals with moral agency), to the practical approach of creating safety zones. These safety zones use technologies such as lidar to detect human presence or physical barriers to protect humans by preventing any contact between machine and operator. Although initially robots in the human–robot " https://en.wikipedia.org/wiki/Pulse%20compression,"Pulse compression is a signal processing technique commonly used by radar, sonar and echography to either increase the range resolution when pulse length is constrained or increase the signal to noise ratio when the peak power and the bandwidth (or equivalently range resolution) of the transmitted signal are constrained. This is achieved by modulating the transmitted pulse and then correlating the received signal with the transmitted pulse. Simple pulse Signal description The ideal model for the simplest, and historically first type of signals a pulse radar or sonar can transmit is a truncated sinusoidal pulse (also called a CW --carrier wave-- pulse), of amplitude and carrier frequency, , truncated by a rectangular function of width, . The pulse is transmitted periodically, but that is not the main topic of this article; we will consider only a single pulse, . If we assume the pulse to start at time , the signal can be written the following way, using the complex notation: Range resolution Let us determine the range resolution which can be obtained with such a signal. The return signal, written , is an attenuated and time-shifted copy of the original transmitted signal (in reality, Doppler effect can play a role too, but this is not important here.) There is also noise in the incoming signal, both on the imaginary and the real channel. The noise is assumed to be band-limited, that is to have frequencies only in (this generally holds in reality, where a bandpass filter is generally used as one of the first stages in the reception chain); we write to denote that noise. To detect the incoming signal, a matched filter is commonly used. This method is optimal when a known signal is to be detected among additive noise having a normal distribution. In other words, the cross-correlation of the received signal with the transmitted signal is computed. This is achieved by convolving the incoming signal with a conjugated and time-reversed version of the transmitted " https://en.wikipedia.org/wiki/Dextrose%20equivalent,"Dextrose equivalent (DE) is a measure of the amount of reducing sugars present in a sugar product, expressed as a percentage on a dry basis relative to dextrose. The dextrose equivalent gives an indication of the average degree of polymerisation (DP) for starch sugars. As a rule of thumb, DE × DP = 120. 
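The matched-filter step described in the pulse-compression excerpt above, correlating the received signal with a conjugated, time-reversed copy of the transmitted pulse, can be sketched with NumPy. The complex-baseband CW pulse follows the signal model in the excerpt, but the sampling rate, pulse width, delay, and noise level are arbitrary choices made for this demonstration.

```python
import numpy as np

fs = 1_000.0                         # sampling rate (Hz), arbitrary for the demo
T, f0 = 0.1, 50.0                    # pulse width (s) and carrier frequency (Hz)
t = np.arange(0, T, 1 / fs)
pulse = np.exp(2j * np.pi * f0 * t)  # truncated CW pulse in complex notation

# Received signal: attenuated, delayed copy of the pulse plus complex noise.
rng = np.random.default_rng(0)
delay = 300                          # samples (the unknown round-trip time)
rx = np.zeros(1_000, dtype=complex)
rx[delay:delay + pulse.size] += 0.2 * pulse
rx += 0.05 * (rng.standard_normal(rx.size) + 1j * rng.standard_normal(rx.size))

# Matched filter: convolve with the conjugated, time-reversed pulse, which is
# the same as cross-correlating rx with the transmitted pulse.
mf_output = np.convolve(rx, np.conj(pulse[::-1]), mode="valid")
print(int(np.argmax(np.abs(mf_output))))   # ~300: the peak recovers the injected delay
```

With a plain CW pulse the correlation peak is broad (a triangle), which is exactly the range-resolution limitation that motivates modulating the pulse in the rest of the article.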
In all glucose polymers, from the native starch to glucose syrup, the molecular chain begins with a reducing sugar, containing a free aldehyde. As the starch is hydrolysed, the molecules become shorter and more reducing sugars are present. Therefore, the dextrose equivalent describes the degree of conversion of starch to dextrose. The standard method of determining the dextrose equivalent is the Lane-Eynon titration, based on the reduction of copper(II) sulfate in an alkaline tartrate solution, an application of Fehling's test. Examples: A maltodextrin with a DE of 10 would have 10% of the reducing power of dextrose which has a DE of 100. Maltose, a disaccharide made of two glucose (dextrose) molecules, has a DE of 52, correcting for the water loss in molecular weight when the two molecules are combined. Glucose (dextrose) has a molecular mass of 180, while water has a molecular mass of 18. For each 2 glucose monomers binding, a water molecule is removed. Therefore, the molecular mass of a glucose polymer can be calculated by using the formula (180*n - 18*(n-1)) with n the DP (degree of polymerisation) of the glucose polymer. The DE can be calculated as 100*(180 / Molecular mass( glucose polymer)). In this example the DE is calculated as 100*(180/(180*2-18*1)) = 52. Sucrose actually has a DE of zero even though it is a disaccharide, because both reducing groups of the monosaccharides that make it are connected, so there are no remaining reducing groups. Because different reducing sugars (e.g. fructose and glucose) have different sweetness, it is incorrect to assume that there is any direct relationship between dextrose equivalent and sweetness." https://en.wikipedia.org/wiki/Table%20of%20Lie%20groups,"This article gives a table of some common Lie groups and their associated Lie algebras. The following are noted: the topological properties of the group (dimension; connectedness; compactness; the nature of the fundamental group; and whether or not they are simply connected) as well as on their algebraic properties (abelian; simple; semisimple). For more examples of Lie groups and other related topics see the list of simple Lie groups; the Bianchi classification of groups of up to three dimensions; see classification of low-dimensional real Lie algebras for up to four dimensions; and the list of Lie group topics. Real Lie groups and their algebras Column legend Cpt: Is this group G compact? (Yes or No) : Gives the group of components of G. The order of the component group gives the number of connected components. The group is connected if and only if the component group is trivial (denoted by 0). : Gives the fundamental group of G whenever G is connected. The group is simply connected if and only if the fundamental group is trivial (denoted by 0). UC: If G is not simply connected, gives the universal cover of G. Real Lie algebras Complex Lie groups and their algebras Note that a ""complex Lie group"" is defined as a complex analytic manifold that is also a group whose multiplication and inversion are each given by a holomorphic map. The dimensions in the table below are dimensions over C. Note that every complex Lie group/algebra can also be viewed as a real Lie group/algebra of twice the dimension. Complex Lie algebras The dimensions given are dimensions over C. Note that every complex Lie algebra can also be viewed as a real Lie algebra of twice the dimension. The Lie algebra of affine transformations of dimension two, in fact, exist for any field. 
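The worked dextrose-equivalent example above can be turned into a small calculator. It follows the formulas quoted in the passage (molecular mass of a glucose polymer = 180*n - 18*(n-1), and DE = 100 * 180 / molecular mass); the function names are mine.

```python
def glucose_polymer_mass(dp: int) -> float:
    """Molecular mass of a glucose polymer of degree of polymerisation dp:
    dp glucose units (180 each) minus one water (18) per glycosidic bond."""
    return 180 * dp - 18 * (dp - 1)


def dextrose_equivalent(dp: int) -> float:
    """DE = 100 * (mass of dextrose / mass of the polymer)."""
    return 100 * 180 / glucose_polymer_mass(dp)


if __name__ == "__main__":
    print(round(dextrose_equivalent(1), 1))  # 100.0 -> dextrose itself
    print(round(dextrose_equivalent(2), 1))  # 52.6  -> maltose; the passage rounds this to 52
```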
An instance has already been listed in the first table for real Lie algebras. See also Classification of low-dimensional real Lie algebras Simple Lie group#Full classification" https://en.wikipedia.org/wiki/Compositional%20domain,"A compositional domain in genetics is a region of DNA with a distinct guanine (G) and cytosine (C) G-C and C-G content (collectively GC content). The homogeneity of compositional domains is compared to that of the chromosome on which they reside. As such, compositional domains can be homogeneous or nonhomogeneous domains. Compositionally homogeneous domains that are sufficiently long (= 300 kb) are termed isochores or isochoric domains. The compositional domain model was proposed as an alternative to the isochoric model. The isochore model was proposed by Bernardi and colleagues to explain the observed non-uniformity of genomic fragments in the genome. However, recent sequencing of complete genomic data refuted the isochoric model. Its main predictions were: GC content of the third codon position (GC3) of protein coding genes is correlated with the GC content of the isochores embedding the corresponding genes. This prediction was found to be incorrect. GC3 could not predict the GC content of nearby sequences. The genome organization of warm-blooded vertebrates is a mosaic of isochores. This prediction was rejected by many studies that used the complete human genome data. The genome organization of cold-blooded vertebrates is characterized by low GC content levels and lower compositional heterogeneity. This prediction was disproved by finding high and low GC content domains in fish genomes. The compositional domain model describes the genome as a mosaic of short and long homogeneous and nonhomogeneous domains. The composition and organization of the domains were shaped by different evolutionary processes that either fused or broke down the domains. This genomic organization model was confirmed in many new genomic studies of cow, honeybee, sea urchin, body louse, Nasonia, beetle, and ant genomes. The human genome was described as consisting of a mixture of compositionally nonhomogeneous domains with numerous short compositionally homogeneous domains and relativ" https://en.wikipedia.org/wiki/List%20of%20polygons,"In geometry, a polygon is traditionally a plane figure that is bounded by a finite chain of straight line segments closing in a loop to form a closed chain. These segments are called its edges or sides, and the points where two of the edges meet are the polygon's vertices (singular: vertex) or corners. The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning ""many-angled"". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions, although the regular forms trigon, tetragon, and enneagon are sometimes encountered as well. Greek numbers Polygons are primarily named by prefixes from Ancient Greek numbers. Systematic polygon names To construct the name of a polygon with more than 20 and fewer than 100 edges, combine the prefixes as follows. The ""kai"" connector is not included by some authors. Extending the system up to 999 is expressed with these prefixes; the names over 99 no longer correspond to how they are actually expressed in Greek. 
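The compositional-domain excerpt above is built on GC content, the fraction of G and C bases in a stretch of DNA. As a minimal, hedged illustration (the toy sequence and window size are arbitrary, and this is not the statistical segmentation used in the cited genome studies), here is a sliding-window GC profile that makes compositionally distinct regions visible:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)


def gc_profile(seq: str, window: int):
    """GC content in consecutive non-overlapping windows -- a crude way to
    see compositionally distinct regions (domains) along a sequence."""
    return [gc_content(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]


if __name__ == "__main__":
    dna = "ATATATATAT" + "GCGCGCGGCC" + "ATGCATGCAT"  # AT-rich, GC-rich, mixed
    print(gc_profile(dna, window=10))                # [0.0, 1.0, 0.4]
```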
List of n-gons by Greek numerical prefixes See also Platonic solid Dice List of polygons, polyhedra and polytopes Circle Ellipses Shapes" https://en.wikipedia.org/wiki/List%20of%20International%20Mathematical%20Olympiads,"The first of the International Mathematical Olympiads (IMOs) was held in Romania in 1959. The oldest of the International Science Olympiads, the IMO has since been held annually, except in 1980. That year, the competition initially planned to be held in Mongolia was cancelled due to the Soviet invasion of Afghanistan. Because the competition was initially founded for Eastern European countries participating in the Warsaw Pact, under the influence of the Eastern Bloc, the earlier IMOs were hosted only in Eastern European countries, gradually spreading to other nations. Sources differ about the cities hosting some of the early IMOs and the exact dates when they took place. The first IMO was held in Romania in 1959. Seven countries entered – Bulgaria, Czechoslovakia, East Germany, Hungary, Poland, Romania and the Soviet Union – with the hosts finishing as the top-ranked nation. The number of participating countries has since risen: 14 countries took part in 1969, 50 in 1989, and 104 in 2009. North Korea is the only country to have been caught cheating, resulting in its disqualification at the 32nd IMO in 1991 and the 51st IMO in 2010. In January 2011, Google gave €1 million to the IMO organization to help cover the costs of the events from 2011 to 2015. List of Olympiads See also Asian Pacific Mathematics Olympiad Provincial Mathematical Olympiad List of mathematics competitions List of International Mathematical Olympiad participants Notes" https://en.wikipedia.org/wiki/Existence%20theorem,"In mathematics, an existence theorem is a theorem which asserts the existence of a certain object. It might be a statement which begins with the phrase ""there exist(s)"", or it might be a universal statement whose last quantifier is existential (e.g., ""for all , , ... there exist(s) ...""). In the formal terms of symbolic logic, an existence theorem is a theorem with a prenex normal form involving the existential quantifier, even though in practice, such theorems are usually stated in standard mathematical language. For example, the statement that the sine function is continuous everywhere, or any theorem written in big O notation, can be considered as theorems which are existential by nature—since the quantification can be found in the definitions of the concepts used. A controversy that goes back to the early twentieth century concerns the issue of purely theoretic existence theorems, that is, theorems which depend on non-constructive foundational material such as the axiom of infinity, the axiom of choice or the law of excluded middle. Such theorems provide no indication as to how to construct (or exhibit) the object whose existence is being claimed. From a constructivist viewpoint, such approaches are not viable as it lends to mathematics losing its concrete applicability, while the opposing viewpoint is that abstract methods are far-reaching, in a way that numerical analysis cannot be. 'Pure' existence results In mathematics, an existence theorem is purely theoretical if the proof given for it does not indicate a construction of the object whose existence is asserted. Such a proof is non-constructive, since the whole approach may not lend itself to construction. 
In terms of algorithms, purely theoretical existence theorems bypass all algorithms for finding what is asserted to exist. These are to be contrasted with the so-called ""constructive"" existence theorems, which many constructivist mathematicians working in extended logics (such as intuitionistic logic) b" https://en.wikipedia.org/wiki/Standard%20Model,"The Standard Model of particle physics is the theory describing three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifying all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy. Although the Standard Model is believed to be theoretically self-consistent and has demonstrated some success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. The development of the Standard Model was driven by theoretical and experimental particle physicists alike. The Standard Model is a paradigm of a quantum field theory for theorists, exhibiting a wide range of phenomena, including spontaneous symmetry breaking, anomalies, and non-perturbative behavior. It is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the " https://en.wikipedia.org/wiki/SCSI%20RDMA%20Protocol,"In computing the SCSI RDMA Protocol (SRP) is a protocol that allows one computer to access SCSI devices attached to another computer via remote direct memory access (RDMA). The SRP protocol is also known as the SCSI Remote Protocol. The use of RDMA makes higher throughput and lower latency possible than what is generally available through e.g. the TCP/IP communication protocol. Though the SRP protocol has been designed to use RDMA networks efficiently, it is also possible to implement the SRP protocol over networks that do not support RDMA. History SRP was published as an ANSI standard (ANSI INCITS 365-2002) in 2002 and renewed in 2007 and 2019. Related Protocols As with the ISCSI Extensions for RDMA (iSER) communication protocol, there is the notion of a target (a system that stores the data) and an initiator (a client accessing the target) with the target initiating data transfers. In other words, when an initiator writes data to a target, the target executes an RDMA read to fetch the data from the initiator and when a user issues a SCSI read command, the target sends an RDMA write to the initiator. 
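The SRP excerpt above ends by noting the direction inversion RDMA introduces: a SCSI write from the initiator makes the target issue an RDMA read, while a SCSI read makes the target issue an RDMA write. The toy sketch below illustrates only that dispatch rule; the class, method names, and in-memory "storage" are invented, and this is in no way an implementation of the SRP wire protocol.

```python
class ToyTarget:
    """Illustrates only the RDMA direction rule described in the passage."""

    def __init__(self):
        self.storage = {}

    def handle(self, command: str, lba: int, initiator_buffer: dict) -> str:
        if command == "WRITE":
            # SCSI WRITE: the target pulls the data with an RDMA *read*.
            self.storage[lba] = initiator_buffer["data"]      # stand-in for RDMA read
            return "rdma_read from initiator"
        if command == "READ":
            # SCSI READ: the target pushes the data with an RDMA *write*.
            initiator_buffer["data"] = self.storage.get(lba)  # stand-in for RDMA write
            return "rdma_write to initiator"
        raise ValueError(command)


if __name__ == "__main__":
    target, buf = ToyTarget(), {"data": b"payload"}
    print(target.handle("WRITE", 7, buf))               # rdma_read from initiator
    buf["data"] = None
    print(target.handle("READ", 7, buf), buf["data"])   # rdma_write to initiator b'payload'
```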
While the SRP protocol is easier to implement than the iSER protocol, iSER offers more management functionality, e.g. the target discovery infrastructure enabled by the iSCSI protocol. Performance Bandwidth and latency of storage targets supporting the SRP or the iSER protocol should be similar. On Linux, there are two SRP and two iSER storage target implementations available that run inside the kernel (SCST and LIO) and an iSER storage target implementation that runs in user space (STGT). Measurements have shown that the SCST SRP target has a lower latency and a higher bandwidth than the STGT iSER target. This is probably because the RDMA communication overhead is lower for a component implemented in the Linux kernel than for a user space Linux process, and not because of protocol differences. Implementations In order to use the SRP protocol, an SRP initiato" https://en.wikipedia.org/wiki/BrainChip,"BrainChip (ASX:BRN, OTCQX:BRCHF) is an Australia-based technology company, founded in 2004 by Peter Van Der Made, that specializes in developing advanced artificial intelligence (AI) and machine learning (ML) hardware. The company's primary products are the MetaTF development environment, which allows the training and deployment of spiking neural networks (SNN), and the AKD1000 neuromorphic processor, a hardware implementation of their spiking neural network system. BrainChip's technology is based on a neuromorphic computing architecture, which attempts to mimic the way the human brain works. The company is a part of Intel Foundry Services and Arm AI partnership. History Australian mining company Aziana acquired BrainChip in March 2015. Later, via a reverse merger of the now dormant Aziana in September 2015 BrainChip was put on the Australian Stock Exchange (ASX), and van der Made started commercializing his original idea for artificial intelligence processor hardware. In 2016, the company appointed former Exar CEO Louis Di Nardo as CEO; Van Der Made then took the position of CTO. In October 2021, the company announced that it was taking orders for its Akida AI Processor Development Kits, and in January 2022, that it was taking orders for its Akida AI Processor PCIe boards. In April 2022, BrainChip partnered with NVISO to provide collaboration with applications and technologies. In November 2022, BrainChip added the Rochester Institute of Technology to its University AI accelerator program. The next month, BrainChip was a part of Intel Foundry Services. In January 2023, Edge Impulse announced support for BrainChip's AKD processor. MetaTF The MetaTF software is designed to work with a variety of image, video, and sensor data, and is intended to be implemented in a range of applications, including security, surveillance, autonomous vehicles, and industrial automation. The software uses Python to create spiking neural networks (or convert other neural networks to " https://en.wikipedia.org/wiki/Google%20Silicon%20Initiative,"The Google Open Silicon Initiative is an initiative launched by the Google Hardware Toolchains team to democratize access to custom silicon design. Google has partnered with SkyWater Technology and GlobalFoundries to open-source their Process Design Kits for 180nm, 130nm and 90nm process. This initiative provides free software tools for chip designers to create, verify and test virtual chip circuit designs before they are physically produced in factories. 
The aim of the initiative is to reduce the cost of chip designs and production, which will benefit DIY enthusiasts, researchers, universities, and chip startups. The program has gained more partners, including the US Department of Defense, which injected $15 million in funding to SkyWater, one of the manufacturers supporting the program." https://en.wikipedia.org/wiki/Food%20quality,"Food quality is a concept often based on the organoleptic characteristics (e.g., taste, aroma, appearance) and nutritional value of food. Producers reducing potential pathogens and other hazards through food safety practices is another important factor in gauging standards. A food's origin, and even its branding, can play a role in how consumers perceive the quality of products. Sensory Consumer acceptability of foods is typically based upon flavor and texture, as well as its color and smell. Safety The International Organization for Standardization identifies requirements for a producer's food safety management system, including the processes and procedures a company must follow to control hazards and promote safe products, through ISO 22000. Federal and state level departments, specifically The Food and Drug Administration, are responsible for promoting public health by, among other things, ensuring food safety. Food quality in the United States is enforced by the Food Safety Act 1990. The European Food Safety Authority provides scientific advice and communicates on risks associated with the food chain on the continent. There are many existing international quality institutes testing food products in order to indicate to all consumers which are higher quality products. Founded in 1961 in Brussels, The international Monde Selection quality award is the oldest in evaluating food quality. The judgements are based on the following areas: taste, health, convenience, labelling, packaging, environmental friendliness and innovation. As many consumers rely on manufacturing and processing standards, the Institute Monde Selection takes into account the European Food Law. Food quality in the United States is enforced by the Food Safety Act 1990. Members of the public complain to trading standards professionals, [specify] who submit complaint samples and also samples used to routinely monitor the food marketplace to public analysts. Public analysts carry out scientific ana" https://en.wikipedia.org/wiki/Probability%20and%20statistics,"Probability and statistics are two closely related fields in mathematics, sometimes combined for academic purposes. They are covered in several articles: Probability Statistics Glossary of probability and statistics Notation in probability and statistics Timeline of probability and statistics" https://en.wikipedia.org/wiki/Polynomial%20Wigner%E2%80%93Ville%20distribution,"In signal processing, the polynomial Wigner–Ville distribution is a quasiprobability distribution that generalizes the Wigner distribution function. It was proposed by Boualem Boashash and Peter O'Shea in 1994. Introduction Many signals in nature and in engineering applications can be modeled as , where is a polynomial phase and . For example, it is important to detect signals of an arbitrary high-order polynomial phase. However, the conventional Wigner–Ville distribution have the limitation being based on the second-order statistics. Hence, the polynomial Wigner–Ville distribution was proposed as a generalized form of the conventional Wigner–Ville distribution, which is able to deal with signals with nonlinear phase. 
Definition The polynomial Wigner–Ville distribution is defined as where denotes the Fourier transform with respect to , and is the polynomial kernel given by where is the input signal and is an even number. The above expression for the kernel may be rewritten in symmetric form as The discrete-time version of the polynomial Wigner–Ville distribution is given by the discrete Fourier transform of where and is the sampling frequency. The conventional Wigner–Ville distribution is a special case of the polynomial Wigner–Ville distribution with Example One of the simplest generalizations of the usual Wigner–Ville distribution kernel can be achieved by taking . The set of coefficients and must be found to completely specify the new kernel. For example, we set The resulting discrete-time kernel is then given by Design of a Practical Polynomial Kernel Given a signal , where is a polynomial function, its instantaneous frequency (IF) is . For a practical polynomial kernel , the set of coefficients and should be chosen properly such that When , When Applications Nonlinear FM signals are common both in nature and in engineering applications. For example, the sonar system of some bats use hyperbolic FM and quadratic FM signals for e" https://en.wikipedia.org/wiki/Viral%20eukaryogenesis,"Viral eukaryogenesis is the hypothesis that the cell nucleus of eukaryotic life forms evolved from a large DNA virus in a form of endosymbiosis within a methanogenic archaeon or a bacterium. The virus later evolved into the eukaryotic nucleus by acquiring genes from the host genome and eventually usurping its role. The hypothesis was first proposed by Philip Bell in 2001 and was further popularized with the discovery of large, complex DNA viruses (such as Mimivirus) that are capable of protein biosynthesis. Viral eukaryogenesis has been controversial for several reasons. For one, it is sometimes argued that the posited evidence for the viral origins of the nucleus can be conversely used to suggest the nuclear origins of some viruses. Secondly, this hypothesis has further inflamed the longstanding debate over whether viruses are living organisms. Hypothesis The viral eukaryogenesis hypothesis posits that eukaryotes are composed of three ancestral elements: a viral component that became the modern nucleus; a prokaryotic cell (an archaeon according to the eocyte hypothesis) which donated the cytoplasm and cell membrane of modern cells; and another prokaryotic cell (here bacterium) that, by endocytosis, became the modern mitochondrion or chloroplast. In 2006, researchers suggested that the transition from RNA to DNA genomes first occurred in the viral world. A DNA-based virus may have provided storage for an ancient host that had previously used RNA to store its genetic information (such host is called ribocell or ribocyte). Viruses may initially have adopted DNA as a way to resist RNA-degrading enzymes in the host cells. Hence, the contribution from such a new component may have been as significant as the contribution from chloroplasts or mitochondria. Following this hypothesis, archaea, bacteria, and eukaryotes each obtained their DNA informational system from a different virus. In the original paper it was also an RNA cell at the origin of eukaryotes, but eventu" https://en.wikipedia.org/wiki/Chronux,"Chronux is an open-source software package developed for the loading, visualization and analysis of a variety of modalities / formats of neurobiological time series data. 
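The excerpt above notes that the conventional Wigner–Ville distribution is a special case of the polynomial version. As a hedged sketch of that special case only (a plain, unwindowed discrete WVD computed with NumPy, not Boashash and O'Shea's polynomial kernel; the chirp parameters are arbitrary), the code below shows the distribution's ridge tracking a linear-FM signal's instantaneous frequency:

```python
import numpy as np


def wigner_ville(x: np.ndarray) -> np.ndarray:
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Row n is the FFT over lag m of the product x[n + m] * conj(x[n - m])."""
    N, half = len(x), len(x) // 2
    acf = np.zeros((N, N), dtype=complex)          # instantaneous autocorrelation
    for n in range(N):
        for m in range(-half, half):
            if 0 <= n + m < N and 0 <= n - m < N:
                acf[n, m % N] = x[n + m] * np.conj(x[n - m])
    return np.real(np.fft.fft(acf, axis=1))        # real for analytic signals


if __name__ == "__main__":
    t = np.arange(256) / 256.0
    x = np.exp(2j * np.pi * (10 * t + 20 * t**2))  # chirp, instantaneous frequency 10 + 40 t
    W = wigner_ville(x)
    ridge = np.argmax(W[:, :128], axis=1)          # dominant frequency bin at each time
    print(ridge[32], ridge[128], ridge[224])       # 30 60 90: rises linearly with the chirp
```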
Usage of this tool enables neuroscientists to perform a variety of analysis on multichannel electrophysiological data such as LFP (local field potentials), EEG, MEG, Neuronal spike times and also on spatiotemporal data such as FMRI and dynamic optical imaging data. The software consists of a set of MATLAB routines interfaced with C libraries that can be used to perform the tasks that constitute a typical study of neurobiological data. These include local regression and smoothing, spike sorting and spectral analysis - including multitaper spectral analysis, a powerful nonparametric method to estimate power spectrum. The package also includes some GUIs for time series visualization and analysis. Chronux is GNU GPL v2 licensed (and MATLAB is proprietary). The most recent version of Chronux is version 2.12. History From 1996 to 2001, the Marine Biological Laboratory (MBL) at Woods Hole, Massachusetts, USA hosted a workshop on the analysis of neural data. This workshop then evolved into the special topics course on neuroinformatics which is held at the MBL in the last two weeks of August every year. The popularity of these pedagogical efforts and the need for wider dissemination of sophisticated time-series analysis tools in the wider neuroscience community led the Mitra Lab at Cold Spring Harbor Laboratory to initiate an NIH funded effort to develop software tools for neural data analysis in the form of the Chronux package. Chronux is the result of efforts of a number of people, the chief among whom are Hemant Bokil, Peter Andrews, Samar Mehta, Ken Harris, Catherine Loader, Partha Mitra, Hiren Maniar, Ravi Shukla, Ramesh Yadav, Hariharan Nalatore and Sumanjit Kaur. Important contributions were also made by Murray Jarvis, Bijan Pesaran and S.Gopinath. Chronux welcome contributions from interested ind" https://en.wikipedia.org/wiki/Quantization%20%28signal%20processing%29,"Quantization, in mathematics and digital signal processing, is the process of mapping input values from a large set (often a continuous set) to output values in a (countable) smaller set, often with a finite number of elements. Rounding and truncation are typical examples of quantization processes. Quantization is involved to some degree in nearly all digital signal processing, as the process of representing a signal in digital form ordinarily involves rounding. Quantization also forms the core of essentially all lossy compression algorithms. The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error. A device or algorithmic function that performs quantization is called a quantizer. An analog-to-digital converter is an example of a quantizer. Example For example, rounding a real number to the nearest integer value forms a very basic type of quantizer – a uniform one. A typical (mid-tread) uniform quantizer with a quantization step size equal to some value can be expressed as , where the notation denotes the floor function. Alternatively, the same quantizer may be expressed in terms of the ceiling function, as . (The notation denotes the ceiling function). The essential property of a quantizer is having a countable-set of possible output-values members smaller than the set of possible input values. The members of the set of output values may have integer, rational, or real values. For simple rounding to the nearest integer, the step size is equal to 1. With or with equal to any other integer value, this quantizer has real-valued inputs and integer-valued outputs. 
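The mid-tread uniform quantizer described in the quantization excerpt lost its formula during extraction; the conventional form is Q(x) = step * floor(x / step + 1/2). The sketch below uses that standard expression, treats the step size as a free parameter, and also checks the small-step noise-power approximation, commonly given as step^2 / 12, that the following passage alludes to. The input distribution and seed are arbitrary choices for the demo.

```python
import math
import random


def quantize(x: float, step: float = 1.0) -> float:
    """Mid-tread uniform quantizer: round x to the nearest multiple of `step`."""
    return step * math.floor(x / step + 0.5)


if __name__ == "__main__":
    print(quantize(3.7))             # 4.0  (step 1 -> plain rounding to integers)
    print(quantize(3.7, step=0.5))   # 3.5

    # Empirical quantization-noise power versus the standard step^2 / 12 estimate.
    random.seed(0)
    step = 0.25
    xs = [random.uniform(-10, 10) for _ in range(100_000)]
    mse = sum((x - quantize(x, step)) ** 2 for x in xs) / len(xs)
    print(round(mse, 6), round(step ** 2 / 12, 6))  # both close to 0.005208
```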
When the quantization step size (Δ) is small relative to the variation in the signal being quantized, it is relatively simple to show that the mean squared error produced by such a rounding operation will be approximately . Mean squared error is also called the quantization noise power. Adding one bit to the quantizer ha" https://en.wikipedia.org/wiki/Magnetoreception,"Magnetoreception is a sense which allows an organism to detect the Earth's magnetic field. Animals with this sense include some arthropods, molluscs, and vertebrates (fish, amphibians, reptiles, birds, and mammals). The sense is mainly used for orientation and navigation, but it may help some animals to form regional maps. Experiments on migratory birds provide evidence that they make use of a cryptochrome protein in the eye, relying on the quantum radical pair mechanism to perceive magnetic fields. This effect is extremely sensitive to weak magnetic fields, and readily disturbed by radio-frequency interference, unlike a conventional iron compass. Birds have iron-containing materials in their upper beaks. There is some evidence that this provides a magnetic sense, mediated by the trigeminal nerve, but the mechanism is unknown. Cartilaginous fish including sharks and stingrays can detect small variations in electric potential with their electroreceptive organs, the ampullae of Lorenzini. These appear to be able to detect magnetic fields by induction. There is some evidence that these fish use magnetic fields in navigation. History Biologists have long wondered whether migrating animals such as birds and sea turtles have an inbuilt magnetic compass, enabling them to navigate using the Earth's magnetic field. Until late in the 20th century, evidence for this was essentially only behavioural: many experiments demonstrated that animals could indeed derive information from the magnetic field around them, but gave no indication of the mechanism. In 1972, Roswitha and Wolfgang Wiltschko showed that migratory birds responded to the direction and inclination (dip) of the magnetic field. In 1977, M. M. Walker and colleagues identified iron-based (magnetite) magnetoreceptors in the snouts of rainbow trout. In 2003, G. Fleissner and colleagues found iron-based receptors in the upper beaks of homing pigeons, both seemingly connected to the animal's trigeminal nerve. Resear" https://en.wikipedia.org/wiki/Audio%20signal%20processing,"Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation. History The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. 
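The formula dropped from the quantization passage above is the standard noise-power approximation, MSE ≈ Δ²/12. The sketch below is a minimal numerical check (assuming NumPy is available) using the mid-tread quantizer defined earlier in the excerpt:

```python
import numpy as np

def mid_tread_quantize(x, step):
    """Mid-tread uniform quantizer: snap x to the nearest multiple of `step`."""
    return step * np.floor(x / step + 0.5)

rng = np.random.default_rng(0)
# Any input whose values are spread over many quantization levels will do here.
x = rng.uniform(-10.0, 10.0, size=200_000)

for step in (0.5, 0.25, 0.125):      # each halving of the step corresponds to one extra bit
    err = x - mid_tread_quantize(x, step)
    mse = np.mean(err ** 2)
    print(f"step={step:<6} measured MSE={mse:.6f}   predicted step**2/12={step**2 / 12:.6f}")
```

Halving the step size, which is what adding one bit to the quantizer does, divides the predicted noise power by four, i.e. lowers it by roughly 6 dB; that appears to be where the truncated sentence above was heading.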
Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music. Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for p" https://en.wikipedia.org/wiki/InfinityDB,"InfinityDB is an all-Java embedded database engine and client/server DBMS with an extended java.util.concurrent.ConcurrentNavigableMap interface (a subinterface of java.util.Map) that is deployed in handheld devices, on servers, on workstations, and in distributed settings. The design is based on a proprietary lockless, concurrent, B-tree architecture that enables client programmers to reach high levels of performance without risk of failures. A new Client/Server version 5.0 is in alpha testing, wrapping the established embedded version to provide shared access via a secure, remote server. In the embedded system, data is stored to and retrieved from a single embedded database file using the InfnityDB API that allows direct access to the variable length item spaces. Database client programmers can construct traditional relations as well as specialized models that directly satisfy the needs of the dependent application. There is no limit to the number of items, database size, or JVM size, so InfinityDB can function in both the smallest environment that provides random access storage and can be scaled to large settings. Traditional relations and specialized models can be directed to the same database file. InfinityDB can be optimized for standard relations as well as all other types of data, allowing client applications to perform at a minimum of one million operations per second on a virtual, 8-core system. AirConcurrentMap, is an in-memory map that implements the Java ConcurrentMap interface, but internally it uses a multi-core design so that its performance and memory make it the fastest Java Map when ordering is performed and it holds medium to large numbers of entries. AirConcurrentMap iteration is faster than any Java Map iterators, regardless of the specific map type. Map API InfinityDB can be accessed as an extended standard java.util.concurrent.ConcurrentNavigableMap, or via a low-level 'ItemSpace' API. The ConcurrentNavigableMap interface is a subinterf" https://en.wikipedia.org/wiki/Autotroph,"An autotroph is an organism that produces complex organic compounds (such as carbohydrates, fats, and proteins) using carbon from simple substances such as carbon dioxide, generally using energy from light (photosynthesis) or inorganic chemical reactions (chemosynthesis). They convert an abiotic source of energy (e.g. light) into energy stored in organic compounds, which can be used by other organisms (e.g. heterotrophs). 
Autotrophs do not need a living source of carbon or energy and are the producers in a food chain, such as plants on land or algae in water (in contrast to heterotrophs as consumers of autotrophs or other heterotrophs). Autotrophs can reduce carbon dioxide to make organic compounds for biosynthesis and as stored chemical fuel. Most autotrophs use water as the reducing agent, but some can use other hydrogen compounds such as hydrogen sulfide. The primary producers can convert the energy in the light (phototroph and photoautotroph) or the energy in inorganic chemical compounds (chemotrophs or chemolithotrophs) to build organic molecules, which is usually accumulated in the form of biomass and will be used as carbon and energy source by other organisms (e.g. heterotrophs and mixotrophs). The photoautotrophs are the main primary producers, converting the energy of the light into chemical energy through photosynthesis, ultimately building organic molecules from carbon dioxide, an inorganic carbon source. Examples of chemolithotrophs are some archaea and bacteria (unicellular organisms) that produce biomass from the oxidation of inorganic chemical compounds, these organisms are called chemoautotrophs, and are frequently found in hydrothermal vents in the deep ocean. Primary producers are at the lowest trophic level, and are the reasons why Earth sustains life to this day. Most chemoautotrophs are lithotrophs, using inorganic electron donors such as hydrogen sulfide, hydrogen gas, elemental sulfur, ammonium and ferrous oxide as reducing agents and hyd" https://en.wikipedia.org/wiki/Arbitrarily%20large,"In mathematics, the phrases arbitrarily large, arbitrarily small and arbitrarily long are used in statements to make clear of the fact that an object is large, small and long with little limitation or restraint, respectively. The use of ""arbitrarily"" often occurs in the context of real numbers (and its subsets thereof), though its meaning can differ from that of ""sufficiently"" and ""infinitely"". Examples The statement "" is non-negative for arbitrarily large ."" is a shorthand for: ""For every real number , is non-negative for some value of greater than ."" In the common parlance, the term ""arbitrarily long"" is often used in the context of sequence of numbers. For example, to say that there are ""arbitrarily long arithmetic progressions of prime numbers"" does not mean that there exists any infinitely long arithmetic progression of prime numbers (there is not), nor that there exists any particular arithmetic progression of prime numbers that is in some sense ""arbitrarily long"". Rather, the phrase is used to refer to the fact that no matter how large a number is, there exists some arithmetic progression of prime numbers of length at least . Similar to arbitrarily large, one can also define the phrase "" holds for arbitrarily small real numbers"", as follows: In other words: However small a number, there will be a number smaller than it such that holds. Arbitrarily large vs. sufficiently large vs. infinitely large While similar, ""arbitrarily large"" is not equivalent to ""sufficiently large"". For instance, while it is true that prime numbers can be arbitrarily large (since there are infinitely many of them due to Euclid's theorem), it is not true that all sufficiently large numbers are prime. 
As another example, the statement "" is non-negative for arbitrarily large ."" could be rewritten as: However, using ""sufficiently large"", the same phrase becomes: Furthermore, ""arbitrarily large"" also does not mean ""infinitely large"". For example, although prime number" https://en.wikipedia.org/wiki/Harmonic%20%28mathematics%29,"In mathematics, a number of concepts employ the word harmonic. The similarity of this terminology to that of music is not accidental: the equations of motion of vibrating strings, drums and columns of air are given by formulas involving Laplacians; the solutions to which are given by eigenvalues corresponding to their modes of vibration. Thus, the term ""harmonic"" is applied when one is considering functions with sinusoidal variations, or solutions of Laplace's equation and related concepts. Mathematical terms whose names include ""harmonic"" include: Projective harmonic conjugate Cross-ratio Harmonic analysis Harmonic conjugate Harmonic form Harmonic function Harmonic mean Harmonic mode Harmonic number Harmonic series Alternating harmonic series Harmonic tremor Spherical harmonics Mathematical terminology Harmonic analysis" https://en.wikipedia.org/wiki/Integrated%20fluidic%20circuit,"Integrated fluidic circuit (IFC) is a type of integrated circuit based on fluidics. See also Microfluidics Biotechnology Fluid mechanics Integrated circuits" https://en.wikipedia.org/wiki/Stein%27s%20example,"In decision theory and estimation theory, Stein's example (also known as Stein's phenomenon or Stein's paradox) is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955. An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. Formal statement The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let be a vector consisting of unknown parameters. To estimate these parameters, a single measurement is performed for each parameter , resulting in a vector of length . Suppose the measurements are known to be independent, Gaussian random variables, with mean and variance 1, i.e., . Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called ""ordinary"" decision rule can be written as , which is the maximum likelihood estimator (MLE). The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as . 
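The "arbitrarily large" examples above and on the preceding line have lost their inline formulas. Reconstructed with placeholder symbols (a function f and real variables x, y, and ε, none of which survive in the excerpt), the intended quantifier readings are the standard ones:

```latex
% "f(x) >= 0 for arbitrarily large x":
\forall y \in \mathbb{R} \;\; \exists x > y \; : \; f(x) \ge 0
% "f(x) >= 0 for sufficiently large x":
\exists y \in \mathbb{R} \;\; \forall x > y \; : \; f(x) \ge 0
% "P(x) holds for arbitrarily small positive x":
\forall \varepsilon > 0 \;\; \exists x \in (0, \varepsilon) \; : \; P(x)
```

The swap of the two quantifiers is exactly the distinction the passage draws between "arbitrarily large" and "sufficiently large".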
Surprisingly, it turns out that the ""ordinary"" decision rule is suboptimal (inadmissible) in terms of mean " https://en.wikipedia.org/wiki/Lists%20of%20physics%20equations,"In physics, there are equations in every field to relate physical quantities to each other and perform calculations. Entire handbooks of equations can only summarize most of the full subject, else are highly specialized within a certain field. Physics is derived of formulae only. General scope Variables commonly used in physics Continuity equation Constitutive equation Specific scope Defining equation (physical chemistry) List of equations in classical mechanics Table of thermodynamic equations List of equations in wave theory List of relativistic equations List of equations in fluid mechanics List of electromagnetism equations List of equations in gravitation List of photonics equations List of equations in quantum mechanics List of equations in nuclear and particle physics See also List of equations Operator (physics) Laws of science Units and nomenclature Physical constant Physical quantity SI units SI derived unit SI electromagnetism units List of common physics notations" https://en.wikipedia.org/wiki/Node%20%28physics%29,"A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an anti-node, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes. Explanation Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string. In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other. In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio (SWR), the ratio of the amplitude of the wave at the antinode to the amplitude at the node. In resonance of a two dimensional surface or membrane, such as a drumhead or vibrating metal plate, the nodes become nodal lines, lines on the surface where the surface is " https://en.wikipedia.org/wiki/Heartbeat%20%28computing%29,"In computer science, a heartbeat is a periodic signal generated by hardware or software to indicate normal operation or to synchronize other parts of a computer system. 
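The inadmissibility claim that opens this line (continuing the Stein's-example excerpt) can be seen in a short simulation. The sketch below is a minimal illustration, not the article's own derivation: it compares the ordinary rule, which estimates each parameter by its own measurement, with the James–Stein shrinkage estimator (1 − (d−2)/‖x‖²)·x, the classic combined estimator. That estimator is not named in the excerpt, so treat it as assumed background.

```python
import numpy as np

rng = np.random.default_rng(1)
d, trials = 10, 20_000                     # the paradox requires d >= 3
theta = rng.normal(size=d)                 # fixed but "unknown" parameter vector

x = theta + rng.normal(size=(trials, d))   # one unit-variance Gaussian measurement per parameter

mle = x                                                        # ordinary rule: use each measurement as-is
shrink = 1.0 - (d - 2) / np.sum(x ** 2, axis=1, keepdims=True)
james_stein = shrink * x                                       # shrink all coordinates jointly toward zero

risk_mle = np.mean(np.sum((mle - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((james_stein - theta) ** 2, axis=1))
print(f"total MSE, ordinary rule : {risk_mle:.3f}  (theoretical value is d = {d})")
print(f"total MSE, James-Stein   : {risk_js:.3f}  (lower on average)")
```

With d = 10 the ordinary rule's total mean squared error is 10 by construction, while the combined estimate comes out strictly lower on average, even though the ten parameters are unrelated to one another.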
Heartbeat mechanism is one of the common techniques in mission critical systems for providing high availability and fault tolerance of network services by detecting the network or systems failures of nodes or daemons which belongs to a network cluster—administered by a master server—for the purpose of automatic adaptation and rebalancing of the system by using the remaining redundant nodes on the cluster to take over the load of failed nodes for providing constant services. Usually a heartbeat is sent between machines at a regular interval in the order of seconds; a heartbeat message. If the endpoint does not receive a heartbeat for a time—usually a few heartbeat intervals—the machine that should have sent the heartbeat is assumed to have failed. Heartbeat messages are typically sent non-stop on a periodic or recurring basis from the originator's start-up until the originator's shutdown. When the destination identifies a lack of heartbeat messages during an anticipated arrival period, the destination may determine that the originator has failed, shutdown, or is generally no longer available. Heartbeat protocol A heartbeat protocol is generally used to negotiate and monitor the availability of a resource, such as a floating IP address, and the procedure involves sending network packets to all the nodes in the cluster to verify its reachability. Typically when a heartbeat starts on a machine, it will perform an election process with other machines on the heartbeat network to determine which machine, if any, owns the resource. On heartbeat networks of more than two machines, it is important to take into account partitioning, where two halves of the network could be functioning but not able to communicate with each other. In a situation such as this, it is important that the resource is only owned by o" https://en.wikipedia.org/wiki/Switched%20Multi-megabit%20Data%20Service,"Switched Multi-megabit Data Service (SMDS) was a connectionless service used to connect LANs, MANs and WANs to exchange data, in early 1990s. In Europe, the service was known as Connectionless Broadband Data Service (CBDS). SMDS was specified by Bellcore, and was based on the IEEE 802.6 metropolitan area network (MAN) standard, as implemented by Bellcore, and used cell relay transport, Distributed Queue Dual Bus layer-2 switching arbitrator, and standard SONET or G.703 as access interfaces. It is a switching service that provides data transmission in the range between 1.544 Mbit/s (T1 or DS1) to 45 Mbit/s (T3 or DS3). SMDS was developed by Bellcore as an interim service until Asynchronous Transfer Mode matured. SMDS was notable for its initial introduction of the 53-byte cell and cell switching approaches, as well as the method of inserting 53-byte cells onto G.703 and SONET. In the mid-1990s, SMDS was replaced, largely by Frame Relay." https://en.wikipedia.org/wiki/Lipidology,"Lipidology is the scientific study of lipids. Lipids are a group of biological macromolecules that have a multitude of functions in the body. Clinical studies on lipid metabolism in the body have led to developments in therapeutic lipidology for disorders such as cardiovascular disease. History Compared to other biomedical fields, lipidology was long-neglected as the handling of oils, smears, and greases was unappealing to scientists and lipid separation was difficult. It was not until 2002 that lipidomics, the study of lipid networks and their interaction with other molecules, appeared in the scientific literature. 
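As a minimal sketch of the failure-detection logic described in the heartbeat passage above (all names are hypothetical; real cluster managers add elections, fencing, and an actual network transport), the monitor below marks a node as failed once no heartbeat has arrived for a few heartbeat intervals:

```python
import time

HEARTBEAT_INTERVAL = 1.0      # seconds between heartbeats
FAILURE_THRESHOLD = 3         # missed intervals before declaring failure

class HeartbeatMonitor:
    """Tracks the last heartbeat seen from each node and flags stale ones."""
    def __init__(self):
        self.last_seen = {}

    def record_heartbeat(self, node_id):
        # Called whenever a heartbeat message arrives from `node_id`.
        self.last_seen[node_id] = time.monotonic()

    def failed_nodes(self):
        # A node is presumed failed if its heartbeat is overdue by several intervals.
        deadline = time.monotonic() - FAILURE_THRESHOLD * HEARTBEAT_INTERVAL
        return [node for node, seen in self.last_seen.items() if seen < deadline]

# Usage sketch: node "a" keeps checking in, node "b" goes silent.
monitor = HeartbeatMonitor()
monitor.record_heartbeat("a")
monitor.record_heartbeat("b")
time.sleep(3.5)                    # simulate 3.5 intervals with no traffic from "b"
monitor.record_heartbeat("a")      # "a" checks in again in time
print("presumed failed:", monitor.failed_nodes())   # expected: ['b']
```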
Attention to the field was bolstered by the introduction of chromatography, spectrometry, and various forms of spectroscopy to the field, allowing lipids to be isolated and analyzed. The field was further popularized following the cytologic application of the electron microscope, which led scientists to find that many metabolic pathways take place within, along, and through the cell membrane - the properties of which are strongly influenced by lipid composition. Clinical lipidology The Framingham Heart Study and other epidemiological studies have found a correlation between lipoproteins and cardiovascular disease (CVD). Lipoproteins are generally a major target of study in lipidology since lipids are transported throughout the body in the form of lipoproteins. A class of lipids known as phospholipids help make up what is known as lipoproteins, and a type of lipoprotein is called high density lipoprotein (HDL). A high concentration of high density lipoproteins-cholesterols (HDL-C) have what is known as a vasoprotective effect on the body, a finding that correlates with an enhanced cardiovascular effect. There is also a correlation between those with diseases such as chronic kidney disease, coronary artery disease, or diabetes mellitus and the possibility of low vasoprotective effect from HDL. Another factor of CVD that is often overlooked involves the" https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20symbols,"A mathematical symbol is a figure or a combination of figures that is used to represent a mathematical object, an action on mathematical objects, a relation between mathematical objects, or for structuring the other symbols that occur in a formula. As formulas are entirely constituted with symbols of various types, many symbols are needed for expressing all mathematics. The most basic symbols are the decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), and the letters of the Latin alphabet. The decimal digits are used for representing numbers through the Hindu–Arabic numeral system. Historically, upper-case letters were used for representing points in geometry, and lower-case letters were used for variables and constants. Letters are used for representing many other sorts of mathematical objects. As the number of these sorts has remarkably increased in modern mathematics, the Greek alphabet and some Hebrew letters are also used. In mathematical formulas, the standard typeface is italic type for Latin letters and lower-case Greek letters, and upright type for upper case Greek letters. For having more symbols, other typefaces are also used, mainly boldface , script typeface (the lower-case script face is rarely used because of the possible confusion with the standard face), German fraktur , and blackboard bold (the other letters are rarely used in this face, or their use is unconventional). The use of Latin and Greek letters as symbols for denoting mathematical objects is not described in this article. For such uses, see Variable (mathematics) and List of mathematical constants. However, some symbols that are described here have the same shape as the letter from which they are derived, such as and . These letters alone are not sufficient for the needs of mathematicians, and many other symbols are used. Some take their origin in punctuation marks and diacritics traditionally used in typography; others by deforming letter forms, as in the cases of and . 
Others, such as" https://en.wikipedia.org/wiki/Jorge%20Luis%20Borges%20and%20mathematics,"Jorge Luis Borges and mathematics concerns several modern mathematical concepts found in certain essays and short stories of Argentinian author Jorge Luis Borges (1899-1986), including concepts such as set theory, recursion, chaos theory, and infinite sequences, although Borges' strongest links to mathematics are through Georg Cantor's theory of infinite sets, outlined in ""The Doctrine of Cycles"" (La doctrina de los ciclos). Some of Borges' most popular works such as ""The Library of Babel"" (La Biblioteca de Babel), ""The Garden of Forking Paths"" (El Jardín de Senderos que se Bifurcan), ""The Aleph"" (El Aleph), an allusion to Cantor's use of the Hebrew letter aleph () to denote cardinality of transfinite sets, and ""The Approach to Al-Mu'tasim"" (El acercamiento a Almotásim) illustrate his use of mathematics. According to Argentinian mathematician Guillermo Martínez, Borges at least had a knowledge of mathematics at the level of first courses in algebra and analysis at a university – covering logic, paradoxes, infinity, topology and probability theory. He was also aware of the contemporary debates on the foundations of mathematics. Infinity and cardinality His 1939 essay ""Avatars of the Tortoise"" (Avatares de la Tortuga) is about infinity, and he opens by describing the book he would like to write on infinity: “five or seven years of metaphysical, theological, and mathematical training would prepare me (perhaps) for properly planning that book.” In Borges' 1941 story, ""The Library of Babel"", the narrator declares that the collection of books of a fixed number of orthographic symbols and pages is unending. However, since the permutations of twenty-five orthographic symbols is finite, the library has to be periodic and self-repeating. In his 1975 short story ""The Book of Sand"" (El Libro de Arena), he deals with another form of infinity; one whose elements are a dense set, that is, for any two elements, we can always find another between them. This concept was a" https://en.wikipedia.org/wiki/BasicX,"BasicX is a free programming language designed specifically for NetMedia's BX-24 microcontroller and based on the BASIC programming language. It is used in the design of robotics projects such as the Robodyssey Systems Mouse robot. Further reading Odom, Chris D. BasicX and Robotics. Robodyssey Systems LLC, External links NetMedia Home Page BasicX Free Downloads Sample Code , programmed in BasicX Videos, Sample Code, and Tutorials from the author of BasicX and Robotics BASIC compilers Embedded systems" https://en.wikipedia.org/wiki/List%20of%20works%20by%20Nikolay%20Bogolyubov,"List of some published works of Nikolay Bogolyubov in chronological order: 1924 N. N. Bogolyubov (1924). On the behavior of solutions of linear differential equations at infinity (). 1934 1937 N. N. Bogoliubov and N. M. Krylov (1937). ""La theorie generalie de la mesure dans son application a l'etude de systemes dynamiques de la mecanique non-lineaire"" (in French). Ann. Math. II 38: 65–113. Zbl. 16.86. 1945 1946 N. N. Bogoliubov (1946). ""Kinetic Equations"" (in Russian). Journal of Experimental and Theoretical Physics 16 (8): 691–702. N. N. Bogoliubov (1946). ""Kinetic Equations"" (in English). Journal of Physics 10 (3): 265–274. 1947 N. N. Bogoliubov, K. P. Gurov (1947). ""Kinetic Equations in Quantum Mechanics"" (in Russian). Journal of Experimental and Theoretical Physics 17 (7): 614–628. N. N. Bogoliubov (1947). 
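The finiteness argument in the Borges passage above can be made concrete. Using the figures usually cited for "The Library of Babel" (410 pages per book, 40 lines per page, 80 characters per line, 25 orthographic symbols; these numbers come from the story, not from this excerpt), the number of distinct books is enormous but finite:

```python
import math

symbols = 25
chars_per_book = 410 * 40 * 80     # pages x lines x characters per line
# The exact count is symbols ** chars_per_book, a finite integer; we only report its size.
digits = math.floor(chars_per_book * math.log10(symbols)) + 1

print(f"characters per book : {chars_per_book:,}")
print(f"distinct books      : 25**{chars_per_book:,} (finite), roughly a {digits:,}-digit number")
```

Because this count is finite, an unending library would have to repeat itself, which is the narrator's conclusion the passage summarizes.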
""К теории сверхтекучести"" (in Russian). Известия АН СССР, физика, 1947, 11, № 1, 77. N. N. Bogoliubov (1947). ""On the Theory of Superfluidity"" (in English). Journal of Physics 11 (1): 23–32. 1948 N. N. Bogoliubov (1948). ""Equations of Hydrodynamics in Statistical Mechanics"" (in Ukrainian). Sbornik Trudov Instituta Matematiki AN USSR 10: 41—59. 1949 N. N. Bogoliubov (1967—1970): Lectures on Quantum Statistics. Problems of Statistical Mechanics of Quantum Systems. New York, Gordon and Breach. 1955 1957 (1st edition) (3rd edition) N. N. Bogoliubov, O. S. Parasyuk (1957). ""Uber die Multiplikation der Kausalfunktionen in der Quantentheorie der Felder"" (in German). Acta Mathematica 97: 227–266. . 1958 N. N. Bogoliubov (1958). On a New Method in the Theory of Superconductivity. Journal of Experimental and Theoretical Physics 34 (1): 58. 1965 N. N. Bogolubov, B. V. Struminsky, A. N. Tavkhelidze (1965). On composite models in the theory of elementary particles. JINR Preprint D-1968, Dubna. External links Complete list Mathematics-related lists Bibliographies by writer" https://en.wikipedia.org/wiki/Agarose%20gel%20electrophoresis,"Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field to move the charged molecules through an agarose matrix, and the biomolecules are separated by size in the agarose gel matrix. Agarose gel is easy to cast, has relatively fewer charged groups, and is particularly suitable for separating DNA of size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7–2% dissolved in a suitable electrophoresis buffer. Properties of agarose gel Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling temperature, depending on the sources, agarose gel has a gelling temperature of 35–42 °C and a melting temperature of 85–95 °C. Low-melting and low-gelling agaroses made through chemical modifications are also available. Agarose gel has large pore size and good gel strength, making it suitable as an anticonvection medium for the electrophoresis of DNA and large protein molecules. The pore size of a 1% gel has been estimated from 100 nm to 200–500 nm, and its gel strength allows gels as dilute " https://en.wikipedia.org/wiki/Ishango%20bone,"The Ishango bone, discovered at the ""Fisherman Settlement"" of Ishango in the Democratic Republic of Congo, is a bone tool and possible mathematical device that dates to the Upper Paleolithic era. The curved bone is dark brown in color, about 10 centimeters in length, and features a sharp piece of quartz affixed to one end, perhaps for engraving. 
Because the bone has been narrowed, scraped, polished, and engraved to a certain extent, it is no longer possible to determine what animal the bone belonged to, although it is assumed to belong to a mammal. The ordered engravings have led many to speculate the meaning behind these marks, including interpretations like mathematical significance or astrological relevance. It is thought by some to be a tally stick, as it features a series of what has been interpreted as tally marks carved in three columns running the length of the tool, though it has also been suggested that the scratches might have been to create a better grip on the handle or for some other non-mathematical reason. Others argue that the marks on the object are non-random and that it was likely a kind of counting tool and used to perform simple mathematical procedures. Other speculations include the engravings on the bone serving as a lunar calendar. Dating to 20,000 years before present, it is regarded as the oldest mathematical tool to humankind, with the possible exception of the approximately 40,000-year-old Lebombo bone from southern Africa. History Archaeological discovery The Ishango bone was found in 1950 by Belgian Jean de Heinzelin de Braucourt while exploring what was then the Belgian Congo. It was discovered in the area of Ishango near the Semliki River. Lake Edward empties into the Semliki which forms part of the headwaters of the Nile River (now on the border between modern-day Uganda and D.R. Congo). Some archaeologists believe the prior inhabitants of Ishango were a ""pre-sapiens species"". However, the most recent inhabitants, who gave the a" https://en.wikipedia.org/wiki/Millennium%20Mathematics%20Project,"The Millennium Mathematics Project (MMP) was set up within the University of Cambridge in England as a joint project between the Faculties of Mathematics and Education in 1999. The MMP aims to support maths education for pupils of all abilities from ages 5 to 19 and promote the development of mathematical skills and understanding, particularly through enrichment and extension activities beyond the school curriculum, and to enhance the mathematical understanding of the general public. The project was directed by John Barrow from 1999 until September 2020. Programmes The MMP includes a range of complementary programmes: The NRICH website publishes free mathematics education enrichment material for ages 5 to 19. NRICH material focuses on problem-solving, building core mathematical reasoning and strategic thinking skills. In the academic year 2004/5 the website attracted over 1.7 million site visits (more than 49 million hits). Plus Magazine is a free online maths magazine for age 15+ and the general public. In 2004/5, Plus attracted over 1.3 million website visits (more than 31 million hits). The website won the Webby award in 2001 for the best Science site on the Internet. The Motivate video-conferencing project links university mathematicians and scientists to primary and secondary schools in areas of the UK from Jersey and Belfast to Glasgow and inner-city London, with international links to Pakistan, South Africa, India and Singapore. The project has also developed a Hands On Maths Roadshow presenting creative methods of exploring mathematics, and in 2004 took on the running of Simon Singh's Enigma schools workshops, exploring maths through cryptography and codebreaking. Both are taken to primary and secondary schools and public venues such as shopping centres across the UK and Ireland. 
James Grime is the Enigma Project Officer and gives talks in schools and to the general public about the history and mathematics of code breaking - including the demonstration of" https://en.wikipedia.org/wiki/Bond%20Bridge,"Bond Bridge is a Wi-Fi device that communicates with infra-red or RF controlled devices, such as ceiling fans, shades, and fireplaces. These devices often come with a battery powered remote control. The bond bridge receives commands form a network port, and it forwards the commands to the remote controlled device by simulating the signals the remote control would produce. The photo shows a bond bridge in the dining room of Baywood Court, a senior community. The bond bridge controls ceiling fans in the dining room. Broadlink MR4 is a competing product." https://en.wikipedia.org/wiki/Ecological%20facilitation,"Ecological facilitation or probiosis describes species interactions that benefit at least one of the participants and cause harm to neither. Facilitations can be categorized as mutualisms, in which both species benefit, or commensalisms, in which one species benefits and the other is unaffected. This article addresses both the mechanisms of facilitation and the increasing information available concerning the impacts of facilitation on community ecology. Categories There are two basic categories of facilitative interactions: Mutualism is an interaction between species that is beneficial to both. A familiar example of a mutualism is the relationship between flowering plants and their pollinators. The plant benefits from the spread of pollen between flowers, while the pollinator receives some form of nourishment, either from nectar or the pollen itself. Commensalism is an interaction in which one species benefits and the other species is unaffected. Epiphytes (plants growing on other plants, usually trees) have a commensal relationship with their host plant because the epiphyte benefits in some way (e.g., by escaping competition with terrestrial plants or by gaining greater access to sunlight) while the host plant is apparently unaffected. Strict categorization, however, is not possible for some complex species interactions. For example, seed germination and survival in harsh environments is often higher under so-called nurse plants than on open ground. A nurse plant is one with an established canopy, beneath which germination and survival are more likely due to increased shade, soil moisture, and nutrients. Thus, the relationship between seedlings and their nurse plants is commensal. However, as the seedlings grow into established plants, they are likely to compete with their former benefactors for resources. Mechanisms The beneficial effects of species on one another are realized in various ways, including refuge from physical stress, predation, and competi" https://en.wikipedia.org/wiki/Chemosynthesis,"In biochemistry, chemosynthesis is the biological conversion of one or more carbon-containing molecules (usually carbon dioxide or methane) and nutrients into organic matter using the oxidation of inorganic compounds (e.g., hydrogen gas, hydrogen sulfide) or ferrous ions as a source of energy, rather than sunlight, as in photosynthesis. Chemoautotrophs, organisms that obtain carbon from carbon dioxide through chemosynthesis, are phylogenetically diverse. 
Groups that include conspicuous or biogeochemically important taxa include the sulfur-oxidizing Gammaproteobacteria, the Campylobacterota, the Aquificota, the methanogenic archaea, and the neutrophilic iron-oxidizing bacteria. Many microorganisms in dark regions of the oceans use chemosynthesis to produce biomass from single-carbon molecules. Two categories can be distinguished. In the rare sites where hydrogen molecules (H2) are available, the energy available from the reaction between CO2 and H2 (leading to production of methane, CH4) can be large enough to drive the production of biomass. Alternatively, in most oceanic environments, energy for chemosynthesis derives from reactions in which substances such as hydrogen sulfide or ammonia are oxidized. This may occur with or without the presence of oxygen. Many chemosynthetic microorganisms are consumed by other organisms in the ocean, and symbiotic associations between chemosynthesizers and respiring heterotrophs are quite common. Large populations of animals can be supported by chemosynthetic secondary production at hydrothermal vents, methane clathrates, cold seeps, whale falls, and isolated cave water. It has been hypothesized that anaerobic chemosynthesis may support life below the surface of Mars, Jupiter's moon Europa, and other planets. Chemosynthesis may have also been the first type of metabolism that evolved on Earth, leading the way for cellular respiration and photosynthesis to develop later. Hydrogen sulfide chemosynthesis process Giant tube worms" https://en.wikipedia.org/wiki/Whitewater%20Interactive%20System%20Development%20with%20Object%20Models,"Wisdom (Whitewater Interactive System Development with Object Models) is a software development process and method to design software-intensive interactive systems. It is based on object modelling, and focuses human-computer interaction (HCI) in order to model the software architecture of the system i.e. it is architecture-centric. The focus on HCI while being architecture-centric places Wisdom as a pioneer method within human-centered software engineering. Wisdom was conceived by Nuno Nunes and first published in the years 1999-2000 in order to close the gaps of existing software engineering methods regarding the user interface design. Notably, the Wisdom method identifies for each use case the tasks of the user, the interaction spaces of the user interface, and the system responsibilities that support that user activity, which are complemented with the data entities used in each case, completing a usable software architecture, an MVC model. The Wisdom model clarifies the relation between the human and the computer-based system, allows rationalization over the software artifacts that must be implemented, therefore facilitating effort affection for a software development team. From Wisdom, other relevant contributions were derived targeting the enhancement of software development based on the Wisdom model, such as: CanonSketch, Hydra Framework Cruz's Another relevant contribution is related to effort estimation of software development, the iUCP method, which is based in traditional UCP method leveling the estimation based on the predicted user interface design. A comparison study was carried out using both methods, revealing that there is positive effect in the usage of iUCP when compared to UCP when considering the user interface design, a recurrent situation in nowadays software systems development." 
https://en.wikipedia.org/wiki/Conway%20polyhedron%20notation,"In geometry, Conway polyhedron notation, invented by John Horton Conway and promoted by George W. Hart, is used to describe polyhedra based on a seed polyhedron modified by various prefix operations. Conway and Hart extended the idea of using operators, like truncation as defined by Kepler, to build related polyhedra of the same symmetry. For example, represents a truncated cube, and , parsed as , is (topologically) a truncated cuboctahedron. The simplest operator dual swaps vertex and face elements; e.g., a dual cube is an octahedron: . Applied in a series, these operators allow many higher order polyhedra to be generated. Conway defined the operators (ambo), (bevel), (dual), (expand), (gyro), (join), (kis), (meta), (ortho), (snub), and (truncate), while Hart added (reflect) and (propellor). Later implementations named further operators, sometimes referred to as ""extended"" operators. Conway's basic operations are sufficient to generate the Archimedean and Catalan solids from the Platonic solids. Some basic operations can be made as composites of others: for instance, ambo applied twice is the expand operation (), while a truncation after ambo produces bevel (). Polyhedra can be studied topologically, in terms of how their vertices, edges, and faces connect together, or geometrically, in terms of the placement of those elements in space. Different implementations of these operators may create polyhedra that are geometrically different but topologically equivalent. These topologically equivalent polyhedra can be thought of as one of many embeddings of a polyhedral graph on the sphere. Unless otherwise specified, in this article (and in the literature on Conway operators in general) topology is the primary concern. Polyhedra with genus 0 (i.e. topologically equivalent to a sphere) are often put into canonical form to avoid ambiguity. Operators In Conway's notation, operations on polyhedra are applied like functions, from right to left. For example, a" https://en.wikipedia.org/wiki/MPLAB%20devices,"The MPLAB series of devices are programmers and debuggers for Microchip PIC and dsPIC microcontrollers, developed by Microchip Technology. The ICD family of debuggers has been produced since the release of the first Flash-based PIC microcontrollers, and the latest ICD 3 currently supports all current PIC and dsPIC devices. It is the most popular combination debugging/programming tool from Microchip. The REAL ICE emulator is similar to the ICD, with the addition of better debugging features, and various add-on modules that expand its usage scope. The ICE is a family of discontinued in-circuit emulators for PIC and dsPIC devices, and is currently superseded by the REAL ICE. MPLAB ICD The MPLAB ICD is the first in-circuit debugger product by Microchip, and is currently discontinued and superseded by ICD 2. The ICD connected to the engineer's PC via RS-232, and connected to the device via ICSP. The ICD supported devices within the PIC16C and PIC16F families, and supported full speed execution, or single step interactive debugging. Only one hardware breakpoint was supported by the ICD. MPLAB ICD 2 The MPLAB ICD 2 is a discontinued in-circuit debugger and programmer by Microchip, and is currently superseded by ICD 3. The ICD 2 connects to the engineer's PC via USB or RS-232, and connects to the device via ICSP. 
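One way to see how the Conway operators described above compose is to track only the vertex, edge, and face counts they produce. The count transformations below are the standard ones for dual, ambo, kis, and truncate; they are not spelled out in the excerpt, so treat them as assumed background. The seed is the cube, with counts (8, 12, 6).

```python
# Element-count effect of a few Conway operators (topological counts only).
def dual(v, e, f):     return (f, e, v)              # d: swap vertices and faces
def ambo(v, e, f):     return (e, 2 * e, v + f)      # a: rectification
def kis(v, e, f):      return (v + f, 3 * e, 2 * e)  # k: raise a pyramid on every face
def truncate(v, e, f): return (2 * e, 3 * e, v + f)  # t: cut off every vertex

def apply(ops, seed):
    """Apply a Conway-style operator string right to left, e.g. 'ta' means ambo first, then truncate."""
    table = {"d": dual, "a": ambo, "k": kis, "t": truncate}
    v, e, f = seed
    for op in reversed(ops):
        v, e, f = table[op](v, e, f)
    return v, e, f

cube = (8, 12, 6)
print("tC  (truncated cube)          :", apply("t", cube))    # expected (24, 36, 14)
print("dC  (octahedron)              :", apply("d", cube))    # expected (6, 12, 8)
print("aaC (same counts as eC)       :", apply("aa", cube))   # expected (24, 48, 26), the expanded cube
print("taC (truncated cuboctahedron) :", apply("ta", cube))   # expected (48, 72, 26), i.e. bC
```

The last two lines mirror the composition facts quoted in the passage: ambo applied twice gives the expand counts, and truncation after ambo gives the bevel counts, with Euler's formula V − E + F = 2 holding throughout.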
The ICD 2 supports most PIC and dsPIC devices within the PIC10, PIC12, PIC16, PIC18, dsPIC, rfPIC and PIC32 families, and supports full speed execution, or single step interactive debugging. At breakpoints, data and program memory can be read and modified using the MPLAB IDE. The ICD 2 firmware is field upgradeable using the MPLAB IDE. The ICD 2 can be used to erase, program or reprogram PIC MCU program memory, while the device is installed on target hardware, using ICSP. Target device voltages from 2.0V to 6.0V are supported. MPLAB ICD 3 The MPLAB ICD 3 is an in-circuit debugger and programmer by Microchip, and is the latest in the ICD series. The ICD 3 " https://en.wikipedia.org/wiki/Broadcast%2C%20unknown-unicast%20and%20multicast%20traffic,"Broadcast, unknown-unicast and multicast traffic (BUM traffic) is network traffic transmitted using one of three methods of sending data link layer network traffic to a destination of which the sender does not know the network address. This is achieved by sending the network traffic to multiple destinations on an Ethernet network. As a concept related to computer networking, it includes three types of Ethernet modes: broadcast, unicast and multicast Ethernet. BUM traffic refers to that kind of network traffic that will be forwarded to multiple destinations or that cannot be addressed to the intended destination only. Overview Broadcast traffic is used to transmit a message to any reachable destination in the network without the need to know any information about the receiving party. When broadcast traffic is received by a network switch it is replicated to all ports within the respective VLAN except the one from which the traffic comes from. Unknown-unicast traffic happens when a switch receives unicast traffic intended to be delivered to a destination that is not in its forwarding information base. In this case the switch marks the frame for flooding and sends it to all forwarding ports within the respective VLAN. Forwarding this type of traffic can create unnecessary traffic that leads to poor network performance or even a complete loss of network service. This flooding of packets is known as a unicast flooding. Multicast traffic allows a host to contact a subset of hosts or devices joined into a group. This causes the message to be broadcast when no group management mechanism is present. Flooding BUM frames is required in transparent bridging and in a data center context this does not scale well causing poor performance. BUM traffic control Throttling One issue that may arise is that some network devices cannot handle high rates of broadcast, unknown-unicast or multicast traffic. In such cases, it is possible to limit the BUM traffic for specific ports in" https://en.wikipedia.org/wiki/Active%20and%20passive%20transformation,"Geometric transformations can be distinguished into two types: active or alibi transformations which change the physical position of a set of points relative to a fixed frame of reference or coordinate system (alibi meaning ""being somewhere else at the same time""); and passive or alias transformations which leave points fixed but change the frame of reference or coordinate system relative to which they are described (alias meaning ""going under a different name""). By transformation, mathematicians usually refer to active transformations, while physicists and engineers could mean either. For instance, active transformations are useful to describe successive positions of a rigid body. 
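The flooding behaviour described in the BUM-traffic passage above is easy to sketch as a toy learning switch (hypothetical class and field names; real switches also age out entries, keep per-VLAN tables, and rate-limit floods): a frame whose destination MAC is not in the forwarding table is sent out every port except the one it arrived on.

```python
class LearningSwitch:
    """Toy Ethernet switch: learns source MACs, floods unknown destinations."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                 # MAC address -> port (forwarding information base)

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn which port the sender lives on
        if dst_mac == "ff:ff:ff:ff:ff:ff" or dst_mac not in self.mac_table:
            # Broadcast or unknown unicast: flood to all ports except the ingress port.
            return sorted(self.ports - {in_port})
        return [self.mac_table[dst_mac]]    # known unicast: forward out a single port

sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive(1, "aa", "bb"))   # 'bb' unknown -> flooded to ports [2, 3, 4]
print(sw.receive(2, "bb", "aa"))   # 'aa' was learned on port 1 -> [1]
print(sw.receive(1, "aa", "bb"))   # 'bb' now learned on port 2 -> [2]
```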
On the other hand, passive transformations may be useful in human motion analysis to observe the motion of the tibia relative to the femur, that is, its motion relative to a (local) coordinate system which moves together with the femur, rather than a (global) coordinate system which is fixed to the floor. In three-dimensional Euclidean space, any proper rigid transformation, whether active or passive, can be represented as a screw displacement, the composition of a translation along an axis and a rotation about that axis. The terms active transformation and passive transformation were first introduced in 1957 by Valentine Bargmann for describing Lorentz transformations in special relativity. Example As an example, let the vector , be a vector in the plane. A rotation of the vector through an angle θ in counterclockwise direction is given by the rotation matrix: which can be viewed either as an active transformation or a passive transformation (where the above matrix will be inverted), as described below. Spatial transformations in the Euclidean space R3 In general a spatial transformation may consist of a translation and a linear transformation. In the following, the translation will be omitted, and the linear transformation will be represented by a 3×3 matrix . Active transfo" https://en.wikipedia.org/wiki/List%20of%20triangle%20inequalities,"In geometry, triangle inequalities are inequalities involving the parameters of triangles, that hold for every triangle, or for every triangle meeting certain conditions. The inequalities give an ordering of two different values: they are of the form ""less than"", ""less than or equal to"", ""greater than"", or ""greater than or equal to"". The parameters in a triangle inequality can be the side lengths, the semiperimeter, the angle measures, the values of trigonometric functions of those angles, the area of the triangle, the medians of the sides, the altitudes, the lengths of the internal angle bisectors from each angle to the opposite side, the perpendicular bisectors of the sides, the distance from an arbitrary point to another point, the inradius, the exradii, the circumradius, and/or other quantities. Unless otherwise specified, this article deals with triangles in the Euclidean plane. Main parameters and notation The parameters most commonly appearing in triangle inequalities are: the side lengths a, b, and c; the semiperimeter s = (a + b + c) / 2 (half the perimeter p); the angle measures A, B, and C of the angles of the vertices opposite the respective sides a, b, and c (with the vertices denoted with the same symbols as their angle measures); the values of trigonometric functions of the angles; the area T of the triangle; the medians ma, mb, and mc of the sides (each being the length of the line segment from the midpoint of the side to the opposite vertex); the altitudes ha, hb, and hc (each being the length of a segment perpendicular to one side and reaching from that side (or possibly the extension of that side) to the opposite vertex); the lengths of the internal angle bisectors ta, tb, and tc (each being a segment from a vertex to the opposite side and bisecting the vertex's angle); the perpendicular bisectors pa, pb, and pc of the sides (each being the length of a segment perpendicular to one side at its midpoint and reaching to one of the other sides); t" https://en.wikipedia.org/wiki/Commutative%20property,"In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. 
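The rotation matrix dropped from the active/passive-transformation example above is the standard one. Writing the vector as v = (x, y) (placeholder coordinates, since the excerpt's own symbols were lost), the active rotation by θ and its passive counterpart are:

```latex
R(\theta) =
\begin{pmatrix}
\cos\theta & -\sin\theta \\
\sin\theta & \cos\theta
\end{pmatrix},
\qquad
v' = R(\theta)\, v \quad \text{(active: the point moves)},
\qquad
v_{\text{new frame}} = R(\theta)^{-1} v = R(-\theta)\, v \quad \text{(passive: the axes move)}.
```

The inverse in the passive case is exactly the inversion the passage alludes to when it says the same matrix "will be inverted".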
It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says something like $3 + 4 = 4 + 3$ or $2 \times 5 = 5 \times 2$, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, $3 - 5 \neq 5 - 3$); such operations are not commutative, and so are referred to as noncommutative operations. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics started to become formalized. A similar property exists for binary relations; a binary relation is said to be symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric as two equal mathematical objects are equal regardless of their order. Mathematical definitions A binary operation $*$ on a set $S$ is called commutative if $x * y = y * x$ for all $x, y \in S$. In other words, an operation is commutative if every two elements commute. An operation that does not satisfy the above property is called noncommutative. One says that $x$ commutes with $y$, or that $x$ and $y$ commute under $*$, if $x * y = y * x$. That is, a specific pair of elements may commute even if the operation is (strictly) noncommutative. Examples Commutative operations Addition and multiplication are commutative in most number systems, and, in particular, between natural numbers, integers, rational numbers, real numbers and complex numbers. This is also true in every field. Addition is commutative in every vector space and in every algebra. Union and intersection are commutative operations on sets. ""And"" and ""or"" are commutative logical operations. Noncommutative operations Some noncommutative binary operations: Division, subtraction, and exponentiat" https://en.wikipedia.org/wiki/Stochastic%20resonance,"Stochastic resonance (SR) is a phenomenon in which a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise to the signal, which contains a wide spectrum of frequencies. The frequencies in the white noise corresponding to the original signal's frequencies will resonate with each other, amplifying the original signal while not amplifying the rest of the white noise – thereby increasing the signal-to-noise ratio, which makes the original signal more prominent. Further, the added white noise can be enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal. This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research. Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981, and the first application they proposed (together with Giorgio Parisi) was in the context of climate dynamics. Technical description Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. 
It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is ""sub-threshold."" For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak. Strictly sp" https://en.wikipedia.org/wiki/Sociology%20of%20food,"The sociology of food is the study of food as it relates to the history, progression, and future development of society, encompassing its production, preparation, consumption, and distribution, its medical, ritual, spiritual, ethical and cultural applications, and related environmental and labor issues. The aspect of food distribution in our society can be examined through the analysis of the changes in the food supply chain. Globalization in particular, has significant effects on the food supply chain by enabling scale effect in the food distribution industry. Food distribution Impact from scale effects Scale effects resulting from centralized acquisition purchase centres in the food supply chain favor large players such as big retailers or distributors in the food distribution market. This is due to the fact that they can utilize their strong market power and financial advantage over smaller players. Having both strong market power and greater access to the financial credit market meant that they can impose barriers to entry and cement their position in the food distribution market. This would result in a food distribution chain that is characterized by large players on one end and small players choosing niche markets to operate in on the other end. The existence of smaller players in specialized food distribution markets could be attributed to their shrinking market share and their inability to compete with the larger players due to the scale effects. Through this mechanism, globalization has displaced smaller role players. Another mechanism troubling the specialized food distribution markets is the ability of distribution chains to possess their own brand. Stores with their own brand are able to combat price wars between competitors by lowering the price of their own brand, thus making consumers more likely to purchase goods from them. Early history and culture Since the beginning of mankind, food was important simply for the purpose of nourishment. As prim" https://en.wikipedia.org/wiki/Variable%20structure%20system,"A variable structure system, or VSS, is a discontinuous nonlinear system of the form where is the state vector, is the time variable, and is a piecewise continuous function. Due to the piecewise continuity of these systems, they behave like different continuous nonlinear systems in different regions of their state space. At the boundaries of these regions, their dynamics switch abruptly. Hence, their structure varies over different parts of their state space. The development of variable structure control depends upon methods of analyzing variable structure systems, which are special cases of hybrid dynamical systems. See also Variable structure control Sliding mode control Hybrid system Nonlinear control Robust control Optimal control H-bridge – A topology that combines four switches forming the four legs of an ""H"". 
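The threshold picture in the stochastic-resonance passage above can be reproduced in a few lines of NumPy. This is a minimal sketch under assumed parameters (sub-threshold sine of amplitude 0.5, hard threshold at 1.0), not a model from the literature: the correlation between the thresholded output and the clean signal rises and then falls as the noise level grows.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
signal = 0.5 * np.sin(2 * np.pi * np.arange(n) / 500.0)   # sub-threshold input (amplitude 0.5)
threshold = 1.0                                            # the detector only fires above this level

for sigma in (0.2, 0.5, 1.0, 2.0, 4.0, 8.0):
    noisy = signal + rng.normal(scale=sigma, size=n)
    fired = (noisy > threshold).astype(float)              # 1 where the detector fires, else 0
    corr = np.corrcoef(fired, signal)[0, 1]                # how much of the signal gets through
    print(f"noise sigma = {sigma:<4}  output/signal correlation = {corr:.3f}")
```

The correlation is small when the noise is too weak for the signal ever to reach threshold, peaks at a moderate noise level, and falls again once the output is dominated by noise; the exact numbers depend on the chosen amplitude and threshold.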
Can be used to drive a motor (or other electrical device) forward or backward when only a single supply is available. Often used in actuator sliding-mode control systems. Switching amplifier – Uses switching-mode control to drive continuous outputs Delta-sigma modulation – Another (feedback) method of encoding a continuous range of values in a signal that rapidly switches between two states (i.e., a kind of specialized sliding-mode control) Pulse-density modulation – A generalized form of delta-sigma modulation Pulse-width modulation – Another modulation scheme that produces continuous motion through discontinuous switching" https://en.wikipedia.org/wiki/Zero-forcing%20precoding,"Zero-forcing (or null-steering) precoding is a method of spatial signal processing by which a multiple antenna transmitter can null the multiuser interference in a multi-user MIMO wireless communication system. When the channel state information is perfectly known at the transmitter, the zero-forcing precoder is given by the pseudo-inverse of the channel matrix. Mathematical description In a multiple antenna downlink system which comprises transmit antenna access points and single receive antenna users, such that , the received signal of user is described as where is the vector of transmitted symbols, is the noise signal, is the channel vector and is some linear precoding vector. Here is the matrix transpose, is the square root of transmit power, and is the message signal with zero mean and variance . The above signal model can be more compactly re-written as where is the received signal vector, is channel matrix, is the precoding matrix, is a diagonal power matrix, and is the transmit signal. A zero-forcing precoder is defined as a precoder where intended for user is orthogonal to every channel vector associated with users where . That is, Thus the interference caused by the signal meant for one user is effectively nullified for rest of the users via zero-forcing precoder. From the fact that each beam generated by zero-forcing precoder is orthogonal to all the other user channel vectors, one can rewrite the received signal as The orthogonality condition can be expressed in matrix form as where is some diagonal matrix. Typically, is selected to be an identity matrix. This makes the right Moore-Penrose pseudo-inverse of given by Given this zero-forcing precoder design, the received signal at each user is decoupled from each other as Quantify the feedback amount Quantify the amount of the feedback resource required to maintain at least a given throughput performance gap between zero-forcing with perfect feedback and wi" https://en.wikipedia.org/wiki/Adult,"An adult is a human or other animal that has reached full growth. The biological definition of the word means an animal reaching sexual maturity and thus capable of reproduction. In the human context, the term adult has meanings associated with social and legal concepts. In contrast to a non-adult or ""minor"", a legal adult is a person who has attained the age of majority and is therefore regarded as independent, self-sufficient, and responsible. They may also be regarded as a ""major"". The typical age of attaining legal adulthood is 18 to 21, although definition may vary by legal rights, country, and psychological development. Human adulthood encompasses psychological adult development. 
Definitions of adulthood are often inconsistent and contradictory; a person may be biologically an adult, and have adult behavior, but still be treated as a child if they are under the legal age of majority. Conversely, one may legally be an adult but possess none of the maturity and responsibility that may define an adult character. In different cultures there are events that relate passing from being a child to becoming an adult or coming of age. This often encompasses the passing a series of tests to demonstrate that a person is prepared for adulthood, or reaching a specified age, sometimes in conjunction with demonstrating preparation. Most modern societies determine legal adulthood based on reaching a legally specified age without requiring a demonstration of physical maturity or preparation for adulthood. Biological adulthood Historically and cross-culturally, adulthood has been determined primarily by the start of puberty (the appearance of secondary sex characteristics such as menstruation and the development of breasts in women, ejaculation, the development of facial hair, and a deeper voice in men, and pubic hair in both sexes). In the past, a person usually moved from the status of child directly to the status of adult, often with this shift being marked by some type of" https://en.wikipedia.org/wiki/Vagrancy%20%28biology%29,"Vagrancy is a phenomenon in biology whereby an individual animal (usually a bird) appears well outside its normal range; they are known as vagrants. The term accidental is sometimes also used. There are a number of poorly understood factors which might cause an animal to become a vagrant, including internal causes such as navigatory errors (endogenous vagrancy) and external causes such as severe weather (exogenous vagrancy). Vagrancy events may lead to colonisation and eventually to speciation. Birds In the Northern Hemisphere, adult birds (possibly inexperienced younger adults) of many species are known to continue past their normal breeding range during their spring migration and end up in areas further north (such birds are termed spring overshoots). In autumn, some young birds, instead of heading to their usual wintering grounds, take ""incorrect"" courses and migrate through areas which are not on their normal migration path. For example, Siberian passerines which normally winter in Southeast Asia are commonly found in Northwest Europe, e.g. Arctic warblers in Britain. This is reverse migration, where the birds migrate in the opposite direction to that expected (say, flying north-west instead of south-east). The causes of this are unknown, but genetic mutation or other anomalies relating to the bird's magnetic sensibilities is suspected. Other birds are sent off course by storms, such as some North American birds blown across the Atlantic Ocean to Europe. Birds can also be blown out to sea, become physically exhausted, land on a ship and end up being carried to the ship's destination. While many vagrant birds do not survive, if sufficient numbers wander to a new area they can establish new populations. Many isolated oceanic islands are home to species that are descended from landbirds blown out to sea, Hawaiian honeycreepers and Darwin's finches being prominent examples. Insects Vagrancy in insects is recorded from many groups—it is particularly well-stu" https://en.wikipedia.org/wiki/Test%20vector,"In computer science and engineering, a test vector is a set of inputs provided to a system in order to test that system. 
In software development, test vectors are a methodology of software testing and software verification and validation. Rationale In computer science and engineering, a system acts as a computable function. An example of a specific function could be where is the output of the system and is the input; however, most systems' inputs are not one-dimensional. When the inputs are multi-dimensional, we could say that the system takes the form ; however, we can generalize this equation to a general form where is the result of the system's execution, belongs to the set of computable functions, and is an input vector. While testing the system, various test vectors must be used to examine the system's behavior with differing inputs. Example For example, consider a login page with two input fields: a username field and a password field. In that case, the login system can be described as: with and , with designating login successful, and designating login failure, respectively. Making things more generic, we can suggest that the function takes input as a 2-dimensional vector and outputs a one-dimensional vector (scalar). This can be written in the following way:- with In this case, is called the input vector, and is called the output vector. In order to test the login page, it is necessary to pass some sample input vectors . In this context is called a test vector. See also Automatic test pattern generation" https://en.wikipedia.org/wiki/IBM%20System/360%20architecture,"The IBM System/360 architecture is the model independent architecture for the entire S/360 line of mainframe computers, including but not limited to the instruction set architecture. The elements of the architecture are documented in the IBM System/360 Principles of Operation and the IBM System/360 I/O Interface Channel to Control Unit Original Equipment Manufacturers' Information manuals. Features The System/360 architecture provides the following features: 16 32-bit general-purpose registers 4 64-bit floating-point registers 64-bit processor status register (PSW), which includes a 24-bit instruction address 24-bit (16 MB) byte-addressable memory space Big-endian byte/word order A standard instruction set, including fixed-point binary arithmetic and logical instructions, present on all System/360 models (except the Model 20, see below). A commercial instruction set, adding decimal arithmetic instructions, is optional on some models, as is a scientific instruction set, which adds floating-point instructions. The universal instruction set includes all of the above plus the storage protection instructions and is standard for some models. The Model 44 provides a few unique instructions for data acquisition and real-time processing and is missing the storage-to-storage instructions. However, IBM offered a Commercial Instruction Set"" feature that ran in bump storage and simulated the missing instructions. The Model 20 offers a stripped-down version of the standard instruction set, limited to eight general registers with halfword (16-bit) instructions only, plus the commercial instruction set, and unique instructions for input/output. The Model 67 includes some instructions to handle 32-bit addresses and ""dynamic address translation"", with additional privileged instructions to provide virtual memory. Memory Memory (storage) in System/360 is addressed in terms of 8-bit bytes. 
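The byte-addressed, big-endian organization just mentioned, and the larger units introduced next (halfwords and fullwords), can be illustrated with a short C sketch (an illustration, not taken from the manuals; the stored value is arbitrary): a 32-bit fullword is laid out most significant byte first at the lowest address, then read back as two halfwords and one fullword.

/* Big-endian byte addressing sketch, in the style of System/360 storage:
 * most significant byte at the lowest address.  Example value is arbitrary. */
#include <stdio.h>
#include <stdint.h>

static void store_fullword_be(uint8_t *mem, uint32_t value)
{
    mem[0] = (uint8_t)(value >> 24);  /* most significant byte, lowest address */
    mem[1] = (uint8_t)(value >> 16);
    mem[2] = (uint8_t)(value >> 8);
    mem[3] = (uint8_t)(value);
}

static uint16_t load_halfword_be(const uint8_t *mem)
{
    return (uint16_t)((mem[0] << 8) | mem[1]);
}

static uint32_t load_fullword_be(const uint8_t *mem)
{
    return ((uint32_t)mem[0] << 24) | ((uint32_t)mem[1] << 16) |
           ((uint32_t)mem[2] << 8)  |  (uint32_t)mem[3];
}

int main(void)
{
    uint8_t memory[4];
    store_fullword_be(memory, 0x12345678u);

    printf("bytes:     %02X %02X %02X %02X\n",
           (unsigned)memory[0], (unsigned)memory[1],
           (unsigned)memory[2], (unsigned)memory[3]);
    printf("halfwords: %04lX %04lX\n",
           (unsigned long)load_halfword_be(memory),
           (unsigned long)load_halfword_be(memory + 2));
    printf("fullword:  %08lX\n", (unsigned long)load_fullword_be(memory));
    return 0;
}

The program prints the bytes 12 34 56 78, the halfwords 1234 and 5678, and the fullword 12345678, showing how the same storage is viewed at each unit size.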
Various instructions operate on larger units called halfword (2 bytes), fullword (4 " https://en.wikipedia.org/wiki/Automotive%20navigation%20system,"An automotive navigation system is part of the automobile controls or a third party add-on used to find direction in an automobile. It typically uses a satellite navigation device to get its position data which is then correlated to a position on a road. When directions are needed routing can be calculated. On the fly traffic information (road closures, congestion) can be used to adjust the route. Dead reckoning using distance data from sensors attached to the drivetrain, an accelerometer, a gyroscope, and a magnetometer can be used for greater reliability, as GNSS signal loss and/or multipath can occur due to urban canyons or tunnels. Mathematically, automotive navigation is based on the shortest path problem, within graph theory, which examines how to identify the path that best meets some criteria (shortest, cheapest, fastest, etc.) between two points in a large network. Automotive navigation systems are crucial for the development of self-driving cars. History Automotive navigation systems represent a convergence of a number of diverse technologies, many of which have been available for many years, but were too costly or inaccessible. Limitations such as batteries, display, and processing power had to be overcome before the product became commercially viable. 1961: Hidetsugu Yagi designed a wireless-based navigation system. This design was still primitive and intended for military-use. 1966: General Motors Research (GMR) was working on a non-satellite-based navigation and assistance system called DAIR (Driver Aid, Information & Routing). After initial tests GM found that it was not a scalable or practical way to provide navigation assistance. Decades later, however, the concept would be reborn as OnStar (founded 1996). 1973: Japan's Ministry of International Trade and Industry (MITI) and Fuji Heavy Industries sponsored CATC (Comprehensive Automobile Traffic Control), a Japanese research project on automobile navigation systems. 1979: MITI established JSK (A" https://en.wikipedia.org/wiki/Chamfer,"A chamfer or is a transitional edge between two faces of an object. Sometimes defined as a form of bevel, it is often created at a 45° angle between two adjoining right-angled faces. Chamfers are frequently used in machining, carpentry, furniture, concrete formwork, mirrors, and to facilitate assembly of many mechanical engineering designs. Terminology In machining the word bevel is not used to refer to a chamfer. Machinists use chamfers to ""ease"" otherwise sharp edges, both for safety and to prevent damage to the edges. A chamfer may sometimes be regarded as a type of bevel, and the terms are often used interchangeably. In furniture-making, a lark's tongue is a chamfer which ends short of a piece in a gradual outward curve, leaving the remainder of the edge as a right angle. Chamfers may be formed in either inside or outside adjoining faces of an object or room. By comparison, a fillet is the rounding-off of an interior corner, and a round (or radius) the rounding of an outside one. Carpentry and furniture Chamfers are used in furniture such as counters and table tops to ease their edges to keep people from bruising themselves in the otherwise sharp corner. When the edges are rounded instead, they are called bullnosed. Special tools such as chamfer mills and chamfer planes are sometimes used. 
Architecture Chamfers are commonly used in architecture, both for functional and aesthetic reasons. For example, the base of the Taj Mahal is a cube with chamfered corners, thereby creating an octagonal architectural footprint. Its great gate is formed of chamfered base stones and chamfered corbels for a balcony or equivalent cornice towards the roof. Urban planning Many city blocks in Barcelona, Valencia and various other cities in Spain, and street corners (curbs) in Ponce, Puerto Rico, are chamfered. The chamfering was designed as an embellishment and a modernization of urban space in Barcelona's mid-19th century Eixample or Expansion District, where the bui" https://en.wikipedia.org/wiki/Host%E2%80%93pathogen%20interaction,"The host–pathogen interaction is defined as how microbes or viruses sustain themselves within host organisms on a molecular, cellular, organismal or population level. This term is most commonly used to refer to disease-causing microorganisms although they may not cause illness in all hosts. Because of this, the definition has been expanded to how known pathogens survive within their host, whether they cause disease or not. On the molecular and cellular level, microbes can infect the host and divide rapidly, causing disease by being there and causing a homeostatic imbalance in the body, or by secreting toxins which cause symptoms to appear. Viruses can also infect the host with virulent DNA, which can affect normal cell processes (transcription, translation, etc.), protein folding, or evading the immune response. Pathogenicity Pathogen history One of the first pathogens observed by scientists was Vibrio cholerae, described in detail by Filippo Pacini in 1854. His initial findings were just drawings of the bacteria but, up until 1880, he published many other papers concerning the bacteria. He described how it causes diarrhea as well as developed effective treatments against it. Most of these findings went unnoticed until Robert Koch rediscovered the organism in 1884 and linked it to the disease. was discovered by Leeuwenhoeck in the 1600s< but was not found to be pathogenic until the 1970s, when an EPA-sponsored symposium was held following a large outbreak in Oregon involving the parasite. Since then, many other organisms have been identified as pathogens, such as H. pylori and E. coli, which have allowed scientists to develop antibiotics to combat these harmful microorganisms. Types of pathogens Pathogens include bacteria, fungi, protozoa, helminths, and viruses. Each of these different types of organisms can then be further classified as a pathogen based on its mode of transmission. This includes the following: food borne, airborne, waterborne, blood-bor" https://en.wikipedia.org/wiki/Front%20%28physics%29,"In physics, a front can be understood as an interface between two different possible states (either stable or unstable) in a physical system. For example, a weather front is the interface between two different density masses of air, in combustion where the flame is the interface between burned and unburned material or in population dynamics where the front is the interface between populated and unpopulated places. 
Fronts can be static or mobile depending on the conditions of the system, and the causes of the motion can be the variation of a free energy, where the most energetically favorable state invades the less favorable one, according to Pomeau or shape induced motion due to non-variation dynamics in the system, according to Alvarez-Socorro, Clerc, González-Cortés and Wilson. From a mathematical point of view, fronts are solutions of spatially extended systems connecting two steady states, and from dynamical systems point of view, a front corresponds to a heteroclinic orbit of the system in the co-mobile frame (or proper frame). Fronts connecting stable - unstable homogeneous states The most simple example of front solution connecting a homogeneous stable state with a homogeneous unstable state can be shown in the one-dimensional Fisher–Kolmogorov equation: that describes a simple model for the density of population. This equation has two steady states, , and . This solution corresponds to extinction and saturation of population. Observe that this model is spatially-extended, because it includes a diffusion term given by the second derivative. The state is stable as a simple linear analysis can show and the state is unstable. There exist a family of front solutions connecting with , and such solution are propagative. Particularly, there exist one solution of the form , with is a velocity that only depends on and" https://en.wikipedia.org/wiki/List%20of%20statistics%20articles," 0–9 1.96 2SLS (two-stage least squares) redirects to instrumental variable 3SLS – see three-stage least squares 68–95–99.7 rule 100-year flood A A priori probability Abductive reasoning Absolute deviation Absolute risk reduction Absorbing Markov chain ABX test Accelerated failure time model Acceptable quality limit Acceptance sampling Accidental sampling Accuracy and precision Accuracy paradox Acquiescence bias Actuarial science Adapted process Adaptive estimator Additive Markov chain Additive model Additive smoothing Additive white Gaussian noise Adjusted Rand index – see Rand index (subsection) ADMB software Admissible decision rule Age adjustment Age-standardized mortality rate Age stratification Aggregate data Aggregate pattern Akaike information criterion Algebra of random variables Algebraic statistics Algorithmic inference Algorithms for calculating variance All models are wrong All-pairs testing Allan variance Alignments of random points Almost surely Alpha beta filter Alternative hypothesis Analyse-it – software Analysis of categorical data Analysis of covariance Analysis of molecular variance Analysis of rhythmic variance Analysis of variance Analytic and enumerative statistical studies Ancestral graph Anchor test Ancillary statistic ANCOVA redirects to Analysis of covariance Anderson–Darling test ANOVA ANOVA on ranks ANOVA–simultaneous component analysis Anomaly detection Anomaly time series Anscombe transform Anscombe's quartet Antecedent variable Antithetic variates Approximate Bayesian computation Approximate entropy Arcsine distribution Area chart Area compatibility factor ARGUS distribution Arithmetic mean Armitage–Doll multistage model of carcinogenesis Arrival theorem Artificial neural network Ascertainment bias ASReml software Association (statistics) Association mapping Association scheme Assumed mean Astrostatistics Asymptotic distribution Asymptotic equipartition property (information theory) Asymptotic normality redirects to Asymptotic dis" 
https://en.wikipedia.org/wiki/In-phase%20and%20quadrature%20components,"A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are in quadrature phase, i.e., with a phase offset of one-quarter cycle (90 degrees or /2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, which describes their relationships with the amplitude- and phase-modulated carrier. Or in other words, it is possible to create an arbitrarily phase-shifted sine wave, by mixing together two sine waves that are 90° out of phase in different proportions. The implication is that the modulations in some signal can be treated separately from the carrier wave of the signal. This has extensive use in many radio and signal processing applications. I/Q data is used to represent the modulations of some carrier, independent of that carrier's frequency. Orthogonality In vector analysis, a vector with polar coordinates and Cartesian coordinates can be represented as the sum of orthogonal components: Similarly in trigonometry, the angle sum identity expresses: And in functional analysis, when is a linear function of some variable, such as time, these components are sinusoids, and they are orthogonal functions. A phase-shift of changes the identity to: , in which case is the in-phase component. In both conventions is the in-phase amplitude modulation, which explains why some authors refer to it as the actual in-phase component. Narrowband signal model In an angle modulation application, with carrier frequency φ is also a time-variant function, giving: When all three terms above are multiplied by an optional amplitude function, the left-hand side of the equality is known as the amplitude/phase form, and the right-hand side is the quadrature-carrier or IQ form. Because of the modulation, the components are no longer completely orthogonal functions. But when and are slowly varying functions compared" https://en.wikipedia.org/wiki/Census%20of%20Diversity%20of%20Abyssal%20Marine%20Life,"The Census of Diversity of Abyssal Marine Life (CeDAMar) is a field project of the Census of Marine Life that studies the species diversity of one of the largest and most inaccessible environments on the planet, the abyssal plain. CeDAMar uses data to create an estimation of global species diversity and provide a better understanding of the history of deep-sea fauna, including its present diversity and dependence on environmental parameters. CeDAMar initiatives aim to identify centers of high biodiversity useful for planning both commercial and conservation efforts, and are able to be used in future studies on the effects of climate change on the deep sea. As of May 2009, participation by upwards of 56 institutions in 17 countries has resulted in the publication of nearly 300 papers. Results of CeDAMar-related research were also published in a 2010 textbook on deep-sea biodiversity by Michael Rex and Ron Etter, members of CeDAMar's Scientific Steering Committee.() CeDAMar is led by Dr. Pedro Martinez Arbizu of Germany and Dr. Craig Smith, USA. 
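Returning to the in-phase and quadrature components discussed above: under one common sign convention, the trigonometric identity A cos(2πft + φ) = I cos(2πft) − Q sin(2πft), with I = A cos φ and Q = A sin φ, expresses the decomposition in question. The following C sketch (an illustration only; amplitude, frequency and phase are arbitrary) checks the identity numerically.

/* I/Q sketch: rebuild an amplitude- and phase-shifted cosine from two
 * carriers 90 degrees apart, weighted by I and Q. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double TWO_PI = 6.283185307179586;
    const double A   = 1.7;          /* amplitude         */
    const double f   = 3.0;          /* cycles per second */
    const double phi = 0.6;          /* phase in radians  */

    const double I = A * cos(phi);   /* in-phase weight   */
    const double Q = A * sin(phi);   /* quadrature weight */

    double max_err = 0.0;
    for (int n = 0; n < 1000; n++) {
        double t       = n / 1000.0;
        double direct  = A * cos(TWO_PI * f * t + phi);
        double from_iq = I * cos(TWO_PI * f * t) - Q * sin(TWO_PI * f * t);
        double err     = fabs(direct - from_iq);
        if (err > max_err)
            max_err = err;
    }
    printf("I = %.6f  Q = %.6f  max reconstruction error = %.3g\n",
           I, Q, max_err);
    return 0;
}

The reported error is at the level of floating-point rounding, illustrating that mixing the two quadrature carriers in the right proportions yields the arbitrarily phase-shifted wave.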
External links Census of Diversity of Abyssal Marine Life Census of Antarctic Marine Life official web site" https://en.wikipedia.org/wiki/Autocorrelator,"A real time interferometric autocorrelator is an electronic tool used to examine the autocorrelation of, among other things, optical beam intensity and spectral components through examination of variable beam path differences. See Optical autocorrelation. Description In an interferometric autocorrelator, the input beam is split into a fixed path beam and a variable path beam using a standard beamsplitter. The fixed path beam travels a known and constant distance, whereas the variable path beam has its path length changed via rotating mirrors or other path changing mechanisms. At the end of the two paths, the beams are ideally parallel, but slightly separated, and using a correctly positioned lens, the two beams are crossed inside a second-harmonic generating (SHG) crystal. The autocorrelation term of the output is then passed into a photomultiplying tube (PMT) and measured. Details Considering the input beam as a single pulse with envelope , the constant fixed path distance as , and the variable path distance as a function of time , the input to the SHG can be viewed as This comes from being the speed of light and being the time for the beam to travel the given path. In general, SHG produces output proportional to the square of the input, which in this case is The first two terms are based only on the fixed and variable paths respectively, but the third term is based on the difference between them, as is evident in The PMT used is assumed to be much slower than the envelope function , so it effectively integrates the incoming signal Since both the fixed path and variable path terms are not dependent on each other, they would constitute a background ""noise"" in examination of the autocorrelation term and would ideally be removed first. This can be accomplished by examining the momentum vectors If the fixed and variable momentum vectors are assumed to be of approximately equal magnitude, the second harmonic momentum vector will fall geometrically between " https://en.wikipedia.org/wiki/Champernowne%20constant,"In mathematics, the Champernowne constant is a transcendental real constant whose decimal expansion has important properties. It is named after economist and mathematician D. G. Champernowne, who published it as an undergraduate in 1933. For base 10, the number is defined by concatenating representations of successive integers: . Champernowne constants can also be constructed in other bases, similarly, for example: . The Champernowne word or Barbier word is the sequence of digits of C10 obtained by writing it in base 10 and juxtaposing the digits: More generally, a Champernowne sequence (sometimes also called a Champernowne word) is any sequence of digits obtained by concatenating all finite digit-strings (in any given base) in some recursive order. For instance, the binary Champernowne sequence in shortlex order is where spaces (otherwise to be ignored) have been inserted just to show the strings being concatenated. Properties A real number x is said to be normal if its digits in every base follow a uniform distribution: all digits being equally likely, all pairs of digits equally likely, all triplets of digits equally likely, etc. x is said to be normal in base b if its digits in base b follow a uniform distribution. 
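As a concrete illustration of both the constant's construction and the notion of normality just defined (a sketch, not from the article), the following C program builds the first digits of the base-10 Champernowne word by concatenating the decimal representations of 1, 2, 3, … and tallies how often each digit occurs; the relative frequencies drift toward 1/10 as more digits are included, consistent with normality in base 10, though the tally is of course not a proof.

/* Champernowne word sketch: concatenate 1, 2, 3, ... in decimal and count
 * digit frequencies among the first DIGITS digits. */
#include <stdio.h>

#define DIGITS 100000

int main(void)
{
    long counts[10] = {0};
    long produced = 0;

    for (int n = 1; produced < DIGITS; n++) {
        char buf[16];
        int len = snprintf(buf, sizeof buf, "%d", n);
        for (int i = 0; i < len && produced < DIGITS; i++) {
            counts[buf[i] - '0']++;
            produced++;
        }
    }

    for (int d = 0; d < 10; d++)
        printf("digit %d: relative frequency %.4f\n",
               d, (double)counts[d] / produced);
    return 0;
}

With 100,000 digits the printed frequencies all lie close to 0.1000, mirroring the uniform-distribution property described above.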
If we denote a digit string as [a0, a1, …], then, in base 10, we would expect strings [0], [1], [2], …, [9] to occur 1/10 of the time, strings [0,0], [0,1], …, [9,8], [9,9] to occur 1/100 of the time, and so on, in a normal number. Champernowne proved that is normal in base 10, while Nakai and Shiokawa proved a more general theorem, a corollary of which is that is normal in base for any b. It is an open problem whether is normal in bases . Kurt Mahler showed that the constant is transcendental. The irrationality measure of is , and more generally for any base . The Champernowne word is a disjunctive sequence. Series The definition of the Champernowne constant immediately gives rise to an infinite series representation invol" https://en.wikipedia.org/wiki/7-Chlorokynurenic%20acid,"7-Chlorokynurenic acid (7-CKA) is a tool compound that acts as a potent and selective competitive antagonist of the glycine site of the NMDA receptor. It produces ketamine-like rapid antidepressant effects in animal models of depression. However, 7-CKA is unable to cross the blood-brain-barrier, and for this reason, is unsuitable for clinical use. As a result, a centrally-penetrant prodrug of 7-CKA, 4-chlorokynurenine (AV-101), has been developed for use in humans, and is being studied in clinical trials as a potential treatment for major depressive disorder, and anti-nociception. In addition to antagonizing the NMDA receptor, 7-CKA also acts as a potent inhibitor of the reuptake of glutamate into synaptic vesicles (or as a vesicular glutamate reuptake inhibitor), an action that it mediates via competitive blockade of vesicular glutamate transporters (Ki = 0.59 mM). See also 5,7-Dichlorokynurenic acid Evans blue Kynurenic acid Xanthurenic acid" https://en.wikipedia.org/wiki/Karp%27s%2021%20NP-complete%20problems,"In computational complexity theory, Karp's 21 NP-complete problems are a set of computational problems which are NP-complete. In his 1972 paper, ""Reducibility Among Combinatorial Problems"", Richard Karp used Stephen Cook's 1971 theorem that the boolean satisfiability problem is NP-complete (also called the Cook-Levin theorem) to show that there is a polynomial time many-one reduction from the boolean satisfiability problem to each of 21 combinatorial and graph theoretical computational problems, thereby showing that they are all NP-complete. This was one of the first demonstrations that many natural computational problems occurring throughout computer science are computationally intractable, and it drove interest in the study of NP-completeness and the P versus NP problem. The problems Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack. 
Satisfiability: the boolean satisfiability problem for formulas in conjunctive normal form (often referred to as SAT) 0–1 integer programming (A variation in which only the restrictions must be satisfied, with no optimization) Clique (see also independent set problem) Set packing Vertex cover Set covering Feedback node set Feedback arc set Directed Hamilton circuit (Karp's name, now usually called Directed Hamiltonian cycle) Undirected Hamilton circuit (Karp's name, now usually called Undirected Hamiltonian cycle) Satisfiability with at most 3 literals per clause (equivalent to 3-SAT) Chromatic number (also called the Graph Coloring Problem) Clique cover Exact cover Hitting set Steiner tree 3-dimensional matching Knapsack (Karp's definition of Knapsack is closer to Subset sum) Job sequencing Partition Max cut Approximations As time went on it was discovered that many of the problems can be solved efficiently if restricted to special cases, or can " https://en.wikipedia.org/wiki/Memory%20controller,"A memory controller is a digital circuit that manages the flow of data going to and from a computer's main memory. A memory controller can be a separate chip or integrated into another chip, such as being placed on the same die or as an integral part of a microprocessor; in the latter case, it is usually called an integrated memory controller (IMC). A memory controller is sometimes also called a memory chip controller (MCC) or a memory controller unit (MCU). Memory controllers contain the logic necessary to read and write to DRAM, and to ""refresh"" the DRAM. Without constant refreshes, DRAM will lose the data written to it as the capacitors leak their charge within a fraction of a second. Some memory controllers include error detection and correction hardware. A common form of memory controller is the memory management unit (MMU) which in many operating systems implements virtual addressing. History Most modern desktop or workstation microprocessors use an integrated memory controller (IMC), including microprocessors from Intel, AMD, and those built around the ARM architecture. Prior to K8 (circa 2003), AMD microprocessors had a memory controller implemented on their motherboard's northbridge. In K8 and later, AMD employed an integrated memory controller. Likewise, until Nehalem (circa 2008), Intel microprocessors used memory controllers implemented on the motherboard's northbridge. Nehalem and later switched to an integrated memory controller. Other examples of microprocessors that use integrated memory controllers include NVIDIA's Fermi, IBM's POWER5, and Sun Microsystems's UltraSPARC T1. While an integrated memory controller has the potential to increase the system's performance, such as by reducing memory latency, it locks the microprocessor to a specific type (or types) of memory, forcing a redesign in order to support newer memory technologies. When DDR2 SDRAM was introduced, AMD released new Athlon 64 CPUs. These new models, with a DDR2 controller, us" https://en.wikipedia.org/wiki/Biocommunication%20%28science%29,"In the study of the biological sciences, biocommunication is any specific type of communication within (intraspecific) or between (interspecific) species of plants, animals, fungi, protozoa and microorganisms. Communication basically means sign-mediated interactions following three levels of (syntactic, pragmatic and semantic) rules. Signs in most cases are chemical molecules (semiochemicals), but also tactile, or as in animals also visual and auditive. 
Biocommunication of animals may include vocalizations (as between competing bird species), or pheromone production (as between various species of insects), chemical signals between plants and animals (as in tannin production used by vascular plants to warn away insects), and chemically mediated communication between plants and within plants. Biocommunication of fungi demonstrates that mycelia communication integrates interspecific sign-mediated interactions between fungal organisms soil bacteria and plant root cells without which plant nutrition could not be organized. Biocommunication of Ciliates identifies the various levels and motifs of communication in these unicellular eukaryotes. Biocommunication of Archaea represents keylevels of sign-mediated interactions in the evolutionarily oldest akaryotes. Biocommunication of Phages demonstrates that the most abundant living agents on this planet coordinate and organize by sign-mediated interactions. Biocommunication is the essential tool to coordinate behavior of various cell types of immune systems. Biocommunication, biosemiotics and linguistics Biocommunication theory may be considered to be a branch of biosemiotics. Whereas Biosemiotics studies the production and interpretation of signs and codes, biocommunication theory investigates concrete interactions mediated by signs. Accordingly, syntactic, semantic, and pragmatic aspects of biocommunication processes are distinguished. Biocommunication specific to animals (animal communication) is considered a branch of" https://en.wikipedia.org/wiki/Load%E2%80%93store%20unit,"In computer engineering, a load–store unit (LSU) is a specialized execution unit responsible for executing all load and store instructions, generating virtual addresses of load and store operations and loading data from memory or storing it back to memory from registers. The load–store unit usually includes a queue which acts as a waiting area for memory instructions, and the unit itself operates independently of other processor units. Load–store units may also be used in vector processing, and in such cases the term ""load–store vector"" may be used. Some load–store units are also capable of executing simple fixed-point and/or integer operations. See also Address-generation unit Arithmetic–logic unit Floating-point unit Load–store architecture" https://en.wikipedia.org/wiki/Sodium%20in%20biology,"Sodium ions () are necessary in small amounts for some types of plants, but sodium as a nutrient is more generally needed in larger amounts by animals, due to their use of it for generation of nerve impulses and for maintenance of electrolyte balance and fluid balance. In animals, sodium ions are necessary for the aforementioned functions and for heart activity and certain metabolic functions. The health effects of salt reflect what happens when the body has too much or too little sodium. Characteristic concentrations of sodium in model organisms are: 10 mM in E. coli, 30 mM in budding yeast, 10 mM in mammalian cell and 100 mM in blood plasma. Sodium distribution in species Humans The minimum physiological requirement for sodium is between 115 and 500 mg per day depending on sweating due to physical activity, and whether the person is adapted to the climate. Sodium chloride is the principal source of sodium in the diet, and is used as seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. 
The Adequate Intake for sodium is 1.2 to 1.5 g per day, but on average people in the United States consume 3.4 g per day, the minimum amount that promotes hypertension. Note that salt contains about 39.3% sodium by mass, the rest being chlorine and other trace chemicals; thus the Tolerable Upper Intake Level of 2.3 g sodium would be about 5.9 g of salt—about 1 teaspoon. The average daily excretion of sodium is between 40 and 220 mEq. Normal serum sodium levels are between approximately 135 and 145 mEq/L (135 to 145 mmol/L). A serum sodium level of less than 135 mEq/L qualifies as hyponatremia, which is considered severe when the serum sodium level is below 125 mEq/L. The renin–angiotensin system and the atrial natriuretic peptide indirectly regulate the amount of signal transduction in the human central nervous system, which depends on sodium ion motion across the nerve cell membrane, in all nerves. Sodium is thus important in neuron " https://en.wikipedia.org/wiki/Proof%20without%20words,"In mathematics, a proof without words (or visual proof) is an illustration of an identity or mathematical statement which can be demonstrated as self-evident by a diagram without any accompanying explanatory text. Such proofs can be considered more elegant than formal or mathematically rigorous proofs due to their self-evident nature. When the diagram demonstrates a particular case of a general statement, to be a proof, it must be generalisable. A proof without words is not the same as a mathematical proof, because it omits the details of the logical argument it illustrates. However, it can provide valuable intuitions to the viewer that can help them formulate or better understand a true proof. Examples Sum of odd numbers The statement that the sum of all positive odd numbers up to 2n − 1 is a perfect square—more specifically, the perfect square n²—can be demonstrated by a proof without words. In one corner of a grid, a single block represents 1, the first square. That can be wrapped on two sides by a strip of three blocks (the next odd number) to make a 2 × 2 block: 4, the second square. Adding a further five blocks makes a 3 × 3 block: 9, the third square. This process can be continued indefinitely. Pythagorean theorem The Pythagorean theorem can be proven without words. One method of doing so is to visualise a larger square of sides a + b, with four right-angled triangles of sides a, b and c in its corners, such that the space in the middle is a diagonal square with an area of c². The four triangles can be rearranged within the larger square to split its unused space into two squares of a² and b². Jensen's inequality Jensen's inequality can also be proven graphically. A dashed curve along the X axis is the hypothetical distribution of X, while a dashed curve along the Y axis is the corresponding distribution of Y values. The convex mapping Y(X) increasingly ""stretches"" the distribution for increasing values of X. Usage Mathematics Magazine and the College Math" https://en.wikipedia.org/wiki/Integrated%20circuit%20layout,"In integrated circuit design, integrated circuit (IC) layout, also known as IC mask layout or mask design, is the representation of an integrated circuit in terms of planar geometric shapes which correspond to the patterns of metal, oxide, or semiconductor layers that make up the components of the integrated circuit.
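The two pictures described in the proof-without-words passage above correspond to standard algebraic identities (added here for reference, not taken from the article). The first mirrors wrapping each successive odd-numbered strip of blocks around the existing square; the second mirrors rearranging the four right triangles inside the square of side a + b.

\sum_{k=1}^{n}(2k-1) = n^2, \qquad n^2 + \bigl(2(n+1)-1\bigr) = n^2 + 2n + 1 = (n+1)^2,

(a+b)^2 \;=\; 4\cdot\tfrac{1}{2}ab + c^2 \;\Longrightarrow\; a^2 + b^2 = c^2 .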
Originally the overall process was called tapeout, as historically early ICs used graphical black crepe tape on mylar media for photo imaging (erroneously believed to reference magnetic data—the photo process greatly predated magnetic media). When using a standard process—where the interaction of the many chemical, thermal, and photographic variables is known and carefully controlled—the behaviour of the final integrated circuit depends largely on the positions and interconnections of the geometric shapes. Using a computer-aided layout tool, the layout engineer—or layout technician—places and connects all of the components that make up the chip such that they meet certain criteria—typically: performance, size, density, and manufacturability. This practice is often subdivided between two primary layout disciplines: analog and digital. The generated layout must pass a series of checks in a process known as physical verification. The most common checks in this verification process are Design rule checking (DRC), Layout versus schematic (LVS), parasitic extraction, antenna rule checking, and electrical rule checking (ERC). When all verification is complete, layout post processing is applied where the data is also translated into an industry-standard format, typically GDSII, and sent to a semiconductor foundry. The milestone completion of the layout process of sending this data to the foundry is now colloquially called ""tapeout"". The foundry converts the data into mask data and uses it to generate the photomasks used in a photolithographic process of semiconductor device fabrication. In the earlier, simpler, days of IC design, layout was done by" https://en.wikipedia.org/wiki/Replicate%20%28biology%29,"In the biological sciences, replicates are an experimental units that are treated identically. Replicates are an essential component of experimental design because they provide an estimate of between sample error. Without replicates, scientists are unable to assess whether observed treatment effects are due to the experimental manipulation or due to random error. There are also analytical replicates which is when an exact copy of a sample is analyzed, such as a cell, organism or molecule, using exactly the same procedure. This is done in order to check for analytical error. In the absence of this type of error replicates should yield the same result. However, analytical replicates are not independent and cannot be used in tests of the hypothesis because they are still the same sample. See also Self-replication Fold change" https://en.wikipedia.org/wiki/Plate%20notation,"In Bayesian inference, plate notation is a method of representing variables that repeat in a graphical model. Instead of drawing each repeated variable individually, a plate or rectangle is used to group variables into a subgraph that repeat together, and a number is drawn on the plate to represent the number of repetitions of the subgraph in the plate. The assumptions are that the subgraph is duplicated that many times, the variables in the subgraph are indexed by the repetition number, and any links that cross a plate boundary are replicated once for each subgraph repetition. Example In this example, we consider Latent Dirichlet allocation, a Bayesian network that models how documents in a corpus are topically related. 
There are two variables not in any plate; α is the parameter of the uniform Dirichlet prior on the per-document topic distributions, and β is the parameter of the uniform Dirichlet prior on the per-topic word distribution. The outermost plate represents all the variables related to a specific document, including , the topic distribution for document i. The M in the corner of the plate indicates that the variables inside are repeated M times, once for each document. The inner plate represents the variables associated with each of the words in document i: is the topic distribution for the jth word in document i, and is the actual word used. The N in the corner represents the repetition of the variables in the inner plate times, once for each word in document i. The circle representing the individual words is shaded, indicating that each is observable, and the other circles are empty, indicating that the other variables are latent variables. The directed edges between variables indicate dependencies between the variables: for example, each depends on and β. Extensions A number of extensions have been created by various authors to express more information than simply the conditional relationships. However, few of these have become s" https://en.wikipedia.org/wiki/Commutative%20diagram,"In mathematics, and especially in category theory, a commutative diagram is a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result. It is said that commutative diagrams play the role in category theory that equations play in algebra. Description A commutative diagram often consists of three parts: objects (also known as vertices) morphisms (also known as arrows or edges) paths or composites Arrow symbols In algebra texts, the type of morphism can be denoted with different arrow usages: A monomorphism may be labeled with a or a . An epimorphism may be labeled with a . An isomorphism may be labeled with a . The dashed arrow typically represents the claim that the indicated morphism exists (whenever the rest of the diagram holds); the arrow may be optionally labeled as . If the morphism is in addition unique, then the dashed arrow may be labeled or . The meanings of different arrows are not entirely standardized: the arrows used for monomorphisms, epimorphisms, and isomorphisms are also used for injections, surjections, and bijections, as well as the cofibrations, fibrations, and weak equivalences in a model category. Verifying commutativity Commutativity makes sense for a polygon of any finite number of sides (including just 1 or 2), and a diagram is commutative if every polygonal subdiagram is commutative. Note that a diagram may be non-commutative, i.e., the composition of different paths in the diagram may not give the same result. Examples Example 1 In the left diagram, which expresses the first isomorphism theorem, commutativity of the triangle means that . In the right diagram, commutativity of the square means . Example 2 In order for the diagram below to commute, three equalities must be satisfied: Here, since the first equality follows from the last two, it suffices to show that (2) and (3) are true in order for the diagram to commute. However, since equality (3) generally d" https://en.wikipedia.org/wiki/Interposer,"An interposer is an electrical interface routing between one socket or connection to another. The purpose of an interposer is to spread a connection to a wider pitch or to reroute a connection to a different connection. 
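The plate-notation description of Latent Dirichlet allocation above can be summarised by the generative equations below, written in standard LDA notation assumed here rather than quoted from the article (Dir and Cat denote Dirichlet and categorical distributions, M is the number of documents, N the number of words in document i, and K the number of topics); the per-topic word distributions φ_k are made explicit here, whereas the diagram described above expresses the same dependence of each w_ij on z_ij and β directly.

\begin{aligned}
\theta_i &\sim \operatorname{Dir}(\alpha), & i &= 1,\dots,M, \\
\varphi_k &\sim \operatorname{Dir}(\beta), & k &= 1,\dots,K, \\
z_{ij} &\sim \operatorname{Cat}(\theta_i), & j &= 1,\dots,N, \\
w_{ij} &\sim \operatorname{Cat}(\varphi_{z_{ij}}). & &
\end{aligned}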
Interposer comes from the Latin word ""interpōnere"", meaning ""to put between"". They are often used in BGA packages, multi-chip modules and high bandwidth memory. A common example of an interposer is an integrated circuit die to BGA, such as in the Pentium II. This is done through various substrates, both rigid and flexible, most commonly FR4 for rigid, and polyimide for flexible. Silicon and glass are also evaluated as an integration method. Interposer stacks are also a widely accepted, cost-effective alternative to 3D ICs. There are already several products with interposer technology in the market, notably the AMD Fiji/Fury GPU, and the Xilinx Virtex-7 FPGA. In 2016, CEA Leti demonstrated their second generation 3D-NoC technology which combines small dies (""chiplets""), fabricated at the FDSOI 28 nm node, on a 65 nm CMOS interposer. Another example of an interposer would be the adapter used to plug a SATA drive into a SAS backplane with redundant ports. While SAS drives have two ports that can be used to connect to redundant paths or storage controllers, SATA drives only have a single port. Directly, they can only connect to a single controller or path. SATA drives can be connected to nearly all SAS backplanes without adapters, but using an interposer with a port switching logic allows providing path redundancy. See also Die preparation Integrated circuit Semiconductor fabrication" https://en.wikipedia.org/wiki/Log%20analysis,"In computer log management and intelligence, log analysis (or system and network log analysis) is an art and science seeking to make sense of computer-generated records (also called log or audit trail records). The process of creating such records is called data logging. Typical reasons why people perform log analysis are: Compliance with security policies Compliance with audit or regulation System troubleshooting Forensics (during investigations or in response to a subpoena) Security incident response Understanding online user behavior Logs are emitted by network devices, operating systems, applications and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises a log. Logs may be directed to files and stored on disk or directed as a network stream to a log collector. Log messages must usually be interpreted concerning the internal state of its source (e.g., application) and announce security-relevant or operations-relevant events (e.g., a user login, or a systems error). Logs are often created by software developers to aid in the debugging of the operation of an application or understanding how users are interacting with a system, such as a search engine. The syntax and semantics of data within log messages are usually application or vendor-specific. The terminology may also vary; for example, the authentication of a user to an application may be described as a log in, a logon, a user connection or an authentication event. Hence, log analysis must interpret messages within the context of an application, vendor, system or configuration to make useful comparisons to messages from different log sources. Log message format or content may not always be fully documented. A task of the log analyst is to induce the system to emit the full range of messages to understand the complete domain from which the messages must be interpreted. 
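As a toy illustration of the terminology mapping discussed in the log-analysis passage above (a sketch only; the event label, patterns and sample lines are invented for the example), the following C fragment maps several differently worded reports of the same authentication event onto one canonical label.

/* Toy log-normalisation sketch: map varied wordings of the same
 * authentication event onto one canonical label.  Patterns and sample
 * lines are invented for illustration. */
#include <stdio.h>
#include <string.h>

static const char *canonical_event(const char *line)
{
    /* naive substring matching; real log analysis would parse structured
     * fields and apply per-source rules */
    static const char *login_patterns[] = {
        "logged in", "logon", "user connection", "authentication succeeded"
    };
    for (size_t i = 0; i < sizeof login_patterns / sizeof login_patterns[0]; i++)
        if (strstr(line, login_patterns[i]))
            return "USER_LOGIN";
    return "UNCLASSIFIED";
}

int main(void)
{
    const char *samples[] = {
        "10:01:02 app1: user alice logged in from 10.0.0.5",
        "10:01:03 vpn: logon success for bob",
        "10:01:04 dbms: user connection established (carol)",
        "10:01:05 router: interface eth0 up"
    };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("%-12s %s\n", canonical_event(samples[i]), samples[i]);
    return 0;
}

The first three sample lines, despite their different vendor vocabularies, are all reported under the single label USER_LOGIN, while the unrelated line remains unclassified.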
A log analyst may map varying terminology from different log sources into a uni" https://en.wikipedia.org/wiki/Oxford%20Dictionary%20of%20Biology,"Oxford Dictionary of Biology (often abbreviated to ODB) is a multiple editions dictionary published by the English Oxford University Press. With more than 5,500 entries, it contains comprehensive information in English on topics relating to biology, biophysics, and biochemistry. The first edition was published in 1985 as A Concise Dictionary of Biology. The seventh edition, A Dictionary of Biology, was published in 2015 and it was edited by Robert Hine and Elizabeth Martin. Robert Hine studied at King's College London and University of Aberdeen and since 1984 he has contributed to numerous journals and books. Digital and on-line availability The sixth and seventh editions of the ODB are available online for members of subscribed institutions and for subscribed individuals via Oxford Reference. Editions The first edition of Oxford Dictionary of Biology was first published in 1985 and the seventh edition in 2015." https://en.wikipedia.org/wiki/Almost%20all,"In mathematics, the term ""almost all"" means ""all but a negligible quantity"". More precisely, if is a set, ""almost all elements of "" means ""all elements of but those in a negligible subset of "". The meaning of ""negligible"" depends on the mathematical context; for instance, it can mean finite, countable, or null. In contrast, ""almost no"" means ""a negligible quantity""; that is, ""almost no elements of "" means ""a negligible quantity of elements of "". Meanings in different areas of mathematics Prevalent meaning Throughout mathematics, ""almost all"" is sometimes used to mean ""all (elements of an infinite set) except for finitely many"". This use occurs in philosophy as well. Similarly, ""almost all"" can mean ""all (elements of an uncountable set) except for countably many"". Examples: Almost all positive integers are greater than 1012. Almost all prime numbers are odd (2 is the only exception). Almost all polyhedra are irregular (as there are only nine exceptions: the five platonic solids and the four Kepler–Poinsot polyhedra). If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x). Meaning in measure theory When speaking about the reals, sometimes ""almost all"" can mean ""all reals except for a null set"". Similarly, if S is some set of reals, ""almost all numbers in S"" can mean ""all numbers in S except for those in a null set"". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to ""all points except for those in a null set"" or ""all points in S except for those in a null set"" (this time, S is a set of points in the space). Even more generally, ""almost all"" is sometimes used in the sense of ""almost everywhere"" in measure theory, or in the closely related sense of ""almost surely"" in probability theory. Examples: In a measure space, such as the real line, countable sets are null. The set of rational numbers is " https://en.wikipedia.org/wiki/Interface%20logic%20model,"In electronics, the interface logic model (ILM) is a technique to model blocks in hierarchal VLSI implementation flow. 
It is a gate level model of a physical block where only the connections from the inputs to the first stage of flip-flops, and the connections from the last stage of flip-flops to the outputs are in the model, including the flip-flops and the clock tree driving these flip-flops. All other internal flip-flop to flip-flop paths are stripped out of the ILM. The advantage of ILM is that the entire path (clock to clock path) is visible at top level for interface nets, unlike traditional block-based hierarchal implementation flow. This gives better accuracy in analysis for interface nets at negligible additional memory and runtime overhead." https://en.wikipedia.org/wiki/Waveform%20shaping,"Waveform shaping in electronics is the modification of the shape of an electronic waveform. It is in close connection with waveform diversity and waveform design, which are extensively studied in signal processing. Shaping the waveforms are of particular interest in active sensing (radar, sonar) for better detection performance, as well as communication schemes (CDMA, frequency hopping), and biology (for animal stimuli design). See also Modulation, Pulse compression, Spread spectrum, Transmit diversity, Ambiguity function, Autocorrelation, and Cross-correlation. Further reading Hao He, Jian Li, and Petre Stoica. Waveform design for active sensing systems: a computational approach. Cambridge University Press, 2012. Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005. M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014. Nadav Levanon, and Eli Mozeson. Radar signals. Wiley. com, 2004. Jian Li, and Petre Stoica, eds. Robust adaptive beamforming. New Jersey: John Wiley, 2006. Fulvio Gini, Antonio De Maio, and Lee Patton, eds. Waveform design and diversity for advanced radar systems. Institution of engineering and technology, 2012. Mark R. Bell, ""Information theory and radar waveform design."" IEEE Transactions on Information Theory, 39.5 (1993): 1578–1597. Robert Calderbank, S. Howard, and Bill Moran. ""Waveform diversity in radar signal processing."" IEEE Signal Processing Magazine, 26.1 (2009): 32–41. Augusto Aubry, Antonio De Maio, Bo Jiang, and Shuzhong Zhang. ""Ambiguity function shaping for cognitive radar via complex quartic optimization."" IEEE Transactions on Signal Processing 61 (2013): 5603–5619. John J. Benedetto, Ioannis Konstantinidis, and Muralidhar Rangaswamy. ""Phase-coded waveforms and their design."" IEEE Signal Processing " https://en.wikipedia.org/wiki/Nomen%20novum,"In biological nomenclature, a nomen novum (Latin for ""new name""), new replacement name (or replacement name, new substitute name, substitute name) is a scientific name that is created specifically to replace another scientific name, but only when this other name cannot be used for technical, nomenclatural reasons (for example because it is a homonym: it is spelled the same as an existing, older name). It does not apply when a name is changed for taxonomic reasons (representing a change in scientific insight). It is frequently abbreviated, e.g. nomen nov., nom. nov.. Zoology In zoology establishing a new replacement name is a nomenclatural act and it must be expressly proposed to substitute a previously established and available name. 
Often, the older name cannot be used because another animal was described earlier with exactly the same name. For example, Lindholm discovered in 1913 that a generic name Jelskia established by Bourguignat in 1877 for a European freshwater snail could not be used because another author Taczanowski had proposed the same name in 1871 for a spider. So Lindholm proposed a new replacement name Borysthenia. This is an objective synonym of Jelskia Bourguignat, 1877, because it has the same type species, and is used today as Borysthenia. New replacement names are also often necessary for names of species. New replacement names have been proposed for more than 100 years. In 1859 Bourguignat saw that the name Bulimus cinereus Mortillet, 1851 for an Italian snail could not be used because Reeve had proposed exactly the same name in 1848 for a completely different Bolivian snail. Since it was understood even then that the older name always has priority, Bourguignat proposed a new replacement name Bulimus psarolenus, and also added a note explaining why this was necessary. The Italian snail is still known today under the name Solatopupa psarolena (Bourguignat, 1859). A new replacement name must obey certain rules; not all of these are well known. " https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20thermodynamics%20and%20statistical%20mechanics,"A list of notable textbooks in thermodynamics and statistical mechanics, arranged by category and date. Only or mainly thermodynamics Both thermodynamics and statistical mechanics 2e Kittel, Charles; and Kroemer, Herbert (1980) New York: W.H. Freeman 2e (1988) Chichester: Wiley , . (1990) New York: Dover Statistical mechanics . 2e (1936) Cambridge: University Press; (1980) Cambridge University Press. ; (1979) New York: Dover Vol. 5 of the Course of Theoretical Physics. 3e (1976) Translated by J.B. Sykes and M.J. Kearsley (1980) Oxford : Pergamon Press. . 3e (1995) Oxford: Butterworth-Heinemann . 2e (1987) New York: Wiley . 2e (1988) Amsterdam: North-Holland . 2e (1991) Berlin: Springer Verlag , ; (2005) New York: Dover 2e (2000) Sausalito, Calif.: University Science 2e (1998) Chichester: Wiley Specialized topics Kinetic theory Vol. 10 of the Course of Theoretical Physics (3rd Ed). Translated by J.B. Sykes and R.N. Franklin (1981) London: Pergamon , Quantum statistical mechanics Mathematics of statistical mechanics Translated by G. Gamow (1949) New York: Dover . Reissued (1974), (1989); (1999) Singapore: World Scientific ; (1984) Cambridge: University Press . 2e (2004) Cambridge: University Press Miscellaneous (available online here) Historical (1896, 1898) Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover Translated by J. Kestin (1956) New York: Academic Press. German Encyclopedia of Mathematical Sciences. Translated by Michael J. Moravcsik (1959) Ithaca: Cornell University Press; (1990) New York: Dover See also List of textbooks on classical mechanics and quantum mechanics List of textbooks in electromagnetism List of books on general relativity Further reading" https://en.wikipedia.org/wiki/Bit%20banging,"In computer engineering and electrical engineering, bit banging is a ""term of art"" for any method of data transmission that employs software as a substitute for dedicated hardware to generate transmitted signals or process received signals. 
Software directly sets and samples the states of GPIOs (e.g., pins on a microcontroller), and is responsible for meeting all timing requirements and protocol sequencing of the signals. In contrast to bit banging, dedicated hardware (e.g., UART, SPI, I²C) satisfies these requirements and, if necessary, provides a data buffer to relax software timing requirements. Bit banging can be implemented at very low cost, and is commonly used in some embedded systems. Bit banging allows a device to implement different protocols with minimal or no hardware changes. In some cases, bit banging is made feasible by newer, faster processors because more recent hardware operates much more quickly than hardware did when standard communications protocols were created. C code example The following C language code example transmits a byte of data on an SPI bus. // transmit byte serially, MSB first void send_8bit_serial_data(unsigned char data) { int i; // select device (active low) output_low(SD_CS); // send bits 7..0 for (i = 0; i < 8; i++) { // consider leftmost bit // set line high if bit is 1, low if bit is 0 if (data & 0x80) output_high(SD_DI); else output_low(SD_DI); // pulse the clock state to indicate that bit value should be read output_low(SD_CLK); delay(); output_high(SD_CLK); // shift byte left so next bit will be leftmost data <<= 1; } // deselect device output_high(SD_CS); } Considerations The question whether to deploy bit banging or not is a trade-off between load, performance and reliability on one hand, and the availability of a hardware alternative on the other. The software emulation process consumes more " https://en.wikipedia.org/wiki/Runtime%20application%20self-protection,"Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software. The technology differs from perimeter-based protections such as firewalls, that can only detect and block attacks by using network information without contextual awareness. RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering. RASP-protected applications rely less on external devices like firewalls to provide runtime security protection. When a threat is detected RASP can prevent exploitation and possibly take other actions, including terminating a user's session, shutting the application down, alerting security personnel and sending a warning to the user. RASP aims to close the gap left by application security testing and network perimeter controls, neither of which have enough insight into real-time data and event flows to either prevent vulnerabilities slipping through the review process or block new threats that were unforeseen during development. Implementation RASP can be integrated as a framework or module that runs in conjunction with a program's codes, libraries and system calls. The technology can also be implemented as a virtualization. RASP is similar to interactive application security testing (IAST), the key difference is that IAST is focused on identifying vulnerabilities within the applications and RASPs are focused protecting against cybersecurity attacks that may take advantages of those vulnerabilities or other attack vectors. Deployment options RASP solutions can be deployed in two different ways: monitor or protection mode. 
In monitor mode, the RASP solution reports on web application attacks but does not block any attack. In protection mode, the RASP solution reports and blocks web a" https://en.wikipedia.org/wiki/Packet%20processing,"In digital communications networks, packet processing refers to the wide variety of algorithms that are applied to a packet of data or information as it moves through the various network elements of a communications network. With the increased performance of network interfaces, there is a corresponding need for faster packet processing. There are two broad classes of packet processing algorithms that align with the standardized network subdivision of control plane and data plane. The algorithms are applied to either: Control information contained in a packet which is used to transfer the packet safely and efficiently from origin to destination or The data content (frequently called the payload) of the packet which is used to provide some content-specific transformation or take a content-driven action. Within any network enabled device (e.g. router, switch, network element or terminal such as a computer or smartphone) it is the packet processing subsystem that manages the traversal of the multi-layered network or protocol stack from the lower, physical and network layers all the way through to the application layer. History The history of packet processing is the history of the Internet and packet switching. Packet processing milestones include: 1962–1968: Early research into packet switching 1969: 1st two nodes of ARPANET connected; 15 sites connected by end of 1971 with email as a new application 1973: Packet switched voice connections over ARPANET with Network Voice Protocol. File Transfer Protocol (FTP) specified 1974: Transmission Control Protocol (TCP) specified 1979: VoIP – NVP running on early versions of IP 1981: IP and TCP standardized 1982: TCP/IP standardized 1991: World Wide Web (WWW) released by CERN, authored by Tim Berners-Lee 1998: IPv6 first published Historical references and timeline can be found in the External Resources section below. Communications models For networks to succeed it is necessary to have a unifying standard " https://en.wikipedia.org/wiki/Classification%20of%20Fatou%20components,"In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou. Rational case If f is a rational function defined in the extended complex plane, and if it is a nonlinear function (degree > 1) then for a periodic component of the Fatou set, exactly one of the following holds: contains an attracting periodic point is parabolic is a Siegel disc: a simply connected Fatou component on which f(z) is analytically conjugate to a Euclidean rotation of the unit disc onto itself by an irrational rotation angle. is a Herman ring: a double connected Fatou component (an annulus) on which f(z) is analytically conjugate to a Euclidean rotation of a round annulus, again by an irrational rotation angle. Attracting periodic point The components of the map contain the attracting points that are the solutions to . This is because the map is the one to use for finding solutions to the equation by Newton–Raphson formula. The solutions must naturally be attracting fixed points. Herman ring The map and t = 0.6151732... will produce a Herman ring. It is shown by Shishikura that the degree of such map must be at least 3, as in this example. 
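The passage above notes that, for the Newton–Raphson map of a polynomial, the Fatou components contain the attracting fixed points given by the roots (the specific map and equation are elided in this excerpt). As a rough illustration only — the cubic p(z) = z^3 - 1, the grid, and the tolerances below are arbitrary choices, not taken from the article — the following C sketch classifies starting points by the root their Newton orbit converges to; each basin of attraction lies in the Fatou set of the Newton map.

/* Illustrative sketch (assumed example p(z) = z^3 - 1): basins of attraction
 * of the Newton map N(z) = z - p(z)/p'(z).  Each root of p is an attracting
 * fixed point of N, and the starting points converging to a given root lie
 * in Fatou components of N. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

static int newton_basin(double complex z) {
    const double PI = 3.14159265358979323846;
    const double complex roots[3] = {
        1.0,
        cexp(2.0 * PI * I / 3.0),
        cexp(4.0 * PI * I / 3.0)
    };
    for (int it = 0; it < 60; ++it) {
        double complex dp = 3.0 * z * z;           /* p'(z) */
        if (cabs(dp) < 1e-14) break;               /* avoid dividing by ~0 */
        z -= (z * z * z - 1.0) / dp;               /* Newton step */
        for (int r = 0; r < 3; ++r)
            if (cabs(z - roots[r]) < 1e-9)
                return r;                          /* converged to root r */
    }
    return -1;                                     /* undecided */
}

int main(void) {
    /* Coarse text rendering of the three basins over [-2,2] x [-2,2]. */
    for (int row = 0; row < 24; ++row) {
        for (int col = 0; col < 48; ++col) {
            double complex z = (-2.0 + 4.0 * col / 47.0)
                             + (-2.0 + 4.0 * row / 23.0) * I;
            int b = newton_basin(z);
            putchar(b < 0 ? ' ' : "ABC"[b]);
        }
        putchar('\n');
    }
    return 0;
}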
More than one type of component If the degree d is greater than 2, then there is more than one critical point and there can be more than one type of component. Transcendental case Baker domain In the case of transcendental functions there is another type of periodic Fatou component, called a Baker domain: these are ""domains on which the iterates tend to an essential singularity (not possible for polynomials and rational functions)""; one example of such a function is: Wandering domain Transcendental maps may have wandering domains: these are Fatou components that are not eventually periodic. See also No-wandering-domain theorem Montel's theorem John Domains Basins of attraction" https://en.wikipedia.org/wiki/Square%20root%20of%203,"The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3. It is denoted mathematically as or . It is more precisely called the principal square root of 3 to distinguish it from the negative number with the same property. The square root of 3 is an irrational number. It is also known as Theodorus' constant, after Theodorus of Cyrene, who proved its irrationality. , its numerical value in decimal notation had been computed to at least ten billion digits. Its decimal expansion, written here to 65 decimal places, is given by : The fraction (...) can be used as a good approximation. Despite having a denominator of only 56, it differs from the correct value by less than (approximately , with a relative error of ). The rounded value of is correct to within 0.01% of the actual value. The fraction (...) is accurate to . Archimedes reported a range for its value: . The lower limit is an accurate approximation for to (six decimal places, relative error ) and the upper limit to (four decimal places, relative error ). Expressions It can be expressed as the continued fraction . So it is true to say: then when : It can also be expressed by generalized continued fractions such as which is evaluated at every second term. Geometry and trigonometry The square root of 3 can be found as the leg length of an equilateral triangle that encompasses a circle with a diameter of 1. If an equilateral triangle with sides of length 1 is cut into two equal halves, by bisecting an internal angle across to make a right angle with one side, the right angle triangle's hypotenuse is length one, and the sides are of length and . From this, , , and . The square root of 3 also appears in algebraic expressions for various other trigonometric constants, including the sines of 3°, 12°, 15°, 21°, 24°, 33°, 39°, 48°, 51°, 57°, 66°, 69°, 75°, 78°, 84°, and 87°. It is the distance between parallel sides of a regular hexagon with sides of " https://en.wikipedia.org/wiki/Microbivory,"Microbivory (adj. microbivorous, microbivore) is a feeding behavior consisting of eating microbes (especially bacteria) practiced by animals of the mesofauna, microfauna and meiofauna. Microbivorous animals include some soil nematodes, springtails or flies such as Drosophila sharpi. A well known example of microbivorous nematodes is the model roundworm Caenorhabditis elegans which is maintained in culture in labs on agar plates, fed with the 'OP50' Escherichia coli strain of bacteria. In food webs of ecosystems, microbivores can be distinguished from detritivores, generally thought of as playing the role of decomposers, as they don't consume decaying dead matter but only living microorganisms. 
Use of term in robotics There is also use of the term 'microbivore' to qualify the concept of robots autonomously finding their energy in the production of bacteria. Robert Freitas has also proposed microbivore robots that would attack pathogens in the manner of white blood cells. See also Bacterivore" https://en.wikipedia.org/wiki/Biological%20constraints,"Biological constraints are factors which make populations resistant to evolutionary change. One proposed definition of constraint is ""A property of a trait that, although possibly adaptive in the environment in which it originally evolved, acts to place limits on the production of new phenotypic variants."" Constraint has played an important role in the development of such ideas as homology and body plans. Types of constraint Any aspect of an organism that has not changed over a certain period of time could be considered to provide evidence for ""constraint"" of some sort. To make the concept more useful, it is therefore necessary to divide it into smaller units. First, one can consider the pattern of constraint as evidenced by phylogenetic analysis and the use of phylogenetic comparative methods; this is often termed phylogenetic inertia, or phylogenetic constraint. It refers to the tendency of related taxa sharing traits based on phylogeny. Charles Darwin spoke of this concept in his 1859 book ""On the Origin of Species"", as being ""Unity of Type"" and went on to explain the phenomenon as existing because organisms do not start over from scratch, but have characteristics that are built upon already existing ones that were inherited from their ancestors; and these characteristics likely limit the amount of evolution seen in that new taxa due to these constraints. If one sees particular features of organisms that have not changed over rather long periods of time (many generations), then this could suggest some constraint on their ability to change (evolve). However, it is not clear that mere documentation of lack of change in a particular character is good evidence for constraint in the sense of the character being unable to change. For example, long-term stabilizing selection related to stable environments might cause stasis. It has often been considered more fruitful, to consider constraint in its causal sense: what are the causes of lack of change? Stabilizing s" https://en.wikipedia.org/wiki/List%20of%20sums%20of%20reciprocals,"In mathematics and especially number theory, the sum of reciprocals generally is computed for the reciprocals of some or all of the positive integers (counting numbers)—that is, it is generally the sum of unit fractions. If infinitely many numbers have their reciprocals summed, generally the terms are given in a certain sequence and the first n of them are summed, then one more is included to give the sum of the first n+1 of them, etc. If only finitely many numbers are included, the key issue is usually to find a simple expression for the value of the sum, or to require the sum to be less than a certain value, or to determine whether the sum is ever an integer. For an infinite series of reciprocals, the issues are twofold: First, does the sequence of sums diverge—that is, does it eventually exceed any given number—or does it converge, meaning there is some number that it gets arbitrarily close to without ever exceeding it? (A set of positive integers is said to be large if the sum of its reciprocals diverges, and small if it converges.) 
Second, if it converges, what is a simple expression for the value it converges to, is that value rational or irrational, and is that value algebraic or transcendental? Finitely many terms The harmonic mean of a set of positive integers is the number of numbers times the reciprocal of the sum of their reciprocals. The optic equation requires the sum of the reciprocals of two positive integers a and b to equal the reciprocal of a third positive integer c. All solutions are given by a = mn + m2, b = mn + n2, c = mn. This equation appears in various contexts in elementary geometry. The Fermat–Catalan conjecture concerns a certain Diophantine equation, equating the sum of two terms, each a positive integer raised to a positive integer power, to a third term that is also a positive integer raised to a positive integer power (with the base integers having no prime factor in common). The conjecture asks whether the equation has an infi" https://en.wikipedia.org/wiki/Perturbation%20theory,"In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into ""solvable"" and ""perturbative"" parts. In perturbation theory, the solution is expressed as a power series in a small parameter The first term is the known solution to the solvable problem. Successive terms in the series at higher powers of usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms, the solution to the known problem and the 'first order' perturbation correction. Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines. Description Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some ""small"" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution a series in the small parameter (here called ), like the following: In this example, would be the known solution to the exactly solvable initial problem, and the terms represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small these higher-order terms in the series generally (but not always) become successively small" https://en.wikipedia.org/wiki/Individuation,"The principle of individuation, or , describes the manner in which a thing is identified as distinct from other things. The concept appears in numerous fields and is encountered in works of Leibniz, Carl Jung, Gunther Anders, Gilbert Simondon, Bernard Stiegler, Friedrich Nietzsche, Arthur Schopenhauer, David Bohm, Henri Bergson, Gilles Deleuze, and Manuel DeLanda. Usage The word individuation occurs with different meanings and connotations in different fields. 
In philosophy Philosophically, ""individuation"" expresses the general idea of how a thing is identified as an individual thing that ""is not something else"". This includes how an individual person is held to be different from other elements in the world and how a person is distinct from other persons. By the seventeenth century, philosophers began to associate the question of individuation or what brings about individuality at any one time with the question of identity or what constitutes sameness at different points in time. In Jungian psychology In analytical psychology, individuation is the process by which the individual self develops out of an undifferentiated unconscious – seen as a developmental psychic process during which innate elements of personality, the components of the immature psyche, and the experiences of the person's life become, if the process is more or less successful, integrated over time into a well-functioning whole. Other psychoanalytic theorists describe it as the stage where an individual transcends group attachment and narcissistic self-absorption. In the news industry The news industry has begun using the term individuation to denote new printing and on-line technologies that permit mass customization of the contents of a newspaper, a magazine, a broadcast program, or a website so that its contents match each user's unique interests. This differs from the traditional mass-media practice of producing the same contents for all readers, viewers, listeners, or on-line users. Com" https://en.wikipedia.org/wiki/Davydov%20soliton,"In quantum biology, the Davydov soliton (after the Soviet Ukrainian physicist Alexander Davydov) is a quasiparticle representing an excitation propagating along the self-trapped amide I groups within the α-helices of proteins. It is a solution of the Davydov Hamiltonian. The Davydov model describes the interaction of the amide I vibrations with the hydrogen bonds that stabilize the α-helices of proteins. The elementary excitations within the α-helix are given by the phonons which correspond to the deformational oscillations of the lattice, and the excitons which describe the internal amide I excitations of the peptide groups. Referring to the atomic structure of an α-helix region of protein the mechanism that creates the Davydov soliton (polaron, exciton) can be described as follows: vibrational energy of the C=O stretching (or amide I) oscillators that is localized on the α-helix acts through a phonon coupling effect to distort the structure of the α-helix, while the helical distortion reacts again through phonon coupling to trap the amide I oscillation energy and prevent its dispersion. This effect is called self-localization or self-trapping. Solitons in which the energy is distributed in a fashion preserving the helical symmetry are dynamically unstable, and such symmetrical solitons once formed decay rapidly when they propagate. On the other hand, an asymmetric soliton which spontaneously breaks the local translational and helical symmetries possesses the lowest energy and is a robust localized entity. Davydov Hamiltonian Davydov Hamiltonian is formally similar to the Fröhlich-Holstein Hamiltonian for the interaction of electrons with a polarizable lattice. 
Thus the Hamiltonian of the energy operator is where is the exciton Hamiltonian, which describes the motion of the amide I excitations between adjacent sites; is the phonon Hamiltonian, which describes the vibrations of the lattice; and is the interaction Hamiltonian, which describes the interaction" https://en.wikipedia.org/wiki/Pre-charge,"Pre-charge of the powerline voltages in a high voltage DC application is a preliminary mode which limits the inrush current during the power up procedure. A high-voltage system with a large capacitive load can be exposed to high electric current during initial turn-on. This current, if not limited, can cause considerable stress or damage to the system components. In some applications, the occasion to activate the system is a rare occurrence, such as in commercial utility power distribution. In other systems such as vehicle applications, pre-charge will occur with each use of the system, multiple times per day. Precharging is implemented to increase the lifespan of electronic components and increase reliability of the high voltage system. Background: inrush currents into capacitors Inrush currents into capacitive components are a key concern in power-up stress to components. When DC input power is applied to a capacitive load, the step response of the voltage input will cause the input capacitor to charge. The capacitor charging starts with an inrush current and ends with an exponential decay down to the steady state condition. When the magnitude of the inrush peak is very large compared to the maximum rating of the components, then component stress is to be expected. The current into a capacitor is known to be : the peak inrush current will depend upon the capacitance C and the rate of change of the voltage (dV/dT). The inrush current will increase as the capacitance value increases, and the inrush current will increase as the voltage of the power source increases. This second parameter is of primary concern in high voltage power distribution systems. By their nature, high voltage power sources will deliver high voltage into the distribution system. Capacitive loads will then be subject to high inrush currents upon power-up. The stress to the components must be understood and minimized. The objective of a pre-charge function is to limit the magnitude of the inru" https://en.wikipedia.org/wiki/Hand-waving,"Hand-waving (with various spellings) is a pejorative label for attempting to be seen as effective – in word, reasoning, or deed – while actually doing nothing effective or substantial. It is often applied to debating techniques that involve fallacies, misdirection and the glossing over of details. It is also used academically to indicate unproven claims and skipped steps in proofs (sometimes intentionally, as in lectures and instructional materials), with some specific meanings in particular fields, including literary criticism, speculative fiction, mathematics, logic, science and engineering. The term can additionally be used in work situations, when attempts are made to display productivity or assure accountability without actually resulting in them. The term can also be used as a self-admission of, and suggestion to defer discussion about, an allegedly unimportant weakness in one's own argument's evidence, to forestall an opponent dwelling on it. In debate competition, certain cases of this form of hand-waving may be explicitly permitted. 
Hand-waving is an idiomatic metaphor, derived in part from the use of excessive gesticulation, perceived as unproductive, distracting or nervous, in communication or other effort. The term also evokes the sleight-of-hand distraction techniques of stage magic, and suggests that the speaker or writer seems to believe that if they, figuratively speaking, simply wave their hands, no one will notice or speak up about the holes in the reasoning. This implication of misleading intent has been reinforced by the pop-culture influence of the Star Wars franchise, in which mystically powerful hand-waving is fictionally used for mind control, and some uses of the term in public discourse are explicit Star Wars references. Actual hand-waving motions may be used either by a speaker to indicate a desire to avoid going into details, or by critics to indicate that they believe the proponent of an argument is engaging in a verbal hand-wave in" https://en.wikipedia.org/wiki/Del,"Del, or nabla, is an operator used in mathematics (particularly in vector calculus) as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes the standard derivative of the function as defined in calculus. When applied to a field (a function defined on a multi-dimensional domain), it may denote any one of three operations depending on the way it is applied: the gradient or (locally) steepest slope of a scalar field (or sometimes of a vector field, as in the Navier–Stokes equations); the divergence of a vector field; or the curl (rotation) of a vector field. Del is a very convenient mathematical notation for those three operations (gradient, divergence, and curl) that makes many equations easier to write and remember. The del symbol (or nabla) can be formally defined as a three-dimensional vector operator whose three components are the corresponding partial derivative operators. As a vector operator, it can act on scalar and vector fields in three different ways, giving rise to three different differential operations: first, it can act on scalar fields by a ""formal"" scalar multiplication—to give a vector field called the gradient; second, it can act on vector fields by a ""formal"" dot product—to give a scalar field called the divergence; and lastly, it can act on vector fields by a ""formal"" cross product—to give a vector field called the curl. These ""formal"" products do not necessarily commute with other operators or products. These three uses, detailed below, are summarized as: Gradient: Divergence: Curl: Definition In the Cartesian coordinate system with coordinates and standard basis , del is a vector operator whose components are the partial derivative operators ; that is, Where the expression in parentheses is a row vector. In three-dimensional Cartesian coordinate system with coordinates and standard basis or unit vectors of axes , del is written as As" https://en.wikipedia.org/wiki/Hardware-in-the-loop%20simulation,"Hardware-in-the-loop (HIL) simulation, HWIL, or HITL, is a technique that is used in the development and testing of complex real-time embedded systems. HIL simulation provides an effective testing platform by adding the complexity of the process-actuator system, known as a plant, to the test platform. The complexity of the plant under control is included in testing and development by adding a mathematical representation of all related dynamic systems. 
These mathematical representations are referred to as the ""plant simulation"". The embedded system to be tested interacts with this plant simulation. How HIL works HIL simulation must include electrical emulation of sensors and actuators. These electrical emulations act as the interface between the plant simulation and the embedded system under test. The value of each electrically emulated sensor is controlled by the plant simulation and is read by the embedded system under test (feedback). Likewise, the embedded system under test implements its control algorithms by outputting actuator control signals. Changes in the control signals result in changes to variable values in the plant simulation. For example, a HIL simulation platform for the development of automotive anti-lock braking systems may have mathematical representations for each of the following subsystems in the plant simulation: Vehicle dynamics, such as suspension, wheels, tires, roll, pitch and yaw; Dynamics of the brake system's hydraulic components; Road characteristics. Uses In many cases, the most effective way to develop an embedded system is to connect the embedded system to the real plant. In other cases, HIL simulation is more efficient. The metric of development and testing efficiency is typically a formula that includes the following factors: 1. Cost 2. Duration 3. Safety 4. Feasibility The cost of the approach should be a measure of the cost of all tools and effort. The duration of development and testing affects the time-to-market for " https://en.wikipedia.org/wiki/Electrochromatography,"Electrochromatography is a chemical separation technique in analytical chemistry, biochemistry and molecular biology used to resolve and separate mostly large biomolecules such as proteins. It is a combination of size exclusion chromatography (gel filtration chromatography) and gel electrophoresis. These separation mechanisms operate essentially in superposition along the length of a gel filtration column to which an axial electric field gradient has been added. The molecules are separated by size due to the gel filtration mechanism and by electrophoretic mobility due to the gel electrophoresis mechanism. Additionally there are secondary chromatographic solute retention mechanisms. Capillary electrochromatography Capillary electrochromatography (CEC) is an electrochromatography technique in which the liquid mobile phase is driven through a capillary containing the chromatographic stationary phase by electroosmosis. It is a combination of high-performance liquid chromatography and capillary electrophoresis. The capillary is packed with an HPLC stationary phase and a high voltage is applied; separation is achieved by electrophoretic migration of the analyte and differential partitioning in the stationary phase. See also Chromatography Protein electrophoresis Electrofocusing Two-dimensional gel electrophoresis Temperature gradient gel electrophoresis" https://en.wikipedia.org/wiki/List%20of%20homological%20algebra%20topics,"This is a list of homological algebra topics, by Wikipedia page. 
Basic techniques Cokernel Exact sequence Chain complex Differential module Five lemma Short five lemma Snake lemma Nine lemma Extension (algebra) Central extension Splitting lemma Projective module Injective module Projective resolution Injective resolution Koszul complex Exact functor Derived functor Ext functor Tor functor Filtration (abstract algebra) Spectral sequence Abelian category Triangulated category Derived category Applications Group cohomology Galois cohomology Lie algebra cohomology Sheaf cohomology Whitehead problem Homological conjectures in commutative algebra Homological algebra" https://en.wikipedia.org/wiki/Lorenzo%27s%20oil,"Lorenzo’s oil is liquid solution, made of 4 parts glycerol trioleate and 1 part glycerol trierucate, which are the triacylglycerol forms of oleic acid and erucic acid. It is prepared from olive oil and rapeseed oil. It is used in the investigational treatment of asymptomatic patients with adrenoleukodystrophy (ALD), a nervous system disorder. The development of the oil was led by Augusto and Michaela Odone after their son Lorenzo was diagnosed with the disease in 1984, at the age of five. Lorenzo was predicted to die within a few years. His parents sought experimental treatment options, and the initial formulation of the oil was developed by retired British scientist Don Suddaby (formerly of Croda International). Suddaby and his colleague, Keith Coupland, received U.S. Patent No. 5,331,009 for the oil. The royalties received by Augusto were paid to The Myelin Project which he and Michaela founded to further research treatments for ALD and similar disorders. The Odones and their invention obtained widespread publicity in 1992 because of the film Lorenzo's Oil. Research on the effectiveness of Lorenzo's Oil has seen mixed results, with possible benefit for asymptomatic ALD patients but of unpredictable or no benefit to those with symptoms, suggesting its possible role as a preventative measure in families identified as ALD dominant. Lorenzo Odone died on May 30, 2008, at the age of 30; he was bedridden with paralysis and died from aspiration pneumonia, likely caused by having inhaled food. Treatment costs Lorenzo's oil costs approximately $400 USD for a month's treatment. Proposed mechanism of action The mixture of fatty acids purportedly reduces the levels of very long chain fatty acids (VLCFAs), which are elevated in ALD. It does so by competitively inhibiting the enzyme that forms VLCFAs. Effectiveness Lorenzo's oil, in combination with a diet low in VLCFA, has been investigated for its possible effects on the progression of ALD. Clinical results have been mi" https://en.wikipedia.org/wiki/Noether%27s%20theorem,"Noether's theorem or Noether's first theorem states that every differentiable symmetry of the action of a physical system with conservative forces has a corresponding conservation law. The theorem was proven by mathematician Emmy Noether in 1915 and published in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries over physical space. Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. 
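As a compact illustration of the statement just quoted (a standard textbook example, not part of the excerpt): for a free particle, invariance of the Lagrangian under spatial translation yields conservation of linear momentum.

% Assumed textbook example, not from the article: translation symmetry of a
% free particle implies conservation of linear momentum.
L(x,\dot{x}) = \tfrac{1}{2} m \dot{x}^{2},
\qquad x \mapsto x + \epsilon \quad\Rightarrow\quad \delta L = 0 .
% The associated Noether charge and its conservation:
Q = \frac{\partial L}{\partial \dot{x}}\,\delta x = m \dot{x}\,\epsilon,
\qquad
\frac{dQ}{dt} = m \ddot{x}\,\epsilon = 0
\quad \text{by the equation of motion } m\ddot{x} = 0 .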
A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law. Briefly, the relationships between symmetries and conservation laws are as follows: 1) Uniformity of space distance-wise ⟹ conservation of linear momentum; 2) Isotropy of space direction-wise ⟹ conservation of angular momentum; 3) Uniformity of time ⟹ conservation of energy Basic illustrations and background As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it's invariant), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that " https://en.wikipedia.org/wiki/NAT%20traversal%20with%20session%20border%20controllers,"Network address translators (NAT) are used to overcome the lack of IPv4 address availability by hiding an enterprise or even an operator's network behind one or few IP addresses. The devices behind the NAT use private IP addresses that are not routable in the public Internet. The Session Initiation Protocol (SIP) has established itself as the de facto standard for voice over IP (VoIP) communication. In order to establish a call, a caller sends a SIP message, which contains its own IP address. The callee is supposed to reply back with a SIP message destined to the IP addresses included in the received SIP message. This will obviously not work if the caller is behind a NAT and is using a private IP address. Probably the single biggest mistake in SIP design was ignoring the existence of NATs. This error came from a belief in IETF leadership that IP address space would be exhausted more rapidly and would necessitate global upgrade to IPv6 and eliminate the need for NATs. The SIP standard has assumed that NATs do not exist, an assumption, which turned out to be a failure. SIP simply didn't work for the majority of Internet users who are behind NATs. At the same time it became apparent that the standardization life-cycle is slower than how the market ticks: Session Border Controllers (SBC) were born, and began to fix what the standards failed to do: NAT traversal. In case a user agent is located behind a NAT then it will use a private IP address as its contact address in the contact and via headers as well as the SDP part. This information would then be useless for anyone trying to contact this user agent from the public Internet. There are different NAT traversal solutions such as STUN, TURN and ICE. Which solution to use depends on the behavior of the NAT and the call scenario. When using an SBC to solve the NAT traversal issues the most common approach for SBC is to act as the public interface of the user agents. This is achieved by replacing the user agent's cont" https://en.wikipedia.org/wiki/Undescribed%20taxon,"In taxonomy, an undescribed taxon is a taxon (for example, a species) that has been discovered, but not yet formally described and named. 
The various Nomenclature Codes specify the requirements for a new taxon to be validly described and named. Until such a description has been published, the taxon has no formal or official name, although a temporary, informal name is often used. A published scientific name may not fulfil the requirements of the Codes for various reasons. For example, if the taxon was not adequately described, its name is called a nomen nudum. It is possible for a taxon to be ""undescribed"" for an extensive period of time, even if unofficial descriptions are published. An undescribed species may be referred to with the genus name, followed by ""sp""., but this abbreviation is also used to label specimens or images that are too incomplete to be identified at the species level. In some cases, there is more than one undescribed species in a genus. In this case, these are often referred to by a number or letter. In the shark genus Pristiophorus, for example, there were, for some time, four undescribed species, informally named Pristiophorus sp. A, B, C and D. (In 2008, sp. A was described as Pristiophorus peroniensis and sp. B as P. delicatus.) When a formal description for species C or D is published, its temporary name will be replaced with a proper binomial name. Provisional names in bacteriology In bacteriology, a valid publication of a name requires the deposition of the bacteria in a Bacteriology Culture Collection. Species for which this is impossible cannot receive a valid binomial name; these species are classified as Candidatus. Provisional names in botany A provisional name for a species may consist of the number or of some other designation of a specimen in a herbarium or other collection. It may also consist of the genus name followed by such a specimen identifier or by a provisional specific epithet which is enclosed by quotation marks. In" https://en.wikipedia.org/wiki/Prime%20constant,"The prime constant is the real number whose th binary digit is 1 if is prime and 0 if is composite or 1. In other words, is the number whose binary expansion corresponds to the indicator function of the set of prime numbers. That is, where indicates a prime and is the characteristic function of the set of prime numbers. The beginning of the decimal expansion of ρ is: The beginning of the binary expansion is: Irrationality The number can be shown to be irrational. To see why, suppose it were rational. Denote the th digit of the binary expansion of by . Then since is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers and such that for all and all . Since there are an infinite number of primes, we may choose a prime . By definition we see that . As noted, we have for all . Now consider the case . We have , since is composite because . Since we see that is irrational." https://en.wikipedia.org/wiki/List%20of%20recreational%20number%20theory%20topics,"This is a list of recreational number theory topics (see number theory, recreational mathematics). Listing here is not pejorative: many famous topics in number theory have origins in challenging problems posed purely for their own sake. See list of number theory topics for pages dealing with aspects of number theory with more consolidated theories. 
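In that recreational spirit, and referring back to the prime constant entry above (this sketch appears in neither excerpt, and the cutoff of 64 terms is an arbitrary choice): summing 2^(-n) over the primes n directly from the definition already reproduces the leading digits of ρ.

/* Illustrative sketch: approximate the prime constant rho = sum over primes p
 * of 2^(-p) directly from its definition, using trial-division primality.
 * Terms beyond 2^(-64) are below double precision anyway. */
#include <math.h>
#include <stdio.h>

static int is_prime(unsigned n) {
    if (n < 2) return 0;
    for (unsigned d = 2; d * d <= n; ++d)
        if (n % d == 0) return 0;
    return 1;
}

int main(void) {
    double rho = 0.0;
    for (unsigned n = 2; n <= 64; ++n)
        if (is_prime(n))
            rho += ldexp(1.0, -(int)n);   /* add 2^(-n) */
    printf("rho ~= %.12f\n", rho);        /* approx. 0.414682509851 */
    return 0;
}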
Number sequences Integer sequence Fibonacci sequence Golden mean base Fibonacci coding Lucas sequence Padovan sequence Figurate numbers Polygonal number Triangular number Square number Pentagonal number Hexagonal number Heptagonal number Octagonal number Nonagonal number Decagonal number Centered polygonal number Centered square number Centered pentagonal number Centered hexagonal number Tetrahedral number Pyramidal number Triangular pyramidal number Square pyramidal number Pentagonal pyramidal number Hexagonal pyramidal number Heptagonal pyramidal number Octahedral number Star number Perfect number Quasiperfect number Almost perfect number Multiply perfect number Hyperperfect number Semiperfect number Primitive semiperfect number Unitary perfect number Weird number Untouchable number Amicable number Sociable number Abundant number Deficient number Amenable number Aliquot sequence Super-Poulet number Lucky number Powerful number Primeval number Palindromic number Telephone number Triangular square number Harmonic divisor number Sphenic number Smith number Double Mersenne number Zeisel number Heteromecic number Niven numbers Superparticular number Highly composite number Highly totient number Practical number Juggler sequence Look-and-say sequence Digits Polydivisible number Automorphic number Armstrong number Self number Harshad number Keith number Kaprekar number Digit sum Persistence of a number Perfect digital invariant Happy number Perfect digit-to-digit invariant Factorion Emirp Palindromic prime Home prime Normal number Stoneham number Champernowne constant Absolutely normal number Repunit Repdigit Prime and r" https://en.wikipedia.org/wiki/Telematic%20control%20unit,"A telematic control unit (TCU) in the automobile industry is the embedded system on board a vehicle that wirelessly connects the vehicle to cloud services or other vehicles via V2X standards over a cellular network. The TCU collects telemetry data from the vehicle, such as position, speed, engine data, connectivity quality, etc., from various sub-systems over data and control busses. It may also provide in-vehicle connectivity via Wifi and Bluetooth and implements the eCall function when applicable. In the automotive domain, a TCU can also be a transmission control unit. A TCU consists of: a satellite navigation (GNSS) unit, which keeps track of the latitude and longitude values of the vehicle; an external interface for mobile communication (GSM, GPRS, Wi-Fi, WiMax, LTE or 5G), which provides the tracked values to a centralized geographical information system (GIS) database server; an electronic processing unit; a microcontroller, microprocessor, or field programmable gate array (FPGA) which processes the information and acts as an interface to the GPS; a mobile communication unit; memory for saving GPS values in mobile-free zones or to intelligently store information about the vehicle's sensor data. battery module See also Telematics Auto parts Embedded systems External links What is a Telematics Control Unit & How does it Work? Automotive telematics control unit (TCU) architecture What is a Telematics Control Unit (TCU)?" https://en.wikipedia.org/wiki/Food%20engineering,"Food engineering is a scientific, academic, and professional field that interprets and applies principles of engineering, science, and mathematics to food manufacturing and operations, including the processing, production, handling, storage, conservation, control, packaging and distribution of food products. 
Given its reliance on food science and broader engineering disciplines such as electrical, mechanical, civil, chemical, industrial and agricultural engineering, food engineering is considered a multidisciplinary and narrow field. Due to the complex nature of food materials, food engineering also combines the study of more specific chemical and physical concepts such as biochemistry, microbiology, food chemistry, thermodynamics, transport phenomena, rheology, and heat transfer. Food engineers apply this knowledge to the cost-effective design, production, and commercialization of sustainable, safe, nutritious, healthy, appealing, affordable and high-quality ingredients and foods, as well as to the development of food systems, machinery, and instrumentation. History Although food engineering is a relatively recent and evolving field of study, it is based on long-established concepts and activities. The traditional focus of food engineering was preservation, which involved stabilizing and sterilizing foods, preventing spoilage, and preserving nutrients in food for prolonged periods of time. More specific traditional activities include food dehydration and concentration, protective packaging, canning and freeze-drying. The development of food technologies was greatly influenced and urged by wars and long voyages, including space missions, where long-lasting and nutritious foods were essential for survival. Other ancient activities include milling, storage, and fermentation processes. Although several traditional activities remain of concern and form the basis of today’s technologies and innovations, the focus of food engineering has recently shifted to food qua" https://en.wikipedia.org/wiki/Washout%20filter,"In signal processing, a washout filter is a stable high-pass filter with zero static gain. This leads to the filtering of lower-frequency input signals, leaving the steady-state output unaffected by unwanted low-frequency inputs. General Background The common transfer function for a washout filter is: Where is the input variable, is the output of the function for the filter, and the frequency of the filter is set in the denominator. This filter will produce a non-zero output only during transient periods, when the input signal is of higher frequency and not at a constant steady-state value. Conversely, the filter will “wash out” sensed input signals that are of lower frequency (constant steady-state signals). [C.K. Wang] Flight Control Application Yaw Control System In modern swept-wing aircraft, yaw damping control systems are used to dampen and stabilize the Dutch-roll motion of an aircraft in flight. However, when a pilot inputs a command to yaw the aircraft for maneuvering (such as steady turns), the rudder becomes a single control surface that functions both to dampen the Dutch-roll motion and to yaw the aircraft. The result is a suppressed yaw rate and more required input from the pilot to counter the suppression. [C.K. Wang] To counter the yaw command suppression, the installation of washout filters before the yaw dampers and rudder actuators allows the yaw damper feedback loop in the control system to filter out low-frequency signals or state inputs. In the case of a steady turn during flight, the low-frequency signal is the pilot command, and the washout filter allows the turn command signal not to be dampened by the yaw damper in the feedback circuit. [C.K. Wang] An example of this use can be found in Yaw Damper Design for a 747® Jet Aircraft."
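A minimal discrete-time sketch of the behaviour just described (illustrative only, not from the article; the transfer function is discretized with a simple backward difference, and tau and T are arbitrary choices): a first-order high-pass section with zero static gain passes transient input changes but washes out any constant steady-state level, which is what lets a steady turn command pass the yaw-damper loop unopposed.

/* Illustrative washout (high-pass, zero static gain) filter,
 * roughly W(s) = tau*s / (tau*s + 1) discretized as
 *     y[n] = a * ( y[n-1] + x[n] - x[n-1] ),   a = tau / (tau + T).
 * A constant input yields an output decaying to zero; fast changes pass. */
#include <stdio.h>

typedef struct {
    double a;       /* tau / (tau + T)  */
    double y_prev;  /* previous output  */
    double x_prev;  /* previous input   */
} washout_t;

static void washout_init(washout_t *w, double tau, double T) {
    w->a = tau / (tau + T);
    w->y_prev = 0.0;
    w->x_prev = 0.0;
}

static double washout_step(washout_t *w, double x) {
    double y = w->a * (w->y_prev + x - w->x_prev);
    w->y_prev = y;
    w->x_prev = x;
    return y;
}

int main(void) {
    washout_t w;
    washout_init(&w, 1.0, 0.01);            /* tau = 1 s, T = 10 ms (arbitrary) */
    for (int n = 0; n < 500; ++n) {
        double x = (n >= 100) ? 1.0 : 0.0;  /* step input at n = 100 */
        double y = washout_step(&w, x);
        if (n % 100 == 0)
            printf("n = %3d   x = %.1f   y = %.4f\n", n, x, y);
    }
    return 0;
}

In the yaw-damper setting described above, x would be the sensed yaw rate and y the signal actually fed to the damper, so a sustained commanded turn is not opposed while short-period Dutch-roll oscillations still are.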
https://en.wikipedia.org/wiki/Laplace%20limit,"In mathematics, the Laplace limit is the maximum value of the eccentricity for which a solution to Kepler's equation, in terms of a power series in the eccentricity, converges. It is approximately 0.66274 34193 49181 58097 47420 97109 25290. Kepler's equation M = E − ε sin E relates the mean anomaly M with the eccentric anomaly E for a body moving in an ellipse with eccentricity ε. This equation cannot be solved for E in terms of elementary functions, but the Lagrange reversion theorem gives the solution as a power series in ε: or in general Laplace realized that this series converges for small values of the eccentricity, but diverges for any value of M other than a multiple of π if the eccentricity exceeds a certain value that does not depend on M. The Laplace limit is this value. It is the radius of convergence of the power series. It is given by the solution to the transcendental equation No closed-form expression or infinite series is known for the Laplace limit. History Laplace calculated the value 0.66195 in 1827. The Italian astronomer Francesco Carlini found the limit 0.66 five years before Laplace. Cauchy in the 1829 gave the precise value 0.66274. See also Orbital eccentricity" https://en.wikipedia.org/wiki/Linear%20time-invariant%20system,"In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. These properties apply (exactly or approximately) to many important physical systems, in which case the response of the system to an arbitrary input can be found directly using convolution: where is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining ), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers. Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves. Overview The defining properties of any LTI system are linearity and time invariance. Linearity means that the relationship between the input and the output , both being regarded as functions, is a linear mapping: If is a constant then the system output to is ; if is a further input with system output then the output of the" https://en.wikipedia.org/wiki/QualNet,"QualNet is a testing and simulation tool owned and provided by Scalable Network Technologies, Inc. 
As network simulation software, it acts as a planning, testing, and training tool which mimics the behavior of a physical communications network. See also Network simulation Wireless networking Computer network analysis Computer networking Simulation software" https://en.wikipedia.org/wiki/Dirac%20comb,"In mathematics, a Dirac comb (also known as sha function, impulse train or sampling function) is a periodic function with the formula for some given period . Here t is a real variable and the sum extends over all integers k. The Dirac delta function and the Dirac comb are tempered distributions. The graph of the function resembles a comb (with the s as the comb's teeth), hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function. The symbol , where the period is omitted, represents a Dirac comb of unit period. This implies Because the Dirac comb function is periodic, it can be represented as a Fourier series based on the Dirichlet kernel: The Dirac comb function allows one to represent both continuous and discrete phenomena, such as sampling and aliasing, in a single framework of continuous Fourier analysis on tempered distributions, without any reference to Fourier series. The Fourier transform of a Dirac comb is another Dirac comb. Owing to the Convolution Theorem on tempered distributions which turns out to be the Poisson summation formula, in signal processing, the Dirac comb allows modelling sampling by multiplication with it, but it also allows modelling periodization by convolution with it. Dirac-comb identity The Dirac comb can be constructed in two ways, either by using the comb operator (performing sampling) applied to the function that is constantly , or, alternatively, by using the rep operator (performing periodization) applied to the Dirac delta . Formally, this yields (; ) where and In signal processing, this property on one hand allows sampling a function by multiplication with , and on the other hand it also allows the periodization of by convolution with (). The Dirac comb identity is a particular case of the Convolution Theorem for tempered distributions. Scaling The scaling property of the Dirac comb follows from the properties of the Dirac delta function. Since for positive real numbers , it " https://en.wikipedia.org/wiki/Microsoft%20Support%20Diagnostic%20Tool,"The Microsoft Support Diagnostic Tool (MSDT) is a legacy service in Microsoft Windows that allows Microsoft technical support agents to analyze diagnostic data remotely for troubleshooting purposes. In April 2022 it was observed to have a security vulnerability that allowed remote code execution which was being exploited to attack computers in Russia and Belarus, and later against the Tibetan government in exile. Microsoft advised a temporary workaround of disabling the MSDT by editing the Windows registry. Use When contacting support the user is told to run MSDT and given a unique ""passkey"" which they enter. They are also given an ""incident number"" to uniquely identify their case. The MSDT can also be run offline which will generate a .CAB file which can be uploaded from a computer with an internet connection. Security Vulnerabilities Follina Follina is the name given to a remote code execution (RCE) vulnerability, a type of arbitrary code execution (ACE) exploit, in the Microsoft Support Diagnostic Tool (MSDT) which was first widely publicized on May 27, 2022, by a security research group called Nao Sec. 
This exploit allows a remote attacker to use a Microsoft Office document template to execute code via MSDT. This works by exploiting the ability of Microsoft Office document templates to download additional content from a remote server. If the size of the downloaded content is large enough, it causes a buffer overflow that allows a payload of PowerShell code to be executed without explicit notification to the user. On May 30, Microsoft issued CVE-2022-30190 with guidance that users should disable MSDT. Malicious actors have been observed exploiting the bug to attack computers in Russia and Belarus since April, and it is believed Chinese state actors had been exploiting it to attack the Tibetan government in exile based in India. Microsoft patched this vulnerability in its June 2022 patches. DogWalk The DogWalk vulnerability is a remote code execution (RCE) vulne" https://en.wikipedia.org/wiki/Bandwidth%20expansion,"Bandwidth expansion is a technique for widening the bandwidth or the resonances in an LPC filter. This is done by moving all the poles towards the origin by a constant factor . The bandwidth-expanded filter can be easily derived from the original filter by: Let be expressed as: The bandwidth-expanded filter can be expressed as: In other words, each coefficient in the original filter is simply multiplied by in the bandwidth-expanded filter. The simplicity of this transformation makes it attractive, especially in CELP coding of speech, where it is often used for the perceptual noise weighting and/or to stabilize the LPC analysis. However, when it comes to stabilizing the LPC analysis, lag windowing is often preferred to bandwidth expansion. (A short code sketch of this coefficient scaling appears below.)" https://en.wikipedia.org/wiki/Biometric%20device,"A biometric device is a security identification and authentication device. Such devices use automated methods of verifying or recognising the identity of a living person based on a physiological or behavioral characteristic. These characteristics include fingerprints, facial images, iris and voice recognition. History Biometric devices have been in use for thousands of years. Non-automated biometric devices have been in use since 500 BC, when ancient Babylonians would sign their business transactions by pressing their fingertips into clay tablets. Automation in biometric devices was first seen in the 1960s. The Federal Bureau of Investigation (FBI) in the 1960s introduced the Indentimat, which started checking for fingerprints to maintain criminal records. The first systems measured the shape of the hand and the length of the fingers. Although discontinued in the 1980s, the system set a precedent for future biometric devices. Types of biometric devices There are two categories of biometric devices: Contact devices - these types of devices need contact with a body part of a living person. They are mainly fingerprint scanners, either single fingerprint, dual fingerprint or slap (4+4+2) fingerprint scanners, and hand geometry scanners. Contactless devices - these devices do not need any type of contact. The main examples of these are face, iris, retina and palm vein scanners and voice identification devices. Subgroups Characteristics of the human body are used to grant users access to information. According to these characteristics, the sub-divided groups are: Chemical biometric devices: Analyses the segments of the DNA to grant access to the users. 
Visual biometric devices: Analyses the visual features of the humans to grant access which includes iris recognition, face recognition, Finger recognition, and Retina Recognition. Behavioral biometric devices: Analyses the Walking Ability and Signatures (velocity of sign, width of sign, pressure of sign) distinct to" https://en.wikipedia.org/wiki/Anamorphosis,"Anamorphosis is a distorted projection that requires the viewer to occupy a specific vantage point, use special devices, or both to view a recognizable image. It is used in painting, photography, sculpture and installation, toys, and film special effects. The word is derived from the Greek prefix ana-, meaning ""back"" or ""again"", and the word morphe, meaning ""shape"" or ""form"". Extreme anamorphosis has been used by artists to disguise caricatures, erotic and scatological scenes, and other furtive images from a casual spectator, while revealing an undistorted image to the knowledgeable viewer. Types of projection There are two main types of anamorphosis: perspective (oblique) and mirror (catoptric). More complex anamorphoses can be devised using distorted lenses, mirrors, or other optical transformations. An oblique anamorphism forms an affine transformation of the subject. Early examples of perspectival anamorphosis date to the Renaissance of the fifteenth century and largely relate to religious themes. With mirror anamorphosis, a conical or cylindrical mirror is placed on the distorted drawing or painting to reveal an undistorted image. The deformed picture relies on laws regarding angles of incidence of reflection. The length of the flat drawing's curves are reduced when viewed in a curved mirror, such that the distortions resolve into a recognizable picture. Unlike perspective anamorphosis, catoptric images can be viewed from many angles. The technique was originally developed in China during the Ming Dynasty, and the first European manual on mirror anamorphosis was published around 1630 by the mathematician Vaulezard. Channel anamorphosis or tabula scalata has a different image on each side of a corrugated carrier. A straight frontal view shows an unclear mix of the images, while each image can be viewed correctly from a certain angle. History Prehistory The Stone Age cave paintings at Lascaux may make use of anamorphic technique, because the oblique angle" https://en.wikipedia.org/wiki/Blind%20equalization,"Blind equalization is a digital signal processing technique in which the transmitted signal is inferred (equalized) from the received signal, while making use only of the transmitted signal statistics. Hence, the use of the word blind in the name. Blind equalization is essentially blind deconvolution applied to digital communications. Nonetheless, the emphasis in blind equalization is on online estimation of the equalization filter, which is the inverse of the channel impulse response, rather than the estimation of the channel impulse response itself. This is due to blind deconvolution common mode of usage in digital communications systems, as a means to extract the continuously transmitted signal from the received signal, with the channel impulse response being of secondary intrinsic importance. The estimated equalizer is then convolved with the received signal to yield an estimation of the transmitted signal. 
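Returning to the bandwidth expansion entry above: since moving every pole towards the origin by a constant factor is equivalent to scaling the i-th LPC coefficient by the i-th power of that factor, the whole transformation is a one-line operation. A minimal sketch, with the coefficients and the factor chosen arbitrarily for illustration:

```python
import numpy as np

def bandwidth_expand(a, gamma):
    """Scale the i-th LPC coefficient a[i] by gamma**i.

    a[0] is the zeroth coefficient (typically 1.0) and is left
    unchanged, since gamma**0 == 1.
    """
    powers = gamma ** np.arange(len(a))
    return a * powers

# Illustrative LPC coefficients and expansion factor (not from the article).
a = np.array([1.0, -1.8, 1.2, -0.5])
gamma = 0.96

print(bandwidth_expand(a, gamma))
```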
Problem statement Noiseless model Assuming a linear time invariant channel with impulse response , the noiseless model relates the received signal to the transmitted signal via The blind equalization problem can now be formulated as follows; Given the received signal , find a filter , called an equalization filter, such that where is an estimation of . The solution to the blind equalization problem is not unique. In fact, it may be determined only up to a signed scale factor and an arbitrary time delay. That is, if are estimates of the transmitted signal and channel impulse response, respectively, then give rise to the same received signal for any real scale factor and integral time delay . In fact, by symmetry, the roles of and are Interchangeable. Noisy model In the noisy model, an additional term, , representing additive noise, is included. The model is therefore Algorithms Many algorithms for the solution of the blind equalization problem have been suggested over the years. However, as one usually has access to only a finite number of s" https://en.wikipedia.org/wiki/Arithmetic%20logic%20unit,"In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs). The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed; the ALU's output is the result of the performed operation. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation, respectively, between the ALU and external status registers. Signals An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs. Data A basic ALU has three parallel data buses consisting of two input operands (A and B) and a result output (Y). Each data bus is a group of signals that conveys one binary integer number. Typically, the A, B and Y bus widths (the number of signals comprising each bus) are identical and match the native word size of the external circuitry (e.g., the encapsulating CPU or other processor). Opcode The opcode input is a parallel bus that conveys to the ALU an operation selection code, which is an enumerated value that specifies the desired arithmetic or logic operation to be performed by the ALU. The opcode size (its bus width) determines the maximum number of distinct operations the ALU can perform; for example, a four-bit opcode can specify up to sixteen different ALU operations. Generally, an ALU opcode is not the same as a machine langua" https://en.wikipedia.org/wiki/Abstract%20index%20notation,"Abstract index notation (also referred to as slot-naming index notation) is a mathematical notation for tensors and spinors that uses indices to indicate their types, rather than their components in a particular basis. The indices are mere placeholders, not related to any basis and, in particular, are non-numerical. Thus it should not be confused with the Ricci calculus. 
The notation was introduced by Roger Penrose as a way to use the formal aspects of the Einstein summation convention to compensate for the difficulty in describing contractions and covariant differentiation in modern abstract tensor notation, while preserving the explicit covariance of the expressions involved. Let be a vector space, and its dual space. Consider, for example, an order-2 covariant tensor . Then can be identified with a bilinear form on . In other words, it is a function of two arguments in which can be represented as a pair of slots: Abstract index notation is merely a labelling of the slots with Latin letters, which have no significance apart from their designation as labels of the slots (i.e., they are non-numerical): A tensor contraction (or trace) between two tensors is represented by the repetition of an index label, where one label is contravariant (an upper index corresponding to the factor ) and one label is covariant (a lower index corresponding to the factor ). Thus, for instance, is the trace of a tensor over its last two slots. This manner of representing tensor contractions by repeated indices is formally similar to the Einstein summation convention. However, as the indices are non-numerical, it does not imply summation: rather it corresponds to the abstract basis-independent trace operation (or natural pairing) between tensor factors of type and those of type . Abstract indices and tensor spaces A general homogeneous tensor is an element of a tensor product of copies of and , such as Label each factor in this tensor product with a Latin letter" https://en.wikipedia.org/wiki/VLSI%20Project,"The VLSI Project was a DARPA-program initiated by Robert Kahn in 1978 that provided research funding to a wide variety of university-based teams in an effort to improve the state of the art in microprocessor design, then known as Very Large Scale Integration (VLSI). The VLSI Project is one of the most influential research projects in modern computer history. Its offspring include Berkeley Software Distribution (BSD) Unix, the reduced instruction set computer (RISC) processor concept, many computer-aided design (CAD) tools still in use today, 32-bit graphics workstations, fabless manufacturing and design houses, and its own semiconductor fabrication plant (fab), MOSIS, starting in 1981. A similar DARPA project partnering with industry, VHSIC had little or no impact. The VLSI Project was central in promoting the Mead and Conway revolution throughout industry. Project New design rules In 1975, Carver Mead, Tom Everhart and Ivan Sutherland of Caltech wrote a report for ARPA on the topic of microelectronics. Over the previous few years, Mead had coined the term ""Moore's law"" to describe Gordon Moore's 1965 prediction for the growth rate of complexity, and in 1974, Robert Dennard of IBM noted that the scale shrinking that formed the basis of Moore's law also affected the performance of the systems. These combined effects implied a massive increase in computing power was about to be unleashed on the industry. The report, published in 1976, suggested that ARPA fund development across a number of fields in order to deal with the complexity that was about to appear due to these ""very-large-scale integrated circuits"". Later that year, Sutherland wrote a letter to his brother Bert who was at that time working at Xerox PARC. He suggested a joint effort between PARC and Caltech to begin studying these issues. Bert agreed to form a team, inviting Lynn Conway and Doug Fairbairn to join. 
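The abstract index notation entry above stresses that a repeated index denotes a basis-independent trace rather than a literal sum; once components in a particular basis are chosen, though, the contraction over a tensor's last two slots reduces to the familiar Einstein-style summation. A small component-level illustration follows, with the array shape and values as arbitrary assumptions.

```python
import numpy as np

# Components of a hypothetical rank-3 tensor in some chosen basis,
# stored as t[a, b, c].  (Shape and values are arbitrary for illustration.)
rng = np.random.default_rng(0)
t = rng.normal(size=(4, 4, 4))

# Trace over the last two slots: contract the second and third indices.
# In components this is sum_b t[a, b, b]; abstractly it corresponds to the
# basis-independent pairing described in the entry.
trace_last_two = np.einsum('abb->a', t)

# Equivalent explicit loop, for comparison.
check = np.array([sum(t[a, b, b] for b in range(4)) for a in range(4)])
assert np.allclose(trace_last_two, check)
print(trace_last_two)
```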
Conway had previously worked at IBM on a supercomputer project known as ACS-1. After consid" https://en.wikipedia.org/wiki/Toroidal%20solenoid,"The toroidal solenoid was an early 1946 design for a fusion power device designed by George Paget Thomson and Moses Blackman of Imperial College London. It proposed to confine a deuterium fuel plasma to a toroidal (donut-shaped) chamber using magnets, and then heating it to fusion temperatures using radio frequency energy in the fashion of a microwave oven. It is notable for being the first such design to be patented, filing a secret patent on 8 May 1946 and receiving it in 1948. A critique by Rudolf Peierls noted several problems with the concept. Over the next few years, Thomson continued to suggest starting an experimental effort to study these issues, but was repeatedly denied as the underlying theory of plasma diffusion was not well developed. When similar concepts were suggested by Peter Thonemann that included a more practical heating arrangement, John Cockcroft began to take the concept more seriously, establishing small study groups at Harwell. Thomson adopted Thonemann's concept, abandoning the radio frequency system. When the patent had still not been granted in early 1948, the Ministry of Supply inquired about Thomson's intentions. Thomson explained the problems he had getting a program started and that he did not want to hand off the rights until that was clarified. As the directors of the UK nuclear program, the Ministry quickly forced Harwell's hand to provide funding for Thomson's program. Thomson then released his rights the patent, which was granted late that year. Cockcroft also funded Thonemann's work, and with that, the UK fusion program began in earnest. After the news furor over the Huemul Project in February 1951, significant funding was released and led to rapid growth of the program in the early 1950s, and ultimately to the ZETA reactor of 1958. Conceptual development The basic understanding of nuclear fusion was developed during the 1920s as physicists explored the new science of quantum mechanics. George Gamow's 1928 work on quantum t" https://en.wikipedia.org/wiki/Autonomous%20things,"Autonomous things, abbreviated AuT, or the Internet of autonomous things, abbreviated as IoAT, is an emerging term for the technological developments that are expected to bring computers into the physical environment as autonomous entities without human direction, freely moving and interacting with humans and other objects. Self-navigating drones are the first AuT technology in (limited) deployment. It is expected that the first mass-deployment of AuT technologies will be the autonomous car, generally expected to be available around 2020. Other currently expected AuT technologies include home robotics (e.g., machines that provide care for the elderly, infirm or young), and military robots (air, land or sea autonomous machines with information-collection or target-attack capabilities). AuT technologies share many common traits, which justify the common notation. They are all based on recent breakthroughs in the domains of (deep) machine learning and artificial intelligence. They all require extensive and prompt regulatory developments to specify the requirements from them and to license and manage their deployment (see the further reading below). And they all require unprecedented levels of safety (e.g., automobile safety) and security, to overcome concerns about the potential negative impact of the new technology. 
As an example, the autonomous car both addresses the main existing safety issues and creates new issues. It is expected to be much safer than existing vehicles, by eliminating the single most dangerous elementthe driver. The US's National Highway Traffic Safety Administration estimates 94 percent of US accidents were the result of human error and poor decision-making, including speeding and impaired driving, and the Center for Internet and Society at Stanford Law School claims that ""Some ninety percent of motor vehicle crashes are caused at least in part by human error"". So while safety standards like the ISO 26262 specify the required safety, there is" https://en.wikipedia.org/wiki/Shit%20flow%20diagram,"A shit flow diagram (also called excreta flow diagram or SFD) is a high level technical drawing used to display how excreta moves through a location, and functions as a tool to identify where improvements are needed. The diagram has a particular focus on treatment of the waste, and its final disposal or use. SFDs are most often used in developing countries. Development In 2012–2013, the World Bank's Water and Sanitation Program sponsored a study on the fecal sludge management of twelve cities with the goal of developing tools for better understanding the flow of excreta through the cities. As a result, Isabel Blackett, Peter Hawkins, and Christiaan Heymans authored The missing link in sanitation service delivery: a review of fecal sludge management in 12 cities. Using this as a basis, a group of excreta management institutions began collaborating in June 2014 to continue development of SFDs. In November 2014, the SFD Promotion Initiative was started with funding from the Bill & Melinda Gates Foundation. Initially funded as a one year project, it was extended in 2015. In September 2019, the focus of the program shifted to scaling up the current methods of producing SFDs to allow for citywide sanitation in South Asia and Africa. As of 2021 more than 140 shit flow diagram reports have been published. The initiative is managed as part of the Sustainable Sanitation Alliance and is supported by the Bill and Melinda Gates Foundation. It is partnered with many nonprofit organizations such as the Centre for Science and Environment, EAWAG, and the Global Water Security & Sanitation Partnership. Use in developing countries The great majority of those living in urban areas, especially the poor, use non-sewer sanitation systems. This poses environmental and health challenges for growing urban areas in developing countries, and many of these countries will need to change their sanitation strategies as their population grows. Using a shit flow diagram allows political leaders" https://en.wikipedia.org/wiki/Abstraction%20layer,"In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem. Examples of software models that use layers of abstraction include the OSI model for network protocols, OpenGL, and other graphics libraries, which allow the separation of concerns to facilitate interoperability and platform independence. Another example is Media Transfer Protocol. In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities that are best encapsulated by models that express similarities present in various specific implementations. 
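As a tiny illustration of the abstraction-layer idea described in the abstraction layer entry, the sketch below defines a minimal key-value storage interface and one interchangeable implementation; callers depend only on the layer, not on what sits beneath it. All names here are invented for the example.

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """Abstraction layer: callers see only get/put, never the implementation."""

    @abstractmethod
    def put(self, key: str, value: str) -> None: ...

    @abstractmethod
    def get(self, key: str) -> str | None: ...

class InMemoryStore(KeyValueStore):
    """One possible implementation hidden behind the layer."""

    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def put(self, key: str, value: str) -> None:
        self._data[key] = value

    def get(self, key: str) -> str | None:
        return self._data.get(key)

def record_greeting(store: KeyValueStore) -> None:
    # Code written against the abstraction works with any implementation.
    store.put("greeting", "hello")
    print(store.get("greeting"))

record_greeting(InMemoryStore())
```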
The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations, where it may be accurately applied, can be quickly recognized. A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions. A famous aphorism of David Wheeler is, ""All problems in computer science can be solved by another level of indirection."" This is often deliberately misquoted with ""abstraction"" substituted for ""indirection."" It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, ""...except for the problem of too many layers of indirection."" Computer architecture In a computer architecture, a computer system is usually represented as consisting of several abstraction levels such as: software programmable logic hardware Programmable logic is often considered part of the hardware, while " https://en.wikipedia.org/wiki/Non-cellular%20life,"Non-cellular life, also known as acellular life, is life that exists without a cellular structure for at least part of its life cycle. Historically, most definitions of life postulated that an organism must be composed of one or more cells, but this is no longer considered necessary, and modern criteria allow for forms of life based on other structural arrangements. The primary candidates for non-cellular life are viruses. Some biologists consider viruses to be organisms, but others do not. Their primary objection is that no known viruses are capable of autonomous reproduction; they must rely on cells to copy them. Viruses as non-cellular life The nature of viruses was unclear for many years following their discovery as pathogens. They were described as poisons or toxins at first, then as ""infectious proteins"", but with advances in microbiology it became clear that they also possessed genetic material, a defined structure, and the ability to spontaneously assemble from their constituent parts. This spurred extensive debate as to whether they should be regarded as fundamentally organic or inorganic — as very small biological organisms or very large biochemical molecules — and since the 1950s many scientists have thought of viruses as existing at the border between chemistry and life; a gray area between living and nonliving. Viral replication and self-assembly has implications for the study of the origin of life, as it lends further credence to the hypotheses that cells and viruses could have started as a pool of replicators where selfish genetic information was parasitizing on producers in RNA world, as two strategies to survive, gained in response to environmental conditions, or as self-assembling organic molecules. Viroids Viroids are the smallest infectious pathogens known to biologists, consisting solely of short strands of circular, single-stranded RNA without protein coats. They are mostly plant pathogens and some are animal pathogens, from which some ar" https://en.wikipedia.org/wiki/VARAN,"VARAN (Versatile Automation Random Access Network) is a Fieldbus Ethernet-based industrial communication system. 
VARAN is a wired data network technology for local data networks (LAN) with the main application in the field of automation technology. It enables the exchange of data in the form of data frames between all LAN connected devices (controllers, input/output devices, drives, etc.). VARAN includes the definitions for types of cables and connectors, describes the physical signalling and specifies packet formats and protocols. From the perspective of the OSI model, VARAN specifies both the physical layer (OSI Layer 1) and the data link layer (OSI Layer 2). VARAN is a protocol according to the principle master-slave. The VARAN BUS USER ORGANIZATION (VNO) is responsible for the care of the Protocol." https://en.wikipedia.org/wiki/General-purpose%20input/output,"A general-purpose input/output (GPIO) is an uncommitted digital signal pin on an integrated circuit or electronic circuit (e.g. MCUs/MPUs) board which may be used as an input or output, or both, and is controllable by software. GPIOs have no predefined purpose and are unused by default. If used, the purpose and behavior of a GPIO is defined and implemented by the designer of higher assembly-level circuitry: the circuit board designer in the case of integrated circuit GPIOs, or system integrator in the case of board-level GPIOs. Integrated circuit GPIOs Integrated circuit (IC) GPIOs are implemented in a variety of ways. Some ICs provide GPIOs as a primary function whereas others include GPIOs as a convenient ""accessory"" to some other primary function. Examples of the former include the Intel 8255, which interfaces 24 GPIOs to a parallel communication bus, and various GPIO expander ICs, which interface GPIOs to serial communication buses such as I²C and SMBus. An example of the latter is the Realtek ALC260 IC, which provides eight GPIOs along with its main function of audio codec. Microcontroller ICs usually include GPIOs. Depending on the application, a microcontroller's GPIOs may comprise its primary interface to external circuitry or they may be just one type of I/O used among several, such as analog signal I/O, counter/timer, and serial communication. In some ICs, particularly microcontrollers, a GPIO pin may be capable of other functions than GPIO. Often in such cases it is necessary to configure the pin to operate as a GPIO (vis-á-vis its other functions) in addition to configuring the GPIO's behavior. Some microcontroller devices (e.g., Microchip dsPIC33 family) incorporate internal signal routing circuitry that allows GPIOs to be programmatically mapped to device pins. Field-programmable gate arrays (FPGA) extend this ability by allowing GPIO pin mapping, instantiation and architecture to be programmatically controlled. Board-level GPIOs Many circuit boar" https://en.wikipedia.org/wiki/Tetrad%20formalism,"The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a tetrad or vierbein. It is a special case of the more general idea of a vielbein formalism, which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting arbitrary for . In German, """" translates to ""four"", and """" to ""many"". 
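As a minimal sketch of driving a pin as described in the general-purpose input/output entry above, the following uses the RPi.GPIO Python library found on Raspberry Pi boards; the pin number is a hypothetical choice, and other platforms expose GPIOs through different APIs.

```python
import time
import RPi.GPIO as GPIO  # Raspberry Pi GPIO library; other platforms differ

LED_PIN = 17  # hypothetical BCM pin number chosen for illustration

GPIO.setmode(GPIO.BCM)         # address pins by their BCM (SoC) numbers
GPIO.setup(LED_PIN, GPIO.OUT)  # configure the pin as an output

try:
    for _ in range(5):         # toggle whatever is wired to the pin
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()             # release the pin configuration
```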
The general idea is to write the metric tensor as the product of two vielbeins, one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than a innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique. The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local tetrad. Compared to a completely coordinate free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions. The significance of the tetradic formalism appear in the E" https://en.wikipedia.org/wiki/Programmable%20load,"A programmable load is a type of test equipment or instrument which emulates DC or AC resistance loads normally required to perform functional tests of batteries, power supplies or solar cells. By virtue of being programmable, tests like load regulation, battery discharge curve measurement and transient tests can be fully automated and load changes for these tests can be made without introducing switching transient that might change the measurement or operation of the power source under test. Implementation Programmable loads most commonly use one transistor/FET, or an array of parallel connected transistors/FETs for more current handling, to act as a variable resistor. Internal circuitry in the equipment monitors the actual current through the transistor/FET, compares it to a user-programmed desired current, and through an error amplifier changes the drive voltage to the transistor/FET to dynamically change its resistance. This 'negative feedback' results in the actual current always matching the programmed desired current, regardless of other changes in the supplied voltage or other variables. Of course, if the power source is not able to supply the desired amount of current, the DC load equipment cannot furnish the difference; it can restrict current to a level, but it cannot boost current to a higher level. Most commercial DC loads are equipped with microprocessor front end circuits that allow the user to not only program a desired current through the load ('constant current' or CC), but the user can alternatively program the load to have a constant resistance (CR) or constant power dissipation (CP). Electronic test equipment Hardware testing Electronic engineering" https://en.wikipedia.org/wiki/Wavelet%20packet%20decomposition,"Originally known as optimal subband tree structuring (SB-TS), also called wavelet packet decomposition (WPD) (sometimes known as just wavelet packets or subband tree), is a wavelet transform where the discrete-time (sampled) signal is passed through more filters than the discrete wavelet transform (DWT). 
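A brief sketch of the wavelet packet decomposition described in the entry above, using the PyWavelets library: unlike the plain DWT, both approximation and detail coefficients are decomposed further, producing a full binary tree of coefficient sets. The test signal, wavelet, and depth are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets

# Illustrative test signal (not from the article).
rng = np.random.default_rng(0)
x = rng.normal(size=256)

# Wavelet packet decomposition: approximation and detail branches are both
# decomposed at every level, giving a full binary tree of nodes.
wp = pywt.WaveletPacket(data=x, wavelet='db2', mode='symmetric', maxlevel=3)

leaves = wp.get_level(3, order='natural')
print(len(leaves))                     # 2**3 = 8 coefficient sets at level 3
print([node.path for node in leaves])  # e.g. 'aaa', 'aad', 'ada', ...
```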
Introduction In the DWT, each level is calculated by passing only the previous wavelet approximation coefficients (cAj) through discrete-time low- and high-pass quadrature mirror filters. However, in the WPD, both the detail (cDj (in the 1-D case), cHj, cVj, cDj (in the 2-D case)) and approximation coefficients are decomposed to create the full binary tree. For n levels of decomposition the WPD produces 2n different sets of coefficients (or nodes) as opposed to sets for the DWT. However, due to the downsampling process the overall number of coefficients is still the same and there is no redundancy. From the point of view of compression, the standard wavelet transform may not produce the best result, since it is limited to wavelet bases that increase by a power of two towards the low frequencies. It could be that another combination of bases produce a more desirable representation for a particular signal. There are several algorithms for subband tree structuring that find a set of optimal bases that provide the most desirable representation of the data relative to a particular cost function (entropy, energy compaction, etc.). There were relevant studies in signal processing and communications fields to address the selection of subband trees (orthogonal basis) of various kinds, e.g. regular, dyadic, irregular, with respect to performance metrics of interest including energy compaction (entropy), subband correlations and others. Discrete wavelet transform theory (continuous in the time variable) offers an approximation to transform discrete (sampled) signals. In contrast, the discrete-time subband transform theory enables a perfect representation of already sa" https://en.wikipedia.org/wiki/Taxonomic%20boundary%20paradox,"The term boundary paradox refers to the conflict between traditional, rank-based classification of life and evolutionary thinking. In the hierarchy of ranked categories it is implicitly assumed that the morphological gap is growing along with increasing ranks: two species from the same genus are more similar than other two species from different genera in the same family, these latter two species are more similar than any two species from different families of the same order, and so on. However, this requirement may only satisfy for the classification of contemporary organisms; difficulties arise if we wish to classify descendants together with their ancestors. Theoretically, such a classification necessarily involves segmentation of the spatio-temporal continuum of populations into groups with crisp boundaries. However, the problem is not only that many parent populations would separate at species level from their offspring. The truly paradoxical situation is that some between-species boundaries would necessarily coincide with between-genus boundaries, and a few between-genus boundaries with borders between families, and so on. This apparent ambiguity cannot be resolved in Linnaean systems; resolution is only possible if classification is cladistic (see below). 
Historical background Jean-Baptiste Lamarck, in Philosophie zoologique (1809), was the first who questioned the objectivity of rank-based classification of life, by saying: Half a century later, Charles Darwin explained that sharp separation of groups of organisms observed at present becomes less obvious if we go back into the past: In his book on orchids, Darwin also warned that the system of ranks would not work if we knew more details about past life: Finally, Richard Dawkins has argued recently that and with the following conclusion: Illustrative models The paradox may be best illustrated by model diagrams similar to Darwin’s single evolutionary tree in On the Origin of Species. In these tree grap" https://en.wikipedia.org/wiki/Computer%20module,"A computer module is a selection of independent electronic circuits packaged onto a circuit board to provide a basic function within a computer. An example might be an inverter or flip-flop, which would require two or more transistors and a small number of additional supporting devices. Modules would be inserted into a chassis and then wired together to produce a larger logic unit, like an adder. History Modules were the basic building block of most early computer designs, until they started being replaced by integrated circuits in the 1960s, which were essentially an entire module packaged onto a single computer chip. Modules with discrete components continued to be used in specialist roles into the 1970s, notably high-speed modular designs like the CDC 8600, but advances in chip design led to the disappearance of the discrete-component module in the 1970s. See also Modularity" https://en.wikipedia.org/wiki/Proofs%20from%20THE%20BOOK,"Proofs from THE BOOK is a book of mathematical proofs by Martin Aigner and Günter M. Ziegler. The book is dedicated to the mathematician Paul Erdős, who often referred to ""The Book"" in which God keeps the most elegant proof of each mathematical theorem. During a lecture in 1985, Erdős said, ""You don't have to believe in God, but you should believe in The Book."" Content Proofs from THE BOOK contains 32 sections (45 in the sixth edition), each devoted to one theorem but often containing multiple proofs and related results. It spans a broad range of mathematical fields: number theory, geometry, analysis, combinatorics and graph theory. Erdős himself made many suggestions for the book, but died before its publication. The book is illustrated by . It has gone through six editions in English, and has been translated into Persian, French, German, Hungarian, Italian, Japanese, Chinese, Polish, Portuguese, Korean, Turkish, Russian and Spanish. In November 2017 the American Mathematical Society announced the 2018 Leroy P. Steele Prize for Mathematical Exposition to be awarded to Aigner and Ziegler for this book. 
The proofs include: Six proofs of the infinitude of the primes, including Euclid's and Furstenberg's Proof of Bertrand's postulate Fermat's theorem on sums of two squares Two proofs of the Law of quadratic reciprocity Proof of Wedderburn's little theorem asserting that every finite division ring is a field Four proofs of the Basel problem Proof that e is irrational (also showing the irrationality of certain related numbers) Hilbert's third problem Sylvester–Gallai theorem and De Bruijn–Erdős theorem Cauchy's theorem Borsuk's conjecture Schröder–Bernstein theorem Wetzel's problem on families of analytic functions with few distinct values The fundamental theorem of algebra Monsky's theorem (4th edition) Van der Waerden's conjecture Littlewood–Offord lemma Buffon's needle problem Sperner's theorem, Erdős–Ko–Rado theorem and Hall's theorem Lindström" https://en.wikipedia.org/wiki/Readout%20integrated%20circuit,"A Readout integrated circuit (ROIC) is an integrated circuit (IC) specifically used for reading detectors of a particular type. They are compatible with different types of detectors such as infrared and ultraviolet. The primary purpose for ROICs is to accumulate the photocurrent from each pixel and then transfer the resultant signal onto output taps for readout. Conventional ROIC technology stores the signal charge at each pixel and then routes the signal onto output taps for readout. This requires storing large signal charge at each pixel site and maintaining signal-to-noise ratio (or dynamic range) as the signal is read out and digitized. A ROIC has high-speed analog outputs to transmit pixel data outside of the integrated circuit. If digital outputs are implemented, the IC is referred to as a Digital Readout Integrated Circuit (DROIC). A Digital readout integrated circuit (DROIC) is a class of ROIC that uses on-chip analog-to-digital conversion (ADC) to digitize the accumulated photocurrent in each pixel of the imaging array. DROICs are easier to integrate into a system compared to ROICs as the package size and complexity are reduced, they are less sensitive to noise and have higher bandwidth compared to analog outputs. A Digital pixel readout integrated circuit (DPROIC) is a ROIC that uses on-chip analog-to-digital conversion (ADC) within each pixel (or small group of pixels) to digitize the accumulated photocurrent within the imaging array. DPROICs have an even higher bandwidth than DROICs and can significantly increase the well capacity and dynamic range of the device." https://en.wikipedia.org/wiki/Rod%20calculus,"Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States to Ming dynasty before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in Song Dynasty and Yuan Dynasty, culminating in the invention of polynomial equations of up to four unknowns in the work of Zhu Shijie. Hardware The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12 cm- 15 cm in length, 2mm to 4 mm diameter, sometimes from animal bones, or ivory and jade (for well-heeled merchants). A counting board could be a table top, a wooden board with or without grid, on the floor or on sand. 
In 1971, Chinese archaeologists unearthed a bundle of well-preserved animal bone counting rods stored in a silk pouch from a tomb in Qian Yang county in Shanxi province, dated to the first half of the Han dynasty (206 BC – 8 AD). In 1975, a bundle of bamboo counting rods was unearthed. The use of counting rods for rod calculus flourished in the Warring States, although no archaeological artefacts were found earlier than the Western Han Dynasty (the first half of the Han dynasty); however, archaeologists did unearth software artefacts of rod calculus dating back to the Warring States. Since the rod calculus software must have gone along with rod calculus hardware, there is no doubt that rod calculus was already flourishing during the Warring States more than 2,200 years ago. Software The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials and mathematicians alike. Rod numerals Displaying numbers Rod numerals is the only numeric system that uses" https://en.wikipedia.org/wiki/Algebraic%20signal%20processing,"Algebraic signal processing (ASP) is an emerging area of theoretical signal processing (SP). In the algebraic theory of signal processing, a set of filters is treated as an (abstract) algebra, a set of signals is treated as a module or vector space, and convolution is treated as an algebra representation. The advantage of algebraic signal processing is its generality and portability. History In the original formulation of algebraic signal processing by Puschel and Moura, the signals are collected in an -module for some algebra of filters, and filtering is given by the action of on the -module. Definitions Let be a field, for instance the complex numbers, and be a -algebra (i.e. a vector space over with a binary operation that is linear in both arguments) treated as a set of filters. Suppose is a vector space representing a set of signals. A representation of consists of an algebra homomorphism where is the algebra of linear transformations with composition (equivalent, in the finite-dimensional case, to matrix multiplication). For convenience, we write for the endomorphism . To be an algebra homomorphism, must not only be a linear transformation, but must also satisfy the homomorphism property. Given a signal , convolution of the signal by a filter yields a new signal . Some additional terminology is needed from the representation theory of algebras. A subset is said to generate the algebra if every element of can be represented as polynomials in the elements of . The image of a generator is called a shift operator. In practically all examples, convolutions are formed as polynomials in generated by shift operators. However, this is not necessarily the case for a representation of an arbitrary algebra. Examples Discrete Signal Processing In discrete signal processing (DSP), the signal space is the set of complex-valued functions with bounded energy (i.e. square-integrable functions). This means the infinite series where is the modulus of a complex number. T" https://en.wikipedia.org/wiki/Super%20Bloch%20oscillations,"In physics, a Super Bloch oscillation describes a certain type of motion of a particle in a lattice potential under external periodic driving. The term super refers to the fact that the amplitude in position space of such an oscillation is several orders of magnitude larger than for 'normal' Bloch oscillations. 
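The nine-nine table mentioned in the rod calculus entry above is small enough to generate directly; the sketch below lists the 45 multiplication facts with the larger factor first, which is essentially all that this piece of "software" amounts to in modern terms. The descending ordering is an assumption made for the example.

```python
# The nine-nine table: the 45 products i x j with 1 <= j <= i <= 9,
# the multiplication facts memorized for rod calculus.
phrases = [(i, j, i * j) for i in range(9, 0, -1) for j in range(i, 0, -1)]

print(len(phrases))  # 45
for i, j, p in phrases[:5]:
    print(f"{i} x {j} = {p}")  # 9 x 9 = 81, 9 x 8 = 72, ...
```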
Bloch oscillations vs. Super Bloch oscillations Normal Bloch oscillations and Super Bloch oscillations are closely connected. In general, Bloch oscillations are a consequence of the periodic structure of the lattice potential and the existence of a maximum value of the Bloch wave vector . A constant force results in the acceleration of the particle until the edge of the first Brillouin zone is reached. The following sudden change in velocity from to can be interpreted as a Bragg scattering of the particle by the lattice potential. As a result, the velocity of the particle never exceeds but oscillates in a saw-tooth like manner with a corresponding periodic oscillation in position space. Surprisingly, despite of the constant acceleration the particle does not translate, but just moves over very few lattice sites. Super Bloch oscillations arise when an additional periodic driving force is added to , resulting in: The details of the motion depend on the ratio between the driving frequency and the Bloch frequency . A small detuning results in a beat between the Bloch cycle and the drive, with a drastic change of the particle motion. On top of the Bloch oscillation, the motion shows a much larger oscillation in position space that extends over hundreds of lattice sites. Those Super Bloch oscillations directly correspond to the motion of normal Bloch oscillations, just rescaled in space and time. A quantum mechanical description of the rescaling can be found here. An experimental realization is demonstrated in these. A theoretical analysis of the properties of Super-Bloch Oscillations, including dependence on the phase of the driving field is found here." https://en.wikipedia.org/wiki/Gravity-assisted%20microdissection,"Gravity-assisted microdissection (GAM) is one of the laser microdissection methods. The dissected material is allowed to fall by gravity into a cap and may thereafter be used for isolating proteins or genetic material. Two manufacturers in the world have developed their own device based on GAM method. Microdissection procedure In the case of ION LMD system, after preparing sample and staining, transfer tissue on window slide. The slide is mounted inversely. Motorized stage moves to pre-selected drawing line and laser beam cuts the cells of interests by laser ablation. Selected cells are collected in the tube cap which is under the slide via gravity. Application Dissected materials such as single cells or cell populations of interests are used for these further researches. Molecular pathology Cell biology Genomics Cancer research Pharmaceutical research Veterinary medicine Forensic analysis Reproductive medicine" https://en.wikipedia.org/wiki/List%20of%20countries%20by%20medal%20count%20at%20International%20Mathematical%20Olympiad,"The following is the complete list of countries by medal count at the International Mathematical Olympiad: Notes A. This team is now defunct." https://en.wikipedia.org/wiki/List%20of%20genetic%20algorithm%20applications,"This is a list of genetic algorithm (GA) applications. Natural Sciences, Mathematics and Computer Science Bayesian inference links to particle methods in Bayesian statistics and hidden Markov chain models Artificial creativity Chemical kinetics (gas and solid phases) Calculation of bound states and local-density approximations Code-breaking, using the GA to search large solution spaces of ciphers for the one correct decryption. Computer architecture: using GA to find out weak links in approximate computing such as lookahead. 
Configuration applications, particularly physics applications of optimal molecule configurations for particular systems like C60 (buckyballs) Construction of facial composites of suspects by eyewitnesses in forensic science. Data Center/Server Farm. Distributed computer network topologies Electronic circuit design, known as evolvable hardware Feature selection for Machine Learning Feynman-Kac models File allocation for a distributed system Filtering and signal processing Finding hardware bugs. Game theory equilibrium resolution Genetic Algorithm for Rule Set Production Scheduling applications, including job-shop scheduling and scheduling in printed circuit board assembly. The objective being to schedule jobs in a sequence-dependent or non-sequence-dependent setup environment in order to maximize the volume of production while minimizing penalties such as tardiness. Satellite communication scheduling for the NASA Deep Space Network was shown to benefit from genetic algorithms. Learning robot behavior using genetic algorithms Image processing: Dense pixel matching Learning fuzzy rule base using genetic algorithms Molecular structure optimization (chemistry) Optimisation of data compression systems, for example using wavelets. Power electronics design. Traveling salesman problem and its applications Earth Sciences Climatology: Estimation of heat flux between the atmosphere and sea ice Climatology: Modelling global te" https://en.wikipedia.org/wiki/Heterotrophic%20nutrition,"Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. They can't make their own food like Green plants. Heterotrophic organisms have to take in all the organic substances they need to survive. All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus. Description All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition. All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds which are capable of being absorbed (digestion). Then the soluble products of digestion for the organism are being broken down for the release of energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms have only four types of nutrition. Footnotes" https://en.wikipedia.org/wiki/Chemotroph,"A chemotroph is an organism that obtains energy by the oxidation of electron donors in their environments. These molecules can be organic (chemoorganotrophs) or inorganic (chemolithotrophs). The chemotroph designation is in contrast to phototrophs, which use photons. Chemotrophs can be either autotrophic or heterotrophic. Chemotrophs can be found in areas where electron donors are present in high concentration, for instance around hydrothermal vents. Chemoautotroph Chemoautotrophs, in addition to deriving energy from chemical reactions, synthesize all necessary organic compounds from carbon dioxide. 
Chemoautotrophs can use inorganic energy sources such as hydrogen sulfide, elemental sulfur, ferrous iron, molecular hydrogen, and ammonia, or organic sources, to produce energy. Most chemoautotrophs are extremophiles, bacteria or archaea that live in hostile environments (such as deep sea vents) and are the primary producers in such ecosystems. Chemoautotrophs generally fall into several groups: methanogens, sulfur oxidizers and reducers, nitrifiers, anammox bacteria, and thermoacidophiles. An example of one of these prokaryotes would be Sulfolobus. Chemolithotrophic growth can be dramatically fast, as in Hydrogenovibrio crunogenus, which has a doubling time of around one hour. The term ""chemosynthesis"", coined in 1897 by Wilhelm Pfeffer, originally was defined as the energy production by oxidation of inorganic substances in association with autotrophy—what would be named today as chemolithoautotrophy. Later, the term would also include chemoorganoautotrophy; that is, it can be seen as a synonym of chemoautotrophy. Chemoheterotroph Chemoheterotrophs (or chemotrophic heterotrophs) are unable to fix carbon to form their own organic compounds. Chemoheterotrophs can be chemolithoheterotrophs, utilizing inorganic electron sources such as sulfur, or, much more commonly, chemoorganoheterotrophs, utilizing organic electron sources such as carbohydrates, lipids, and proteins." https://en.wikipedia.org/wiki/Network%20information%20system,"A network information system (NIS) is an information system for managing networks, such as an electricity network, water supply network, gas supply network, telecommunications network, or street light network. NIS may manage all data relevant to the network, e.g. all components and their attributes, the connectivity between them, and other information relating to the operation, design and construction of such networks. NIS for electricity may manage any, some or all voltage levels: extra high, high, medium and low voltage. It may support only the distribution network or also the transmission network. Telecom NIS typically consists of the physical network inventory and the logical network inventory. The physical network inventory is used to manage outside plant components, such as cables, splices, ducts, trenches and nodes, and inside plant components such as active and passive devices. The most differentiating factor of telecom NIS from traditional GIS is the capability of recording thread-level connectivity. The logical network inventory is used to manage the logical connections and the circuits utilizing those logical connections. Traditionally, the logical network inventory has been a separate product, but in most modern systems the functionality is built into the GIS, serving both the physical network and the logical network. A water network information system typically manages the water network components, such as ducts, branches, valves, hydrants, reservoirs and pumping stations. Some systems also include the water consumers, as well as water meters and their readings, in the NIS. Sewage and stormwater components are typically included in the NIS. By adding sensors, as well as analysis and calculations based on the measured values, the concept of a smart water system is included in the NIS. By adding actuators into the network, the concept of SCADA can be included in the NIS. NIS may be built on top of a GIS (Geographical information system). 
Private Cloud based NIS " https://en.wikipedia.org/wiki/List%20of%20wavelet-related%20transforms,"A list of wavelet-related transforms: Continuous wavelet transform (CWT) Discrete wavelet transform (DWT) Multiresolution analysis (MRA) Lifting scheme Binomial QMF (BQMF) Fast wavelet transform (FWT) Complex wavelet transform Non- or undecimated wavelet transform, the downsampling is omitted Newland transform, an orthonormal basis of wavelets is formed from appropriately constructed top-hat filters in frequency space Wavelet packet decomposition (WPD), detail coefficients are decomposed and a variable tree can be formed Stationary wavelet transform (SWT), no downsampling and the filters at each level are different e-decimated discrete wavelet transform, depends on whether the even or odd coefficients are selected in the downsampling Second generation wavelet transform (SGWT), filters and wavelets are not created in the frequency domain Dual-tree complex wavelet transform (DTCWT), two trees are used for decomposition to produce the real and complex coefficients WITS: Where Is The Starlet, a collection of around a hundred wavelet names in -let and associated multiscale, directional, geometric representations, from activelets to x-lets through bandelets, chirplets, contourlets, curvelets, noiselets, wedgelets ..." https://en.wikipedia.org/wiki/System%20on%20module,"A system on a module (SoM) is a board-level circuit that integrates a system function in a single module. It may integrate digital and analog functions on a single board. A typical application is in the area of embedded systems. Unlike a single-board computer, a SoM serves a special function like a system on a chip (SoC). The devices integrated in the SoM typically require a high level of interconnection for reasons such as speed, timing, bus width, etc. There are benefits in building a SoM, as for a SoC; one notable result is to reduce the cost of the base board or the main PCB. Two other major advantages of SoMs are design reuse and that they can be integrated into many embedded computer applications. History The acronym SoM has its roots in blade-based modules. In the mid 1980s, when VMEbus blades used M-Modules, these were commonly referred to as a system on a module (SoM). These SoMs performed specific functions such as compute functions and data acquisition functions. SoMs were used extensively by Sun Microsystems, Motorola, Xerox, DEC, and IBM in their blade computers. Design A typical SoM consists of: at least one microcontroller, microprocessor or digital signal processor (DSP) core multiprocessor systems-on-chip (MPSoCs) have more than one processor core memory blocks including a selection of ROM, RAM, EEPROM and/or flash memory timing sources industry standard communication interfaces such as USB, FireWire, Ethernet, USART, SPI, I²C peripherals including counter-timers, real-time timers and power-on reset generators analog interfaces including analog-to-digital converters and digital-to-analog converters voltage regulators and power management circuits" https://en.wikipedia.org/wiki/Simple%20programmable%20logic%20device,"A simple programmable logic device (SPLD) is a programmable logic device with complexity below that of a complex programmable logic device (CPLD). The term commonly refers to devices such as ROMs, PALs, PLAs and GALs. Basic description Simple programmable logic devices (SPLDs) are the simplest, smallest and least-expensive forms of programmable logic devices. 
SPLDs can be used in boards to replace standard logic components (AND, OR, and NOT gates), such as 7400-series TTL. They typically comprise 4 to 22 fully connected macrocells. These macrocells typically consist of some combinatorial logic (such as AND OR gates) and a flip-flop. In other words, a small Boolean logic equation can be built within each macrocell. This equation will combine the state of some number of binary inputs into a binary output and, if necessary, store that output in the flip-flop until the next clock edge. Of course, the particulars of the available logic gates and flip-flops are specific to each manufacturer and product family. But the general idea is always the same. Most SPLDs use either fuses or non-volatile memory cells (EPROM, EEPROM, Flash, and others) to define the functionality. These devices are also known as: Programmable array logic (PAL) Generic array logic (GAL) Programmable logic arrays (PLA) Field-programmable logic arrays (FPLA) Programmable logic devices (PLD) Advantages PLDs are often used for address decoding, where they have several clear advantages over the 7400-series TTL parts that they replaced: One chip requires less board area, power, and wiring than several do. The design inside the chip is flexible, so a change in the logic does not require any rewiring of the board. Rather, simply replacing one PLD with another part that has been programmed with the new design can alter the decoding logic." https://en.wikipedia.org/wiki/Oophagy,"Oophagy ( ) sometimes ovophagy, literally ""egg eating"", is the practice of embryos feeding on eggs produced by the ovary while still inside the mother's uterus. The word oophagy is formed from the classical Greek (, ""egg"") and classical Greek (, ""to eat""). In contrast, adelphophagy is the cannibalism of a multi-celled embryo. Oophagy is thought to occur in all sharks in the order Lamniformes and has been recorded in the bigeye thresher (Alopias superciliosus), the pelagic thresher (A. pelagicus), the shortfin mako (Isurus oxyrinchus) and the porbeagle (Lamna nasus) among others. It also occurs in the tawny nurse shark (Nebrius ferrugineus), and in the family Pseudotriakidae. This practice may lead to larger embryos or prepare the embryo for a predatory lifestyle. There are variations in the extent of oophagy among the different shark species. The grey nurse shark (Carcharias taurus) practices intrauterine cannibalism, the first developed embryo consuming both additional eggs and any other developing embryos. Slender smooth-hounds (Gollum attenuatus), form egg capsules which contain 30-80 ova, within which only one ovum develops; the remaining ova are ingested and their yolks stored in its external yolk sac. The embryo then proceeds to develop normally, without ingesting further eggs. Oophagy is also used as a synonym of egg predation practised by some snakes and other animals. Similarly, the term can be used to describe the destruction of non-queen eggs in nests of certain social wasps, bees, and ants. This is seen in the wasp species Polistes biglumis and Polistes humilis. Oophagy has been observed in Leptothorax acervorum and Parachartergus fraternus, where oophagy is practiced to increase energy circulation and provide more dietary protein. 
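To make the SPLD macrocell description above concrete, here is a minimal Python sketch of one hypothetical macrocell: a small sum-of-products (AND-OR) equation feeding a D flip-flop that holds the result until the next clock edge. The product terms below are invented for illustration and are not tied to any vendor's device:

```python
class Macrocell:
    """One hypothetical SPLD macrocell: AND-OR logic plus a D flip-flop."""

    def __init__(self, product_terms):
        # Each product term is a list of (input_index, expected_value) pairs
        # that are ANDed together; the terms are then ORed.
        self.product_terms = product_terms
        self.q = 0  # flip-flop state

    def combinational(self, inputs):
        return int(any(all(inputs[i] == v for i, v in term)
                       for term in self.product_terms))

    def clock(self, inputs):
        # On a clock edge the flip-flop captures the combinational output.
        self.q = self.combinational(inputs)
        return self.q

# Example: Q := (A AND NOT B) OR (B AND C), with inputs [A, B, C].
cell = Macrocell([[(0, 1), (1, 0)], [(1, 1), (2, 1)]])
print(cell.clock([1, 0, 0]))  # 1 (first term true)
print(cell.clock([0, 1, 1]))  # 1 (second term true)
print(cell.clock([0, 0, 1]))  # 0
```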
Polistes fuscatus use oophagy as a method to establish a dominance hierarchy; dominant females will eat the eggs of subordinate females such that they no longer produce eggs, possibly due to the unnecessary expenditure " https://en.wikipedia.org/wiki/Chassis%20management%20controller,"A chassis management controller (CMC) is an embedded system management hardware and software solution to manage multiple servers, networking, and storage. A CMC can provide a secure browser-based interface that enables an IT system administrator to take inventory, perform configuration and monitoring tasks, remotely power on/off blade servers, and enable alerts for events on servers or components in the blade chassis. It has its own microprocessor and memory and is powered by the modular chassis it is plugged into. The inventory of hardware components is built-in and a CMC has a dedicated internal network. The blade enclosure, which can hold multiple blade servers, provides power, cooling, various interconnects, and additional systems management capabilities. Unlike a tower or rack server, a blade server cannot run by itself; it requires a compatible blade enclosure." https://en.wikipedia.org/wiki/Proofs%20That%20Really%20Count,"Proofs That Really Count: the Art of Combinatorial Proof is an undergraduate-level mathematics book on combinatorial proofs of mathematical identities. That is, it concerns equations between two integer-valued formulas, shown to be equal either by showing that both sides of the equation count the same type of mathematical objects, or by finding a one-to-one correspondence between the different types of object that they count. It was written by Arthur T. Benjamin and Jennifer Quinn, and published in 2003 by the Mathematical Association of America as volume 27 of their Dolciani Mathematical Expositions series. It won the Beckenbach Book Prize of the Mathematical Association of America. Topics The book provides combinatorial proofs of thirteen theorems in combinatorics and 246 numbered identities (collated in an appendix). Several additional ""uncounted identities"" are also included. Many proofs are based on a visual-reasoning method that the authors call ""tiling"", and in a foreword, the authors describe their work as providing a follow-up for counting problems of the Proofs Without Words books by Roger B. Nelsen. The first three chapters of the book start with integer sequences defined by linear recurrence relations, the prototypical example of which is the sequence of Fibonacci numbers. These numbers can be given a combinatorial interpretation as the number of ways of tiling a strip of squares with tiles of two types, single squares and dominos; this interpretation can be used to prove many of the fundamental identities involving the Fibonacci numbers, and generalized to similar relations about other sequences defined similarly, such as the Lucas numbers, using ""circular tilings and colored tilings"". For instance, for the Fibonacci numbers, considering whether a tiling does or does not connect positions and of a strip of length immediately leads to the identity Chapters four through seven of the book concern identities involving continued fractions, binomial coef" https://en.wikipedia.org/wiki/Frequency%20response,"In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. 
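The square-and-domino tiling interpretation described in the Proofs That Really Count entry above is easy to check by brute force; the sketch below counts tilings of a strip of length n directly and compares them with Fibonacci numbers (a standard correspondence, with the indexing convention noted in the comments):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def tilings(n):
    """Number of ways to tile a 1 x n strip with 1 x 1 squares and 1 x 2 dominoes."""
    if n < 0:
        return 0
    if n in (0, 1):
        return 1
    # The last tile is either a square (leaving n-1) or a domino (leaving n-2).
    return tilings(n - 1) + tilings(n - 2)

def fibonacci(k):
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a  # F(0)=0, F(1)=1, F(2)=1, ...

# The tiling count of a length-n strip equals the Fibonacci number F(n+1).
for n in range(10):
    assert tilings(n) == fibonacci(n + 1)
print([tilings(n) for n in range(10)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```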
The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where it simplifies mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components (such as microphones, amplifiers and loudspeakers) so that the overall response is as flat (uniform) as possible across the system's bandwidth. In control systems, such as a vehicle's cruise control, it may be used to assess system stability, often through the use of Bode plots. Systems with a specific frequency response can be designed using analog and digital filters. The frequency response characterizes systems in the frequency domain, just as the impulse response characterizes systems in the time domain. In linear systems (or as an approximation to a real system neglecting second order non-linear properties), either response completely describes the system, and the two are thus in one-to-one correspondence: the frequency response is the Fourier transform of the impulse response. The frequency response allows simpler analysis of cascaded systems such as multistage amplifiers, as the response of the overall system can be found through multiplication of the individual stages' frequency responses (as opposed to convolution of the impulse response in the time domain). The frequency response is closely related to the transfer function in linear systems, which is the Laplace transform of the impulse response. They are equivalent when the real part of the transfer function's complex variable is zero. Measurement and plotting Measuring the frequency response typically involves exciting the system with an input signal and measuring the resulting output signal, calculating the frequency spectra" https://en.wikipedia.org/wiki/Human%20nutrition,"Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food insecurity, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development. Overview The human body contains chemical compounds such as water, carbohydrates, amino acids (found in proteins), fatty acids (found in lipids), and nucleic acids (DNA and RNA). These compounds are composed of elements such as carbon, hydrogen, oxygen, nitrogen, and phosphorus. Any study done to determine nutritional status must take into account the state of the body before and after experiments, as well as the chemical composition of the whole diet and of all the materials excreted and eliminated from the body (including urine and feces). Nutrients The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients but do not provide energy. The micronutrients are minerals and vitamins. The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built), and energy. 
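Returning to the frequency response entry above: two of its claims, that the frequency response is the Fourier transform of the impulse response and that cascaded stages multiply in the frequency domain while convolving in the time domain, can be checked numerically with a short NumPy sketch (the two example filters below are arbitrary choices for illustration):

```python
import numpy as np

# Impulse responses of two simple FIR stages (arbitrary example coefficients).
h1 = np.array([0.5, 0.5])            # 2-tap moving average
h2 = np.array([1.0, -0.9])           # first difference with leakage

n_fft = 64
H1 = np.fft.rfft(h1, n_fft)          # frequency response = FFT of impulse response
H2 = np.fft.rfft(h2, n_fft)

# Cascade: convolution in time corresponds to multiplication in frequency.
h_cascade = np.convolve(h1, h2)
H_cascade = np.fft.rfft(h_cascade, n_fft)

assert np.allclose(H_cascade, H1 * H2)
print(np.abs(H_cascade[:5]))         # magnitude response at the first few bins
```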
Some of the structural material can also be used to generate energy internally, and in either case it is measured in Joules or kilocalories (often called ""Calories"" and written with a capital 'C' to distinguish them from little 'c' calories). Carbohydrates and proteins provide 17 kJ approximately (4 kcal) of energy per gram, while fats prov" https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20quantum%20theory,"This is a list of mathematical topics in quantum theory, by Wikipedia page. See also list of functional analysis topics, list of Lie group topics, list of quantum-mechanical systems with analytical solutions. Mathematical formulation of quantum mechanics bra–ket notation canonical commutation relation complete set of commuting observables Heisenberg picture Hilbert space Interaction picture Measurement in quantum mechanics quantum field theory quantum logic quantum operation Schrödinger picture semiclassical statistical ensemble wavefunction wave–particle duality Wightman axioms WKB approximation Schrödinger equation quantum mechanics, matrix mechanics, Hamiltonian (quantum mechanics) particle in a box particle in a ring particle in a spherically symmetric potential quantum harmonic oscillator hydrogen atom ring wave guide particle in a one-dimensional lattice (periodic potential) Fock symmetry in theory of hydrogen Symmetry identical particles angular momentum angular momentum operator rotational invariance rotational symmetry rotation operator translational symmetry Lorentz symmetry Parity transformation Noether's theorem Noether charge Spin (physics) isospin Aman matrices scale invariance spontaneous symmetry breaking supersymmetry breaking Quantum states quantum number Pauli exclusion principle quantum indeterminacy uncertainty principle wavefunction collapse zero-point energy bound state coherent state squeezed coherent state density state Fock state, Fock space vacuum state quasinormal mode no-cloning theorem quantum entanglement Dirac equation spinor, spinor group, spinor bundle Dirac sea Spin foam Poincaré group gamma matrices Dirac adjoint Wigner's classification anyon Interpretations of quantum mechanics Copenhagen interpretation locality principle Bell's theorem Bell test loopholes CHSH inequality hidden variable theory path integral formulation, quantum action Bohm interp" https://en.wikipedia.org/wiki/Road%20coloring%20theorem,"In graph theory the road coloring theorem, known previously as the road coloring conjecture, deals with synchronized instructions. The issue involves whether by using such instructions, one can reach or locate an object or destination from any other point within a network (which might be a representation of city streets or a maze). In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from. This theorem also has implications in symbolic dynamics. The theorem was first conjectured by Roy Adler and Benjamin Weiss. It was proved by Avraham Trahtman. Example and intuition The image to the right shows a directed graph on eight vertices in which each vertex has out-degree 2. (Each vertex in this case also has in-degree 2, but that is not necessary for a synchronizing coloring to exist.) The edges of this graph have been colored red and blue to create a synchronizing coloring. For example, consider the vertex marked in yellow. 
No matter where in the graph you start, if you traverse all nine edges in the walk ""blue-red-red—blue-red-red—blue-red-red"", you will end up at the yellow vertex. Similarly, if you traverse all nine edges in the walk ""blue-blue-red—blue-blue-red—blue-blue-red"", you will always end up at the vertex marked in green, no matter where you started. The road coloring theorem states that for a certain category of directed graphs, it is always possible to create such a coloring. Mathematical description Let G be a finite, strongly connected, directed graph where all the vertices have the same out-degree k. Let A be the alphabet containing the letters 1, ..., k. A synchronizing coloring (also known as a collapsible coloring) in G is a labeling of the edges in G with letters from A such that (1) each vertex has exactly one outgoing edge with a given label and (2) for every vertex v in the graph, there exists a word w over A such" https://en.wikipedia.org/wiki/Integration%20appliance,"An integration appliance is a computer system specifically designed to lower the cost of integrating computer systems. Most integration appliances send or receive electronic messages from other computers that are exchanging electronic documents. Most integration appliances support XML messaging standards such as SOAP and Web services; they are frequently referred to as XML appliances and perform functions that can be grouped together as XML-enabled networking. Vendors providing integration appliances DataPower XI50 and IBM MQ Appliance — IBM Intel SOA Products Division Premier, Inc." https://en.wikipedia.org/wiki/Shift%20register%20lookup%20table,"A shift register lookup table, also shift register LUT or SRL, refers to a component in digital circuitry. It is essentially a shift register of variable length. The length of the SRL is set by driving address pins high or low and can be changed dynamically, if necessary. The SRL component is used in FPGA devices. The SRL can be used as a programmable delay element. See also Lookup table Shift register" https://en.wikipedia.org/wiki/Fernando%20Zalamea,"Fernando Zalamea Traba (Bogota, 28 February 1959) is a Colombian mathematician, essayist, critic, philosopher and popularizer, known for his contributions to the philosophy of mathematics, being the creator of the synthetic philosophy of mathematics. He is the author of around twenty books and is one of the world's leading experts on the mathematical and philosophical work of Alexander Grothendieck, as well as the logical work of Charles S. Peirce. Currently, he is a full professor in the Department of Mathematics of the National University of Colombia, where he has established a mathematical school, primarily through his ongoing seminar on the epistemology, history and philosophy of mathematics, which he conducted for eleven years at the university. He is also known for his creative, critical, and constructive teaching of mathematics. Zalamea has supervised approximately 50 thesis projects at the undergraduate, master's and doctoral levels in various fields, including mathematics, philosophy, logic, category theory, semiology, medicine and culture, among others. Since 2018, he has been an honorary member of the Colombian Academy of Exact, Physical and Natural Sciences. In 2016, he was recognized as one of the 100 most outstanding contemporary interdisciplinary global minds by ""100 Global Minds, the most daring cross-disciplinary thinkers in the world,"" being the only Latin American included in this recognition." 
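To make the synchronizing-coloring idea from the road coloring entry above concrete, the sketch below uses a small stand-in automaton (the three-state Černý automaton, since the article's eight-vertex figure is not reproduced in the text) and searches for a word that drives every starting vertex to the same vertex:

```python
from collections import deque

# A small out-degree-2 digraph with a synchronizing 2-coloring (colours "a"/"b").
# next_state[v][letter] gives the endpoint of v's edge with that colour.
next_state = {
    0: {"a": 1, "b": 1},
    1: {"a": 1, "b": 2},
    2: {"a": 2, "b": 0},
}

def apply_word(states, word):
    """Follow the coloured edges spelled by `word` from every state in `states`."""
    for letter in word:
        states = {next_state[s][letter] for s in states}
    return states

def shortest_synchronizing_word(next_state):
    """BFS over subsets of states for a word that merges all states into one."""
    start = frozenset(next_state)
    queue, seen = deque([(start, "")]), {start}
    while queue:
        states, word = queue.popleft()
        if len(states) == 1:
            return word
        for letter in "ab":
            nxt = frozenset(next_state[s][letter] for s in states)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, word + letter))
    return None  # the colouring is not synchronizing

word = shortest_synchronizing_word(next_state)
print(word, apply_word(set(next_state), word))  # e.g. abba {1}
```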
https://en.wikipedia.org/wiki/List%20of%20PSPACE-complete%20problems,"Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive. Games and puzzles Generalized versions of: Amazons Atomix Checkers if a draw is forced after a polynomial number of non-jump moves Dyson Telescope Game Cross Purposes Geography Two-player game version of Instant Insanity Ko-free Go Ladder capturing in Go Gomoku Hex Konane Lemmings Node Kayles Poset Game Reversi River Crossing Rush Hour Finding optimal play in Mahjong solitaire Sokoban Super Mario Bros. Black Pebble game Black-White Pebble game Acyclic pebble game One-player pebble game Token on acyclic directed graph games: Logic Quantified boolean formulas First-order logic of equality Provability in intuitionistic propositional logic Satisfaction in modal logic S4 First-order theory of the natural numbers under the successor operation First-order theory of the natural numbers under the standard order First-order theory of the integers under the standard order First-order theory of well-ordered sets First-order theory of binary strings under lexicographic ordering First-order theory of a finite Boolean algebra Stochastic satisfiability Linear temporal logic satisfiability and model checking Lambda calculus Type inhabitation problem for simply typed lambda calculus Automata and language theory Circuit theory Integer circuit evaluation Automata theory Word problem for linear bounded automata Word problem for quasi-realtime automata Emptiness problem for a nondeterministic two-way finite state automaton Equivalence problem for nondeterministic finite automata Word problem and emptiness problem for non-erasing stack automata Emptiness of intersection of an unbounded number of deterministic finite automata A generalized version of Langton's Ant Minimizing nondeterministic finite automata Formal languages Word problem for context-sensitive language Intersect" https://en.wikipedia.org/wiki/List%20of%20states%20of%20matter,"States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions. Low-energy states of matter Classical states Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other. Amorphous solid: A solid in which there is no far-range order of the positions of the atoms. Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order. Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom. Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern. Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure. Liquid crystal: Properties intermediate between liquids and crystals. 
Generally, able to flow like a liquid but exhibiting long-range order. Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container. Modern states Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo" https://en.wikipedia.org/wiki/List%20of%20logic%20symbols,"In logic, a set of symbols is commonly used to express logical representation. The following table lists many common symbols, together with their name, how they should be read out loud, and the related field of mathematics. Additionally, the subsequent columns contains an informal explanation, a short example, the Unicode location, the name for use in HTML documents, and the LaTeX symbol. Basic logic symbols Advanced and rarely used logical symbols These symbols are sorted by their Unicode value: Usage in various countries Poland in Poland, the universal quantifier is sometimes written ∧, and the existential quantifier as ∨. The same applies for Germany. Japan The ⇒ symbol is often used in text to mean ""result"" or ""conclusion"", as in ""We examined whether to sell the product ⇒ We will not sell it"". Also, the → symbol is often used to denote ""changed to"", as in the sentence ""The interest rate changed. March 20% → April 21%"". See also Józef Maria Bocheński List of notation used in Principia Mathematica List of mathematical symbols Logic alphabet, a suggested set of logical symbols Logical connective Mathematical operators and symbols in Unicode Non-logical symbol Polish notation Truth function Truth table Wikipedia:WikiProject Logic/Standards for notation" https://en.wikipedia.org/wiki/Biomedicine,"Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners. Biomedicine also can relate to many other categories in health and biological related fields. It has been the dominant system of medicine in the Western world for more than a century. It includes many biomedical disciplines and areas of specialty that typically contain the ""bio-"" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine. Overview Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy. 
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy. Biomedicine involves the study of (patho-) physiological processes with methods from biology and " https://en.wikipedia.org/wiki/Sombrero%20function,"A sombrero function (sometimes called besinc function or jinc function) is the 2-dimensional polar coordinate analog of the sinc function, and is so-called because it is shaped like a sombrero hat. This function is frequently used in image processing. It can be defined through the Bessel function of the first kind () where . The normalization factor makes . Sometimes the factor is omitted, giving the following alternative definition: The factor of 2 is also often omitted, giving yet another definition and causing the function maximum to be 0.5: The Fourier transform of the 2D circle function () is a sombrero function. Thus a sombrero function also appears in the intensity profile of far-field diffraction through a circular aperture, known as an Airy disk." https://en.wikipedia.org/wiki/Narada%20multicast%20protocol,"The Narada multicast protocol is a set of specifications which can be used to implement overlay multicast functionality on computer networks. It constructs an overlay tree from a redundantly meshed graph of nodes; source-specific shortest path trees are then constructed from reverse paths. The group management is equally distributed on all nodes because each overlay node keeps track of all its group members through periodic heartbeats of all members. The discovery and tree building are similar to DVMRP. External links ""An Evaluation of Three Application-Layer Multicast Protocols"" ""Overlay Multicast & Content distribution""" https://en.wikipedia.org/wiki/Time%20Cube,"Time Cube was a pseudoscientific personal web page founded in 1997 by the self-proclaimed ""wisest man on earth,"" Otis Eugene ""Gene"" Ray. It was a self-published outlet for Ray's ""theory of everything"", also called ""Time Cube,"" which polemically claims that all modern sciences are participating in a worldwide conspiracy to teach lies, by omitting his theory's alleged truth that each day actually consists of four days occurring simultaneously. Alongside these statements, Ray described himself as a ""godlike being with superior intelligence who has absolute evidence and proof"" for his views. Ray asserted repeatedly and variously that the academic world had not taken Time Cube seriously. Ray died on March 18, 2015, at the age of 87. His website domain names expired in August 2015, and Time Cube was last archived by the Wayback Machine on January 12, 2016 (January 10–14). Content Style The Time Cube website contained no home page. It consisted of a number of web pages that contained a single vertical centre-aligned column of body text in various sizes and colors, resulting in extremely long main pages. Finding any particular passage was almost impossible without manually searching. A large amount of self-invented jargon is used throughout: some words and phrases are used frequently but never defined; these are likely terms referring to the weakness of widely propagated ideas that Ray detests throughout the text, and they are usually capitalized even when used as adjectives. 
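The defining formulas in the sombrero function entry above were stripped during extraction; under one commonly used normalization, somb(ρ) = 2·J1(πρ)/(πρ) with somb(0) = 1, a short SciPy sketch looks like the following (the choice of normalization is an assumption, and the entry itself lists variants without the factor of 2 or the factors of π):

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def sombrero(rho):
    """somb(rho) = 2*J1(pi*rho)/(pi*rho), taking the limit value 1 at rho = 0.

    This is one common normalization; as the entry notes, variants omit the
    factor of 2 and/or the factors of pi.
    """
    x = np.pi * np.atleast_1d(np.asarray(rho, dtype=float))
    out = np.ones_like(x)          # limit value at rho = 0
    nonzero = x != 0
    out[nonzero] = 2.0 * j1(x[nonzero]) / x[nonzero]
    return out

print(sombrero(0.0))               # [1.]
print(sombrero([0.5, 1.0, 2.0]))   # oscillating, decaying values (Airy-like profile)
```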
In one paragraph, he claimed that his own wisdom ""so antiquates known knowledge"" that a psychiatrist examining his behavior diagnosed him with schizophrenia. Various commentators have asserted that it is futile to analyze the text rationally, interpret meaningful proofs from the text, or test any claims. Time Cube concept Ray's personal model of reality, called ""Time Cube"", states that all of modern physics and education is wrong, and argues that, among many other things, Greenwich Time is a global " https://en.wikipedia.org/wiki/Artificial%20brain,"An artificial brain (or artificial mind) is software and hardware with cognitive abilities similar to those of the animal or human brain. Research investigating ""artificial brains"" and brain emulation plays three important roles in science: An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience. A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, at least in theory, to create a machine that has all the capabilities of a human being. A long-term project to create machines exhibiting behavior comparable to those of animals with complex central nervous system such as mammals and most particularly humans. The ultimate goal of creating a machine exhibiting human-like behavior or intelligence is sometimes called strong AI. An example of the first objective is the project reported by Aston University in Birmingham, England where researchers are using biological cells to create ""neurospheres"" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's, motor neurone and Parkinson's disease. The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus's critique of AI or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper ""Computing Machinery and Intelligence"". The third objective is generally called artificial general intelligence by researchers. However, Ray Kurzweil prefers the term ""strong AI"". In his book The Singularity is Near, he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on groun" https://en.wikipedia.org/wiki/Approximate%20max-flow%20min-cut%20theorem,"Approximate max-flow min-cut theorems are mathematical propositions in network flow theory. They deal with the relationship between maximum flow rate (""max-flow"") and minimum cut (""min-cut"") in a multi-commodity flow problem. The theorems have enabled the development of approximation algorithms for use in graph partition and related problems. Multicommodity flow problem A ""commodity"" in a network flow problem is a pair of source and sink nodes. In a multi-commodity flow problem, there are commodities, each with its own source , sink , and demand . The objective is to simultaneously route units of commodity from to for each , such that the total amount of all commodities passing through any edge is no greater than its capacity. (In the case of undirected edges, the sum of the flows in both directions cannot exceed the capacity of the edge). 
Specifically, a 1-commodity (or single commodity) flow problem is also known as a maximum flow problem. According to the Ford–Fulkerson algorithm, the max-flow and min-cut are always equal in a 1-commodity flow problem. Max-flow and min-cut In a multicommodity flow problem, max-flow is the maximum value of , where is the common fraction of each commodity that is routed, such that units of commodity can be simultaneously routed for each without violating any capacity constraints. The min-cut is the minimum, over all cuts, of the ratio of the capacity of the cut to the demand of the cut. Max-flow is always upper bounded by the min-cut for a multicommodity flow problem. Uniform multicommodity flow problem In a uniform multicommodity flow problem, there is a commodity for every pair of nodes and the demand for every commodity is the same. (Without loss of generality, the demand for every commodity is set to one.) The underlying network and capacities are arbitrary. Product multicommodity flow problem In a product multicommodity flow problem, there is a nonnegative weight for each node in graph . The demand for the commodity betwee" https://en.wikipedia.org/wiki/Quality%20of%20results,"Quality of Results (QoR) is a term used in evaluating technological processes. It is generally represented as a vector of components, with the special case of a uni-dimensional value as a synthetic measure. History The term was coined by the Electronic Design Automation (EDA) industry in the late 1980s. QoR was meant to be an indicator of the performance of integrated circuits (chips), and initially measured the area and speed of a chip. As the industry evolved, new chip parameters were considered for coverage by the QoR, illustrating new areas of focus for chip designers (for example power dissipation, power efficiency, routing overhead, etc.). Because of the broad scope of quality assessment, QoR eventually evolved into a generic vector representation comprising a number of different values, where the meaning of each vector value was explicitly specified in the QoR analysis document. Currently the term is gaining popularity in other sectors of technology, with each sector using its own appropriate components. Current trends in EDA Originally, the QoR was used to specify absolute values such as chip area, power dissipation, speed, etc. (for example, a QoR could be specified as a {100 MHz, 1W, 1 mm²} vector), and could only be used for comparing the different achievements of a single design specification. The current trend among designers is to include normalized values in the QoR vector, such that they will remain meaningful for a longer period of time (as technologies change), and/or across broad classes of design. For example, one often uses – as a QoR component – a number representing the ratio between the area required by a combinational logic block and the area required by a simple logic gate, this number often being referred to as ""relative density of combinational logic"". 
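For the single-commodity case mentioned in the approximate max-flow min-cut entry above, where max-flow and min-cut coincide, the equality is easy to observe with NetworkX on a small made-up graph (the graph and capacities below are illustrative only):

```python
import networkx as nx

# A small directed graph with edge capacities (values chosen for illustration).
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3)
G.add_edge("s", "b", capacity=2)
G.add_edge("a", "b", capacity=1)
G.add_edge("a", "t", capacity=2)
G.add_edge("b", "t", capacity=3)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t")
cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")

print(flow_value, cut_value)   # equal for a single commodity: 5 5
assert flow_value == cut_value
```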
In this case, a relative density of five will generally be accepted as a good quality of result – relative density of combinational logic component – while a relative density of fifty will" https://en.wikipedia.org/wiki/Outline%20of%20category%20theory,"The following outline is provided as an overview of and guide to category theory, the area of study in mathematics that examines in an abstract way the properties of particular mathematical concepts, by formalising them as collections of objects and arrows (also called morphisms, although this term also has a specific, non category-theoretical sense), where these collections satisfy certain basic conditions. Many significant areas of mathematics can be formalised as categories, and the use of category theory allows many intricate and subtle mathematical results in these fields to be stated, and proved, in a much simpler way than without the use of categories. Essence of category theory Category Functor Natural transformation Branches of category theory Homological algebra Diagram chasing Topos theory Enriched category theory Higher category theory Categorical logic Specific categories Category of sets Concrete category Category of vector spaces Category of graded vector spaces Category of chain complexes Category of finite dimensional Hilbert spaces Category of sets and relations Category of topological spaces Category of metric spaces Category of preordered sets Category of groups Category of abelian groups Category of rings Category of magmas Category of medial magmas Objects Initial object Terminal object Zero object Subobject Group object Magma object Natural number object Exponential object Morphisms Epimorphism Monomorphism Zero morphism Normal morphism Dual (category theory) Groupoid Image (category theory) Coimage Commutative diagram Cartesian morphism Slice category Functors Isomorphism of categories Natural transformation Equivalence of categories Subcategory Faithful functor Full functor Forgetful functor Yoneda lemma Representable functor Functor category Adjoint functors Galois connection Pontryagin duality Affine scheme Monad (category theory) Comonad Combinatorial species E" https://en.wikipedia.org/wiki/Morse%20Micro,"Morse Micro is a Sydney-based developer of Wi-Fi HaLow microprocessors: chips that enable high data rates, with long range and low power consumption. Amongst all Wi-Fi HaLow systems on a chip, Morse Micro processors are reported to be the smallest, fastest and longest-range, with the lowest power use. The main application of the technology is machine-to-machine communications. With the Internet of things expected to extend to 30 billion devices by 2025, this represents a steeply growing number of users of the technology. The founders plan to be part of ""expanding Wi-Fi so it can go into everything, every smoke alarm, every camera."" The firm has its global HQ in Sydney, which is also its main base for R&D, with additional centres in India, China and the United States. As of 2022, Morse Micro was producing more semiconductors than any other Australian-based tech company. Technology After eight years' development, the company's Wi-Fi HaLow processor was reported to deliver 10 times the range of conventional Wi-Fi technology, and to be able to function for several years before needing a battery change. Data rates and range The microprocessor allows for a range of data rates, depending on the modulation and coding scheme (MCS) used. 
This can be as low as 150 kilobits per second using MCS10 with BPSK modulation, up to a top rate of 4 megabits per second using MCS9 with 256 quadrature amplitude modulation. The chip uses low-bandwidth wireless network protocols, operating in the sub-1 GHz spectrum, while providing a communications range of 1,000 metres. In one field test, researchers found the technology could sustain high speed data transmission between a device placed by the north end of Sydney Harbour Bridge and a device across the harbour at Sydney Opera House. The company claims their chip provides 10 times the range, 100 times the area and 1000 times the volume of data offered by traditional wi-fi. Connectivity and energy To enable networked communications between machines, a sing" https://en.wikipedia.org/wiki/CMD640,"CMD640, the California Micro Devices Technology Inc product 0640, is an IDE interface chip for the PCI and VLB buses. CMD640 offered some hardware acceleration: WDMA and Read-Ahead (prefetch) support. CMD Technology Inc was acquired by Silicon Image Inc. in 2001. Hardware bug The original CMD640 has data corruption bugs, some of which remained in CMD646. The data corruption bug is similar to the bug affecting the contemporaneous PC Tech (a subsidiary of Zeos) RZ1000 chipset. Both chipsets were used on a number of motherboards, including those from Intel. Modern operating systems have a workaround for this bug by prohibiting aggressive acceleration mode and losing about 10% of the performance." https://en.wikipedia.org/wiki/Period%20%28algebraic%20geometry%29,"In algebraic geometry, a period is a number that can be expressed as an integral of an algebraic function over an algebraic domain. Sums and products of periods remain periods, so that the periods form a ring. Maxim Kontsevich and Don Zagier gave a survey of periods and introduced some conjectures about them. Periods also arise in computing the integrals associated with Feynman diagrams, and there has been intensive work trying to understand the connections. Definition A real number is a period if it is of the form where is a polynomial and a rational function on with rational coefficients. A complex number is a period if its real and imaginary parts are periods. An alternative definition allows and to be algebraic functions; this looks more general, but is equivalent. The coefficients of the rational functions and polynomials can also be generalised to algebraic numbers because irrational algebraic numbers are expressible in terms of areas of suitable domains. In the other direction, can be restricted to be the constant function or , by replacing the integrand with an integral of over a region defined by a polynomial in additional variables. In other words, a (nonnegative) period is the volume of a region in defined by a polynomial inequality. Examples Besides the algebraic numbers, the following numbers are known to be periods: The natural logarithm of any positive algebraic number a, which is Elliptic integrals with rational arguments All zeta constants (the Riemann zeta function of an integer) and multiple zeta values Special values of hypergeometric functions at algebraic arguments Γ(p/q)q for natural numbers p and q. An example of a real number that is not a period is given by Chaitin's constant Ω. Any other non-computable number also gives an example of a real number that is not a period. 
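To illustrate the definition in the period entry above, whose integral formulas were stripped during extraction, two standard integral representations exhibiting familiar constants as periods are:

```latex
% pi as the area of the unit disk, a region cut out by a polynomial inequality:
\pi = \iint_{x^2 + y^2 \le 1} \mathrm{d}x\,\mathrm{d}y .
% The natural logarithm of 2 as the integral of a rational function
% over a domain with rational endpoints:
\ln 2 = \int_{1}^{2} \frac{\mathrm{d}x}{x} .
```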
Currently there are no natural examples of computable numbers that have been proved not to be periods, however it is possible to construct artif" https://en.wikipedia.org/wiki/History%20of%20mathematical%20notation,"The history of mathematical notation includes the commencement, progress, and cultural diffusion of mathematical symbols and the conflict of the methods of notation confronted in a notation's move to popularity or inconspicuousness. Mathematical notation comprises the symbols used to write mathematical equations and formulas. Notation generally implies a set of well-defined representations of quantities and symbols operators. The history includes Hindu–Arabic numerals, letters from the Roman, Greek, Hebrew, and German alphabets, and a host of symbols invented by mathematicians over the past several centuries. The development of mathematical notation can be divided in stages: The ""rhetorical"" stage is where calculations are performed by words and no symbols are used. The ""syncopated"" stage is where frequently used operations and quantities are represented by symbolic syntactical abbreviations. From ancient times through the post-classical age, bursts of mathematical creativity were often followed by centuries of stagnation. As the early modern age opened and the worldwide spread of knowledge began, written examples of mathematical developments came to light. The ""symbolic"" stage is where comprehensive systems of notation supersede rhetoric. Beginning in Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries were made at an increasing pace that continues through the present day. This symbolic system was in use by medieval Indian mathematicians and in Europe since the middle of the 17th century, and has continued to develop in the contemporary era. The area of study known as the history of mathematics is primarily an investigation into the origin of discoveries in mathematics and the focus here, the investigation into the mathematical methods and notation of the past. Rhetorical stage Although the history commences with that of the Ionian schools, there is no doubt that those Ancient Greeks who paid attention to i" https://en.wikipedia.org/wiki/List%20of%20physics%20journals,"This is a list of physics journals with existing articles on Wikipedia. The list is organized by subfields of physics. 
By subject General Astrophysics Atomic, molecular, and optical physics European Physical Journal D Journal of Physics B Laser Physics Molecular Physics Physical Review A Plasmas Measurement Measurement Science and Technology Metrologia Review of Scientific Instruments Nuclear and particle physics Optics Computational physics Computational Materials Science Computer Physics Communications International Journal of Modern Physics C (computational physics, physical computations) Journal of Computational Physics Physical Review E, section E13 Communications in Computational Physics Condensed matter and materials science Low temperature physics Journal of Low Temperature Physics Low Temperature Physics Chemical physics Chemical Physics Letters Journal of Chemical Physics Journal of Physical Chemistry A Journal of Physical Chemistry B Journal of Physical Chemistry C Journal of Physical Chemistry Letters Physical Chemistry Chemical Physics Soft matter physics European Physical Journal E Journal of Polymer Science Part B Soft Matter Medical physics Australasian Physical & Engineering Sciences in Medicine BMC Medical Physics Bioelectromagnetics Health Physics Journal of Medical Physics Magnetic Resonance in Medicine Medical Physics Physics in Medicine and Biology Biological physics Annual Review of Biophysics Biochemical and Biophysical Research Communications Biophysical Journal Biophysical Reviews and Letters Doklady Biochemistry and Biophysics European Biophysics Journal International Journal of Biological Macromolecules Physical Biology Radiation and Environmental Biophysics Statistical and nonlinear physics Theoretical and mathematical physics Quantum information Quantum Journal of Quantum Information Science International Journal of Quantum Information npj Quantum Information Geophysic" https://en.wikipedia.org/wiki/Double%20counting%20%28proof%20technique%29,"In combinatorics, double counting, also called counting in two ways, is a combinatorial proof technique for showing that two expressions are equal by demonstrating that they are two ways of counting the size of one set. In this technique, which call ""one of the most important tools in combinatorics"", one describes a finite set from two perspectives leading to two distinct expressions for the size of the set. Since both expressions equal the size of the same set, they equal each other. Examples Multiplication (of natural numbers) commutes This is a simple example of double counting, often used when teaching multiplication to young children. In this context, multiplication of natural numbers is introduced as repeated addition, and is then shown to be commutative by counting, in two different ways, a number of items arranged in a rectangular grid. Suppose the grid has rows and columns. We first count the items by summing rows of items each, then a second time by summing columns of items each, thus showing that, for these particular values of and , . Forming committees One example of the double counting method counts the number of ways in which a committee can be formed from people, allowing any number of the people (even zero of them) to be part of the committee. That is, one counts the number of subsets that an -element set may have. One method for forming a committee is to ask each person to choose whether or not to join it. Each person has two choices – yes or no – and these choices are independent of those of the other people. Therefore there are possibilities. 
Alternatively, one may observe that the size of the committee must be some number between 0 and . For each possible size , the number of ways in which a committee of people can be formed from people is the binomial coefficient Therefore the total number of possible committees is the sum of binomial coefficients over . Equating the two expressions gives the identity a special case of the b" https://en.wikipedia.org/wiki/Outline%20of%20geometry,"Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. Geometry is one of the oldest mathematical sciences. Classical branches Geometry Analytic geometry Differential geometry Euclidean geometry Non-Euclidean geometry Projective geometry Riemannian geometry Contemporary branches Absolute geometry Affine geometry Archimedes' use of infinitesimals Birational geometry Complex geometry Combinatorial geometry Computational geometry Conformal geometry Constructive solid geometry Contact geometry Convex geometry Descriptive geometry Digital geometry Discrete geometry Distance geometry Elliptic geometry Enumerative geometry Epipolar geometry Finite geometry Geometry of numbers Hyperbolic geometry Incidence geometry Information geometry Integral geometry Inversive geometry Klein geometry Lie sphere geometry Numerical geometry Ordered geometry Parabolic geometry Plane geometry Quantum geometry Ruppeiner geometry Spherical geometry Symplectic geometry Synthetic geometry Systolic geometry Taxicab geometry Toric geometry Transformation geometry Tropical geometry History of geometry History of geometry Timeline of geometry Babylonian geometry Egyptian geometry Ancient Greek geometry Euclidean geometry Pythagorean theorem Euclid's Elements Measurement of a Circle Indian mathematics Bakhshali manuscript Modern geometry History of analytic geometry History of the Cartesian coordinate system History of non-Euclidean geometry History of topology History of algebraic geometry General geometry concepts General concepts Geometric progression — Geometric shape — Geometry — Pi — angular velocity — linear velocity — De Moivre's theorem — parallelogram rule — Pythagorean theorem — similar triangles — trigonometric identity — unit circle — Trapezoid — Triangle — Theorem — point — ray — plane — line — line segment Measurements Bearing A" https://en.wikipedia.org/wiki/Greek%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering,"Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. In these contexts, the capital letters and the small letters represent distinct and unrelated entities. Those Greek letters which have the same form as Latin letters are rarely used: capital A, B, E, Z, H, I, K, M, N, O, P, T, Y, X. Small ι, ο and υ are also rarely used, since they closely resemble the Latin letters i, o and u. Sometimes, font variants of Greek letters are used as distinct symbols in mathematics, in particular for ε/ϵ and π/ϖ. The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used. The Bayer designation naming scheme for stars typically uses the first Greek letter, α, for the brightest star in each constellation, and runs through the alphabet before switching to Latin letters. In mathematical finance, the Greeks are the variables denoted by Greek letters used to describe the risk of certain investments. 
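The committee-counting argument in the double counting entry above yields a standard identity whose formulas were stripped during extraction; counting the subsets of an n-element set in the two ways described gives:

```latex
\sum_{k=0}^{n} \binom{n}{k} = 2^{n}
```

The left-hand side counts committees grouped by their size k; the right-hand side counts the n independent yes/no choices.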
Typography The Greek letter forms used in mathematics are often different from those used in Greek-language text: they are designed to be used in isolation, not connected to other letters, and some use variant forms which are not normally used in current Greek typography. The OpenType font format has the feature tag ""mgrk"" (""Mathematical Greek"") to identify a glyph as representing a Greek letter to be used in mathematical (as opposed to Greek language) contexts. The table below shows a comparison of Greek letters rendered in TeX and HTML. The font used in the TeX rendering is an italic style. This is in line with the convention that variables should be italicized. As Greek letters are more often than not used as variables in mathematical formulas, a Greek letter appearing similar to the TeX rendering is more likely to be encountered in works involving mathematics. Concepts represented by a Greek letter Αα (alpha) repr" https://en.wikipedia.org/wiki/Geometry%20template,"A geometry template is a piece of clear plastic with cut-out shapes for use in mathematics and other subjects in primary school through secondary school. It also has various measurements on its sides to be used like a ruler. In Australia, popular brands include Mathomat and MathAid. Brands Mathomat and Mathaid Mathomat is a trademark used for a plastic stencil developed in Australia by Craig Young in 1969, who originally worked as an engineering tradesperson in the Government Aircraft Factories (GAF) in Melbourne before retraining and working as head of mathematics in a secondary school in Melbourne. Young designed Mathomat to address what he perceived as limitations of traditional mathematics drawing sets in classrooms, mainly caused by students losing parts of the sets. The Mathomat stencil has a large number of geometric shape stencils combined with the functions of a technical drawing set (rulers, set squares, a protractor and circle stencils to replace a compass). The template made use of polycarbonate – a new type of thermoplastic polymer when Mathomat first came out – which was strong and transparent enough to allow for a large number of stencil shapes to be included in its design without breaking or tearing. The first template was exhibited in 1970 at a mathematics conference in Melbourne along with a series of popular mathematics teaching lesson plans; it became an immediate success with a large number of schools specifying it as a required student purchase. As of 2017, the stencil is widely specified in Australian schools, chiefly for students at early secondary school level. The manufacturing of Mathomat was taken over in 1989 by the W&G drawing instrument company, which had a factory in Melbourne for the manufacture of technical drawing instruments. Young also developed MathAid, which was initially produced by him when he was living in Ringwood, Victoria. He later sold the company. W&G published a series of teacher resource books for Mathomat authored by " https://en.wikipedia.org/wiki/Heuristic,"A heuristic (; ), or heuristic technique, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation in a search space. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. 
Heuristics can be mental shortcuts that ease the cognitive load of making a decision. Examples that employ heuristics include using trial and error, a rule of thumb or an educated guess. Heuristics are the strategies derived from previous experiences with similar problems. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues. When an individual applies a heuristic in practice, it generally performs as expected. However it can alternatively create systematic errors. The most fundamental heuristic is trial and error, which can be used in everything from matching nuts and bolts to finding the values of variables in algebra problems. In mathematics, some common heuristics involve the use of visual representations, additional assumptions, forward/backward reasoning and simplification. Here are a few commonly used heuristics from George Pólya's 1945 book, How to Solve It: In psychology, heuristics are simple, efficient rules, either learned or inculcated by evolutionary processes. These psychological heuristics have been proposed to explain how people make decisions, come to judgements, and solve problems. These rules typically come into play when people face complex problems or incomplete information. Researchers employ various methods to test whether people use these rules. The rules have been shown to work well under most circumstances, but in certain cases can lead to systematic errors or cognitive biases. Hist" https://en.wikipedia.org/wiki/Outline%20of%20actuarial%20science,"The following outline is provided as an overview of and topical guide to actuarial science: Actuarial science – discipline that applies mathematical and statistical methods to assess risk in the insurance and finance industries. What type of thing is actuarial science? 
Actuarial science can be described as all of the following: An academic discipline – A branch of science – An applied science – A subdiscipline of statistics – Essence of actuarial science Actuarial science Actuary Actuarial notation Fields in which actuarial science is applied Mathematical finance Insurance, especially: Life insurance Health insurance Human resource consulting History of actuarial science History of actuarial science General actuarial science concepts Insurance Health insurance Life Insurance Life insurance Life insurer Insurable interest Insurable risk Annuity Life annuity Perpetuity New Business Strain Zillmerisation Financial reinsurance Net premium valuation Gross premium valuation Embedded value European Embedded Value Stochastic modelling Asset liability modelling Non-life Insurance Property insurance Casualty insurance Vehicle insurance Ruin theory Stochastic modelling Risk and capital management in non-life insurance Reinsurance Reinsurance Financial reinsurance Reinsurance Actuarial Premium Reinsurer Investments & Asset Management Dividend yield PE ratio Bond valuation Yield to maturity Cost of capital Net asset value Derivatives Mathematics of Finance Financial mathematics Interest Time value of money Discounting Present value Future value Net present value Internal rate of return Yield curve Yield to maturity Effective annual rate (EAR) Annual percentage rate (APR) Mortality Force of mortality Life table Pensions Pensions Stochastic modelling Other Enterprise risk management Fictional actuaries Persons influential in the field of actuarial science List of actuaries See also In" https://en.wikipedia.org/wiki/Strict,"In mathematical writing, the term strict refers to the property of excluding equality and equivalence and often occurs in the context of inequality and monotonic functions. It is often attached to a technical term to indicate that the exclusive meaning of the term is to be understood. The opposite is non-strict, which is often understood to be the case but can be put explicitly for clarity. In some contexts, the word ""proper"" can also be used as a mathematical synonym for ""strict"". Use This term is commonly used in the context of inequalities — the phrase ""strictly less than"" means ""less than and not equal to"" (likewise ""strictly greater than"" means ""greater than and not equal to""). More generally, a strict partial order, strict total order, and strict weak order exclude equality and equivalence. When comparing numbers to zero, the phrases ""strictly positive"" and ""strictly negative"" mean ""positive and not equal to zero"" and ""negative and not equal to zero"", respectively. In the context of functions, the adverb ""strictly"" is used to modify the terms ""monotonic"", ""increasing"", and ""decreasing"". On the other hand, sometimes one wants to specify the inclusive meanings of terms. In the context of comparisons, one can use the phrases ""non-negative"", ""non-positive"", ""non-increasing"", and ""non-decreasing"" to make it clear that the inclusive sense of the terms is being used. The use of such terms and phrases helps avoid possible ambiguity and confusion. For instance, when reading the phrase ""x is positive"", it is not immediately clear whether x = 0 is possible, since some authors might use the term positive loosely to mean that x is not less than zero. Such an ambiguity can be mitigated by writing ""x is strictly positive"" for x > 0, and ""x is non-negative"" for x ≥ 0. 
(A precise term like non-negative is never used with the word negative in the wider sense that includes zero.) The word ""proper"" is often used in the same way as ""strict"". For example, a ""proper subset"" of " https://en.wikipedia.org/wiki/Turing%27s%20proof,"Turing's proof is a proof by Alan Turing, first published in January 1937 with the title ""On Computable Numbers, with an Application to the Entscheidungsproblem"". It was the second proof (after Church's theorem) of the negation of Hilbert's Entscheidungsproblem; that is, the conjecture that some purely mathematical yes–no questions can never be answered by computation; more technically, that some decision problems are ""undecidable"" in the sense that there is no single algorithm that infallibly gives a correct ""yes"" or ""no"" answer to each instance of the problem. In Turing's own words: ""what I shall prove is quite different from the well-known results of Gödel ... I shall now show that there is no general method which tells whether a given formula U is provable in K [Principia Mathematica]"". Turing followed this proof with two others. The second and third both rely on the first. All rely on his development of typewriter-like ""computing machines"" that obey a simple set of rules and his subsequent development of a ""universal computing machine"". Summary of the proofs In his proof that the Entscheidungsproblem can have no solution, Turing proceeded from two proofs that were to lead to his final proof. His first theorem is most relevant to the halting problem, the second is more relevant to Rice's theorem. First proof: that no ""computing machine"" exists that can decide whether or not an arbitrary ""computing machine"" (as represented by an integer 1, 2, 3, . . .) is ""circle-free"" (i.e. goes on printing its number in binary ad infinitum): ""...we have no general process for doing this in a finite number of steps"" (p. 132, ibid.). Turing's proof, although it seems to use the ""diagonal process"", in fact shows that his machine (called H) cannot calculate its own number, let alone the entire diagonal number (Cantor's diagonal argument): ""The fallacy in the argument lies in the assumption that B [the diagonal number] is computable"" The proof does not require much mathematics. Second proof: This one is perhaps more f" https://en.wikipedia.org/wiki/Divisibility%20rule,"A divisibility rule is a shorthand and useful way of determining whether a given integer is divisible by a fixed divisor without performing the division, usually by examining its digits. Although there are divisibility tests for numbers in any radix, or base, and they are all different, this article presents rules and examples only for decimal, or base 10, numbers. Martin Gardner explained and popularized these rules in his September 1962 ""Mathematical Games"" column in Scientific American. Divisibility rules for numbers 1–30 The rules given below transform a given number into a generally smaller number, while preserving divisibility by the divisor of interest. Therefore, unless otherwise noted, the resulting number should be evaluated for divisibility by the same divisor. In some cases the process can be iterated until the divisibility is obvious; for others (such as examining the last n digits) the result must be examined by other means. For divisors with multiple rules, the rules are generally ordered first for those appropriate for numbers with many digits, then those useful for numbers with fewer digits. 
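As a concrete companion to the divisibility rules that follow, here is a minimal Python sketch (an illustration added here, not part of the article; the sample numbers are arbitrary). It applies two of the rules described just below: a power of 2 depends only on the last n digits, and a composite divisor such as 24 = 2^3 × 3 can be tested one prime power at a time, using the digit-sum rule for 3.

```python
# Illustrative sketch of two common divisibility rules: the last-n-digits
# rule for powers of 2 (or 5), and testing a composite divisor prime power
# by prime power.

def divisible_by_power(number: int, base: int, n: int) -> bool:
    """Test divisibility by base**n (base 2 or 5) using only the last n digits."""
    last_n_digits = int(str(abs(number))[-n:])
    return last_n_digits % base**n == 0

def divisible_by_24(number: int) -> bool:
    """24 = 8 x 3, so test the two prime-power factors separately."""
    by_8 = divisible_by_power(number, 2, 3)                  # last 3 digits mod 8
    by_3 = sum(int(d) for d in str(abs(number))) % 3 == 0    # digit-sum rule for 3
    return by_8 and by_3

print(divisible_by_power(376, 2, 1))  # True: the last digit 6 is divisible by 2
print(divisible_by_24(8328))          # True: 8328 = 24 x 347
```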
To test the divisibility of a number by a power of 2 or a power of 5 (2^n or 5^n, in which n is a positive integer), one need only look at the last n digits of that number. To test divisibility by any number expressed as the product of prime factors p1^n1 × p2^n2 × ... × pk^nk, we can separately test for divisibility by each prime to its appropriate power. For example, testing divisibility by 24 (24 = 8×3 = 2^3×3) is equivalent to testing divisibility by 8 (2^3) and 3 simultaneously, thus we need only show divisibility by 8 and by 3 to prove divisibility by 24. Step-by-step examples Divisibility by 2 First, take any number (for this example it will be 376) and note the last digit in the number, discarding the other digits. Then take that digit (6) while ignoring the rest of the number and determine if it is divisible by 2. If it is divisible by 2, then the original number is divis" https://en.wikipedia.org/wiki/BullSequana,"BullSequana is the brand name of a range of high performance computer systems produced by Atos. The range includes BullSequana S series - a modular compute platform optimised for AI and GPU-intensive tasks. BullSequana X series - supercomputers which are claimed to operate at exascale" https://en.wikipedia.org/wiki/Pulse%20duration,"In signal processing and telecommunication, pulse duration is the interval between the time, during the first transition, that the amplitude of the pulse reaches a specified fraction (level) of its final amplitude, and the time the pulse amplitude drops, on the last transition, to the same level. The interval between the 50% points of the final amplitude is usually used to determine or define pulse duration, and this is understood to be the case unless otherwise specified. Other fractions of the final amplitude, e.g., 90% or 1/e, may also be used, as may the root mean square (rms) value of the pulse amplitude. In radar, the pulse duration is the time the radar's transmitter is energized during each cycle." https://en.wikipedia.org/wiki/List%20of%20graphs,"This partial list of graphs contains definitions of graphs and graph families. For collected definitions of graph theory terms that do not refer to individual graph types, such as vertex and path, see Glossary of graph theory. For links to existing articles about particular kinds of graphs, see Category:Graphs. Some of the finite structures considered in graph theory have names, sometimes inspired by the graph's topology, and sometimes after their discoverer. A famous example is the Petersen graph, a concrete graph on 10 vertices that appears as a minimal example or counterexample in many different contexts. Individual graphs Highly symmetric graphs Strongly regular graphs The strongly regular graph on v vertices and rank k is usually denoted srg(v,k,λ,μ). Symmetric graphs A symmetric graph is one in which there is a symmetry (graph automorphism) taking any ordered pair of adjacent vertices to any other ordered pair; the Foster census lists all small symmetric 3-regular graphs. Every strongly regular graph is symmetric, but not vice versa. Semi-symmetric graphs Graph families Complete graphs The complete graph on n vertices is often called the n-clique and usually denoted Kn, from German komplett. Complete bipartite graphs The complete bipartite graph on parts of size m and n is usually denoted Km,n. For K1,n see the section on star graphs. The graph K2,2 equals the 4-cycle C4 (the square) introduced below. Cycles The cycle graph on n vertices is called the n-cycle and usually denoted Cn. It is also called a cyclic graph, a polygon or the n-gon. 
Special cases are the triangle C3, the square C4, and then several with Greek naming: pentagon C5, hexagon C6, etc. Friendship graphs The friendship graph Fn can be constructed by joining n copies of the cycle graph C3 with a common vertex. Fullerene graphs In graph theory, the term fullerene refers to any 3-regular, planar graph with all faces of size 5 or 6 (including the external face). It follows from Euler's polyhedron formula, V – E + F = 2 (where V, E, F indic" https://en.wikipedia.org/wiki/Animalia%20Paradoxa,"Animalia Paradoxa (Latin for ""contradictory animals""; cf. paradox) are the mythical, magical or otherwise suspect animals mentioned in the first five editions of Carl Linnaeus's seminal work under the header ""Paradoxa"". It lists fantastic creatures found in medieval bestiaries and some animals reported by explorers from abroad and explains why they are excluded from Systema Naturae. According to Swedish historian Gunnar Broberg, it was intended to offer a natural explanation and demystify the world of superstition. Paradoxa was dropped from Linnaeus' classification system as of the 6th edition (1748). Paradoxa These 10 taxa appear in the 1st to 5th editions: Hydra: Linnaeus wrote: ""Hydra: body of a snake, with two feet, seven necks and the same number of heads, lacking wings, preserved in Hamburg, similar to the description of the Hydra of the Apocalypse of St. John chapters 12 and 13. And it is provided by very many as a true species of animal, but falsely. Nature for itself and always the similar, never naturally makes multiple heads on one body. Fraud and artifice, as we ourselves saw [on it] teeth of a weasel, different from teeth of an Amphibian [or reptile], easily detected."" See Carl Linnaeus#Doctorate. (Distinguish from the small real coelenterate Hydra (genus).) Rana-Piscis: a South American frog which is significantly smaller than its tadpole stage; it was thus (incorrectly) reported to Linnaeus that the metamorphosis in this species went from 'frog to fish'. In the Paradoxa in the 1st edition of Systema Naturae, Linnaeus wrote ""Frog-Fish or Frog Changing into Fish: is much against teaching. Frogs, like all Amphibia, delight in lungs and spiny bones. Spiny fish, instead of lungs, are equipped with gills. Therefore the laws of Nature will be against this change. If indeed a fish is equipped with gills, it will be separate from the Frog and Amphibia. If truly [it has] lungs, it will be a Lizard: for under all the sky it differs from Chondropterygii and Plagiuri."" In the 10th editi" https://en.wikipedia.org/wiki/Experimental%20biology,"Experimental biology is the set of approaches in the field of biology concerned with the conduct of experiments to investigate and understand biological phenomena. The term is opposed to theoretical biology, which is concerned with the mathematical modelling and abstractions of biological systems. Due to the complexity of the investigated systems, biology is primarily an experimental science. However, as a consequence of the modern increase in computational power, it is now becoming more feasible to find approximate solutions and validate mathematical models of complex living organisms. The methods employed in experimental biology are numerous and of different nature, including molecular, biochemical, biophysical, microscopical and microbiological. See :Category:Laboratory techniques for a list of biological experimental techniques. 
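Returning to the friendship graph defined in the graph list above: the construction of Fn by joining n triangles at a common vertex is short enough to sketch in code. The following Python snippet is an illustration added here (the adjacency-map representation and names are arbitrary choices); it builds the graph and checks the expected counts of 2n + 1 vertices and 3n edges.

```python
# Illustrative construction of the friendship graph F_n: n copies of the
# triangle C_3 joined at one common vertex.

def friendship_graph(n: int) -> dict[int, set[int]]:
    adj = {0: set()}                    # vertex 0 is the shared hub
    for k in range(n):
        a, b = 2 * k + 1, 2 * k + 2     # the two outer vertices of triangle k
        adj.setdefault(a, set())
        adj.setdefault(b, set())
        for u, v in [(0, a), (0, b), (a, b)]:
            adj[u].add(v)
            adj[v].add(u)
    return adj

g = friendship_graph(3)                         # F_3, the windmill on 7 vertices
edges = sum(len(nbrs) for nbrs in g.values()) // 2
print(len(g), edges)                            # 7 9  (2n+1 vertices, 3n edges)
```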
Gallery" https://en.wikipedia.org/wiki/Species,"A species () is often defined as the largest group of organisms in which any two individuals of the appropriate sexes or mating types can produce fertile offspring, typically by sexual reproduction. It is the basic unit of classification and a taxonomic rank of an organism, as well as a unit of biodiversity. Other ways of defining species include their karyotype, DNA sequence, morphology, behaviour, or ecological niche. In addition, paleontologists use the concept of the chronospecies since fossil reproduction cannot be examined. The most recent rigorous estimate for the total number of species of eukaryotes is between 8 and 8.7 million. About 14% of these had been described by 2011. All species (except viruses) are given a two-part name, a ""binomial"". The first part of a binomial is the genus to which the species belongs. The second part is called the specific name or the specific epithet (in botanical nomenclature, also sometimes in zoological nomenclature). For example, Boa constrictor is one of the species of the genus Boa, with constrictor being the species' epithet. While the definitions given above may seem adequate at first glance, when looked at more closely they represent problematic species concepts. For example, the boundaries between closely related species become unclear with hybridisation, in a species complex of hundreds of similar microspecies, and in a ring species. Also, among organisms that reproduce only asexually, the concept of a reproductive species breaks down, and each clone is potentially a microspecies. Although none of these are entirely satisfactory definitions, and while the concept of species may not be a perfect model of life, it is still a useful tool to scientists and conservationists for studying life on Earth, regardless of the theoretical difficulties. If species were fixed and clearly distinct from one another, there would be no problem, but evolutionary processes cause species to change. This obliges taxonomists to decide, " https://en.wikipedia.org/wiki/List%20of%20minerals%20by%20optical%20properties," See also List of minerals" https://en.wikipedia.org/wiki/Simultaneous%20multithreading,"Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better use the resources provided by modern processor architectures. Details The term multithreading is ambiguous, because not only can multiple threads be executed simultaneously on one CPU core, but also multiple tasks (with different page tables, different task state segments, different protection rings, different I/O permissions, etc.). Although running on the same core, they are completely separated from each other. Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors. Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading (also known as super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executed in any given pipeline stage at a time. 
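A rough way to see the simultaneous multithreading described above on a real machine is to compare the number of logical processors with the number of physical cores: with SMT enabled, the operating system usually reports more of the former. The Python sketch below is an illustration added here; it assumes the third-party psutil package is installed, and the comparison is only a heuristic, not a definitive test.

```python
# Rough, illustrative check of whether SMT appears to be enabled on the
# current machine. Requires the third-party psutil package (assumed here);
# the logical-vs-physical comparison is only an approximation.
import psutil

logical = psutil.cpu_count(logical=True)
physical = psutil.cpu_count(logical=False)
threads_per_core = logical // physical if physical else None

print(f"logical processors: {logical}, physical cores: {physical}")
print("SMT likely enabled" if threads_per_core and threads_per_core > 1
      else "SMT likely disabled or not detectable")
```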
This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads is decided by the chip designers. Two concurrent threads per CPU core are common, but some processors support up to eight concurrent threads per core. Because it inevitably increases conflict on shared resources, measuring or agreeing on its effectiveness can be difficult. However, measured energy efficiency of SMT with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT (hyper-threading) implementations found that in 45 nm and 32 nm implementations, SMT is extremely energy efficient, even with in-order Atom processors. In " https://en.wikipedia.org/wiki/Delay-locked%20loop,"In electronics, a delay-locked loop (DLL) is a pseudo-digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator, replaced by a delay line. A DLL can be used to change the phase of a clock signal (a signal with a periodic waveform), usually to enhance the clock rise-to-data output valid timing characteristics of integrated circuits (such as DRAM devices). DLLs can also be used for clock recovery (CDR). From the outside, a DLL can be seen as a negative delay gate placed in the clock path of a digital circuit. The main component of a DLL is a delay chain composed of many delay gates connected output-to-input. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; a control circuit automatically updates the selector of this multiplexer to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal. Another way to view the difference between a DLL and a PLL is that a DLL uses a variable phase (=delay) block, whereas a PLL uses a variable frequency block. A DLL compares the phase of its last output with the input clock to generate an error signal which is then integrated and fed back as the control to all of the delay elements. The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase this is all that is required. A PLL compares the phase of its oscillator with the incoming signal to generate an error signal which is then integrated to create a control signal for the voltage-controlled oscillator. The control signal impacts the oscillator's frequency, and phase is the integral of frequency, so a second integration is unavoidably performed by the oscillator itself. In the Control Systems jargon, th" https://en.wikipedia.org/wiki/Dell%20M1000e,"The Dell blade server products are built around their M1000e enclosure that can hold their server blades, an embedded EqualLogic iSCSI storage area network and I/O modules including Ethernet, Fibre Channel and InfiniBand switches. Enclosure The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6"" (44.7 cm) wide and 29.7"" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg while a fully loaded system can weigh up to 178.8 kg. 
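Returning to the delay-locked loop described above, the core feedback idea, integrating the phase error and feeding the result back as the delay-line control, can be caricatured in a few lines. The Python sketch below is a toy model added here, not a description of any real DLL; the loop gain and target phase are arbitrary illustration values.

```python
# Toy numerical sketch of the DLL feedback idea: the phase error between the
# delayed clock and the target is integrated into the control value, so the
# delay settles where the error is zero.

target_phase = 0.25      # desired delay, in fractions of a clock period
delay = 0.0              # current delay produced by the delay line
control = 0.0            # integrated error driving the delay elements
gain = 0.2               # loop gain of the integrator

for step in range(40):
    error = target_phase - delay   # phase detector output
    control += gain * error        # integrator accumulates the error
    delay = control                # delay line tracks the control value

print(round(delay, 6))             # converges towards 0.25 (phase lock)
```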
The servers are inserted at the front, while the power supplies, fans and I/O modules are inserted at the back together with the management module(s) (CMC or chassis management controller) and the KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade-system. Most servers used in the blade-system offer an iDRAC card and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to have access to the main console of each installed server. In June 2013 Dell introduced the PowerEdge VRTX, which is a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy, e.g. M520, M620 (only blades supported), are not interchangeable between the VRTX and the M1000e. The blades differ in firmware and mezzanine connectors. In 2018 Dell introduced the Dell PE MX7000, a new MX enclosure model and the next generation of Dell enclosures. The M1000e enclosure has a front side and a back side, and thus all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors at both sides, where the front side is dedicated to server blades and the back to I/O modules. Midplane The midplane is completely passive. The server blades are inserted in the front side of the enclosure while all other components can be reached via the back. The original midplane 1.0 capabilities are Fabric A - Ethernet" https://en.wikipedia.org/wiki/Semiconductor%20Chip%20Protection%20Act%20of%201984,"The Semiconductor Chip Protection Act of 1984 (or SCPA) is an act of the US Congress that makes the layouts of integrated circuits legally protected upon registration, and hence illegal to copy without permission. It is an integrated circuit layout design protection law. Background Prior to 1984, it was not necessarily illegal to produce a competing chip with an identical layout. As the legislative history for the SCPA explained, patent and copyright protection for chip layouts, or chip topographies, was largely unavailable. This led to considerable complaint by American chip manufacturers—notably, Intel, which, along with the Semiconductor Industry Association (SIA), took the lead in seeking remedial legislation—against what they termed ""chip piracy."" During the hearings that led to enactment of the SCPA, chip industry representatives asserted that a pirate could copy a chip design for $100,000 in 3 to 5 months that had cost its original manufacturer upwards of $1 million to design. Enactment of US and other national legislation In 1984 the United States enacted the Semiconductor Chip Protection Act of 1984 (the SCPA) to protect the topography of semiconductor chips. The SCPA is found in title 17, U.S. Code, sections 901-914 (17 U.S.C. §§ 901-914). Japan and European Community (EC) countries soon followed suit and enacted their own, similar laws protecting the topography of semiconductor chips. Chip topographies are also protected by TRIPS, an international treaty. How the SCPA operates Sui generis law Although the U.S. SCPA is codified in title 17 (copyrights), the SCPA is not a copyright or patent law. Rather, it is a sui generis law resembling a utility model law or Gebrauchsmuster. It has some aspects of copyright law, some aspects of patent law, and in some ways, it is completely different from either. 
From Brooktree, ¶ 23: The Semiconductor Chip Protection Act of 1984 was an innovative solution to this new problem of technology-based industry. While" https://en.wikipedia.org/wiki/Noise-domain%20reflectometry,"Noise-domain reflectometry is a type of reflectometry where the reflectometer exploits existing data signals on wiring and does not have to generate any signals itself. Noise-domain reflectometry, like time-domain and spread-spectrum time domain reflectometers, is most often used in identifying the location of wire faults in electrical lines. Time-domain reflectometers work by generating a signal and then sending that signal down the wireline and examining the reflected signal. Noise-domain reflectometers (NDRs) provide the benefit of locating wire faults without introducing an external signal because the NDR examines the existing signals on the line to identify wire faults. This technique is particularly useful in the testing of live wires where data integrity on the wires is critical. For example, NDRs can be used for monitoring aircraft wiring while in flight. See also Spread-spectrum time-domain reflectometry Time-domain reflectometry" https://en.wikipedia.org/wiki/Mesh%20analysis,"Mesh analysis (or the mesh current method) is a method that is used to solve planar circuits for the currents (and indirectly the voltages) at any place in the electrical circuit. Planar circuits are circuits that can be drawn on a plane surface with no wires crossing each other. A more general technique, called loop analysis (with the corresponding network variables called loop currents) can be applied to any circuit, planar or not. Mesh analysis and loop analysis both make use of Kirchhoff’s voltage law to arrive at a set of equations guaranteed to be solvable if the circuit has a solution. Mesh analysis is usually easier to use when the circuit is planar, compared to loop analysis. Mesh currents and essential meshes Mesh analysis works by arbitrarily assigning mesh currents in the essential meshes (also referred to as independent meshes). An essential mesh is a loop in the circuit that does not contain any other loop. Figure 1 labels the essential meshes with one, two, and three. A mesh current is a current that loops around the essential mesh and the equations are solved in terms of them. A mesh current may not correspond to any physically flowing current, but the physical currents are easily found from them. It is usual practice to have all the mesh currents loop in the same direction. This helps prevent errors when writing out the equations. The convention is to have all the mesh currents looping in a clockwise direction. Figure 2 shows the same circuit from Figure 1 with the mesh currents labeled. Solving for mesh currents instead of directly applying Kirchhoff's current law and Kirchhoff's voltage law can greatly reduce the amount of calculation required. This is because there are fewer mesh currents than there are physical branch currents. In figure 2 for example, there are six branch currents but only three mesh currents. Setting up the equations Each mesh produces one equation. These equations are the sum of the voltage drops in a comple" https://en.wikipedia.org/wiki/Soil%20seed%20bank,"The soil seed bank is the natural storage of seeds, often dormant, within the soil of most ecosystems. The study of soil seed banks started in 1859 when Charles Darwin observed the emergence of seedlings using soil samples from the bottom of a lake. 
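Returning to the mesh-current method described above: once the KVL equation for each essential mesh is written down, solving for the mesh currents is just a small linear system. The Python sketch below is an illustration added here for a hypothetical two-mesh circuit whose component values are invented (a 10 V source with R1 in mesh 1, R2 in mesh 2, and R3 shared between the two meshes).

```python
# Minimal mesh-analysis sketch for a hypothetical two-mesh circuit.
# Clockwise mesh currents I1, I2; KVL around each mesh gives:
#   (R1 + R3)*I1 - R3*I2 = V        (mesh 1)
#   -R3*I1 + (R2 + R3)*I2 = 0       (mesh 2)
import numpy as np

R1, R2, R3, V = 2.0, 6.0, 4.0, 10.0   # invented component values

A = np.array([[R1 + R3, -R3],
              [-R3, R2 + R3]])
b = np.array([V, 0.0])

I1, I2 = np.linalg.solve(A, b)
print(f"I1 = {I1:.3f} A, I2 = {I2:.3f} A")   # branch current through R3 is I1 - I2
```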
The first scientific paper on the subject was published in 1882 and reported on the occurrence of seeds at different soil depths. Weed seed banks have been studied intensely in agricultural science because of their important economic impacts; other fields interested in soil seed banks include forest regeneration and restoration ecology. Henry David Thoreau wrote that the contemporary popular belief explaining the succession of a logged forest, specifically to trees of a dissimilar species to the trees cut down, was that seeds either spontaneously generated in the soil, or sprouted after lying dormant for centuries. However, he dismissed this idea, noting that heavy nuts unsuited for distribution by wind were distributed instead by animals. Background Many taxa have been classified according to the longevity of their seeds in the soil seed bank. Seeds of transient species remain viable in the soil seed bank only to the next opportunity to germinate, while seeds of persistent species can survive longer than the next opportunity—often much longer than one year. Species with seeds that remain viable in the soil longer than five years form the long-term persistent seed bank, while species whose seeds generally germinate or die within one to five years are called short-term persistent. A typical long-term persistent species is Chenopodium album (Lambsquarters); its seeds commonly remain viable in the soil for up to 40 years and in rare situations perhaps as long as 1,600 years. A species forming no soil seed bank at all (except the dry season between ripening and the first autumnal rains) is Agrostemma githago (Corncockle), which was formerly a widespread cereal weed. Seed longevity Longevity of seeds is very var" https://en.wikipedia.org/wiki/Resilience%20%28mathematics%29,"In mathematical modeling, resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state. It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition. A common analogy used to explain the concept of resilience of an equilibrium is one of a ball in a valley. A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation. The concept of resilience is particularly useful in systems that exhibit tipping points, whose study has a long history that can be traced back to catastrophe theory. While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems. History In 1973, Canadian ecologist C. S. Holling proposed a definition of resilience in the context of ecological systems. According to Holling, resilience is ""a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables"". Holling distinguished two types of resilience: engineering resilience and ecological resilience. 
Engineering resilience refers to the ability of a system to return to its original state after a disturbance, such as a bridge that can be repaired after an earthquake. Ecological resilience, on the other hand, refers to the ability of a system to maintain its identity and function despite a disturbance, such as a forest that can regenerate after a wildfire while maintain" https://en.wikipedia.org/wiki/Biosphere,"The biosphere (from Greek βίος bíos ""life"" and σφαῖρα sphaira ""sphere""), also known as the ecosphere (from Greek οἶκος oîkos ""environment"" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons. Origin and use of the term The term ""biosphere"" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term ""ecosystem"" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Narrow definition Geochemists define the biosphere as " https://en.wikipedia.org/wiki/The%20COED%20Project,"The COED Project, or the COmmunications and EDiting Project, was an innovative software project created by the Computer Division of NOAA, US Department of Commerce in Boulder, Colorado in the 1970s. This project was designed, purchased and implemented by the in-house computing staff rather than any official organization. Intent The computer division previously had a history of frequently replacing its mainframe computers, starting with a CDC 1604, then a CDC 3600, a couple of CDC 3800s, and finally a CDC 6600. The department also had an XDS 940 timesharing system which would support up to 32 users on dial-up modems. Due to rapidly changing requirements for computer resources, it was expected that new systems would be installed on a regular basis, and the resultant strain on the users to adapt to each new system was perceived to be excessive. The COED project was the result of a study group convened to solve this problem. 
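To make the ball-in-a-valley picture of resilience above concrete: near a stable steady state a perturbation x decays roughly like dx/dt = -k·x, and a larger k plays the role of a deeper valley. The Python sketch below is a toy illustration added here; the rate constants and the 1% tolerance are arbitrary choices, not values from the article.

```python
# Toy illustration of resilience as return rate: linearised recovery
# dx/dt = -k*x, so a perturbation x0 decays within a tolerance after
# time ln(x0/tol)/k. Larger k ("deeper valley") means faster recovery.
import math

def recovery_time(k, x0=1.0, tol=0.01):
    """Time for a perturbation x0 to decay to within tol of the steady state x = 0."""
    return math.log(x0 / tol) / k

for k, label in [(2.0, "deep valley (more resilient)"), (0.2, "shallow valley")]:
    print(f"{label}: returns within 1% after ~{recovery_time(k):.1f} time units")
```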
The project was implemented by the computer specialists who were also responsible for the purchase, installation, and maintenance of all the computers in the division. COED was designed and implemented in long hours of overtime. The data communications aspect of the system was fully implemented and resulted in greatly improved access to the XDS 940 and CDC 6600 systems. It was also used as the front end of the Free University of Amsterdam's SARA system for many years. Design A complete networked system was a pair of Modcomps: a Modcomp II handled up to 256 communication ports, and a Modcomp IV handled the disks and file editing. The system was designed to be fully redundant. If one pair failed, the other automatically took over. All computer systems in the network were kept time-synchronized so that all file dates/times would be accurate - synchronized to the National Bureau of Standards atomic clock, housed in the same building. Another innovation was asynchronous dynamic speed recognition. After a terminal connected to a port, the user would type a Carr" https://en.wikipedia.org/wiki/Junos%20OS,"Junos OS (also known as Juniper Junos, Junos and JUNOS) is a FreeBSD-based network operating system used in Juniper Networks routing, switching and security devices. Versioning Junos OS was first made available on 7 July 1998, with new feature updates being released every quarter 2008. The latest version is Junos OS 23.2, released on 23 June 2023. Architecture The Junos operating system is primarily based on Linux and FreeBSD, with Linux running as bare metal and FreeBSD running in a QEMU virtual machine. Because FreeBSD is a Unix implementation, users can access a Unix shell and execute normal Unix commands. Junos runs on most or all Juniper hardware systems. After the acquisition of NetScreen by Juniper Networks, Juniper integrated ScreenOS security functions into its own Junos network operating system. Junos OS has several architecture variations: Junos OS is based on the FreeBSD operating system and can run as a guest virtual machine (VM) on a Linux VM host. Junos OS Evolved, which runs native Linux and provides direct access to Linux utilities and operations. Both operating systems use the same command-line interface (CLI), the same applications and features and the same management and automation tools—but the Junos OS Evolved infrastructure has been entirely modernized to enable higher availability, accelerated deployment, greater innovation, and improved operational efficiencies. Features Junos SDK Junos's ecosystem includes a Software Development Kit (SDK). The Juniper Developer Network (JDN) provides the Junos SDK to 3rd-party developers who want to develop applications for Junos-powered devices such as Juniper Networks routers, switches, and service gateway systems. It provides a set of tools and application programming interfaces (APIs), including interfaces to Junos routing, firewall filter, UI and traffic services functions. Additionally, the Junos SDK is used to develop other Juniper products such as OpenFlow for Junos, and other traffic se" https://en.wikipedia.org/wiki/Jouanolou%27s%20trick,"In algebraic geometry, Jouanolou's trick is a theorem that asserts, for an algebraic variety X, the existence of a surjection with affine fibers from an affine variety W to X. The variety W is therefore homotopy-equivalent to X, but it has the technically advantageous property of being affine. 
Jouanolou's original statement of the theorem required that X be quasi-projective over an affine scheme, but this has since been considerably weakened. Jouanolou's construction Jouanolou's original statement was: If X is a scheme quasi-projective over an affine scheme, then there exists a vector bundle E over X and an affine E-torsor W. By the definition of a torsor, W comes with a surjective map to X and is Zariski-locally on X an affine space bundle. Jouanolou's proof used an explicit construction. Let S be an affine scheme and . Interpret the affine space as the space of (r + 1) × (r + 1) matrices over S. Within this affine space, there is a subvariety W consisting of idempotent matrices of rank one. The image of such a matrix is therefore a point in X, and the map that sends a matrix to the point corresponding to its image is the map claimed in the statement of the theorem. To show that this map has the desired properties, Jouanolou notes that there is a short exact sequence of vector bundles: where the first map is defined by multiplication by a basis of sections of and the second map is the cokernel. Jouanolou then asserts that W is a torsor for . Jouanolou deduces the theorem in general by reducing to the above case. If X is projective over an affine scheme S, then it admits a closed immersion into some projective space . Pulling back the variety W constructed above for along this immersion yields the desired variety W for X. Finally, if X is quasi-projective, then it may be realized as an open subscheme of a projective S-scheme. Blow up the complement of X to get , and let denote the inclusion morphism. The complement of X in is a Cartier div" https://en.wikipedia.org/wiki/Built-in%20self-test,"A built-in self-test (BIST) or built-in test (BIT) is a mechanism that permits a machine to test itself. Engineers design BISTs to meet requirements such as: high reliability lower repair cycle times or constraints such as: limited technician accessibility cost of testing during manufacture The main purpose of BIST is to reduce the complexity, and thereby decrease the cost and reduce reliance upon external (pattern-programmed) test equipment. BIST reduces cost in two ways: reduces test-cycle duration reduces the complexity of the test/probe setup, by reducing the number of I/O signals that must be driven/examined under tester control. Both lead to a reduction in hourly charges for automated test equipment (ATE) service. Applications BIST is commonly placed in weapons, avionics, medical devices, automotive electronics, complex machinery of all types, unattended machinery of all types, and integrated circuits. Automotive Automotive tests itself to enhance safety and reliability. For example, most vehicles with antilock brakes test them once per safety interval. If the antilock brake system has a broken wire or other fault, the brake system reverts to operating as a normal brake system. Most automotive engine controllers incorporate a ""limp mode"" for each sensor, so that the engine will continue to operate if the sensor or its wiring fails. Another, more trivial example of a limp mode is that some cars test door switches, and automatically turn lights on using seat-belt occupancy sensors if the door switches fail. Aviation Almost all avionics now incorporate BIST. In avionics, the purpose is to isolate failing line-replaceable units, which are then removed and repaired elsewhere, usually in depots or at the manufacturer. 
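For readers who want the Jouanolou construction described above in symbols, the following display is a rendering added here (notation chosen for this sketch, for the case X = P^r over an affine base S, as in the construction just described): W is the locus of rank-one idempotent matrices, and sending a matrix to its image gives the affine-fibred surjection onto projective space.

```latex
% Sketch (added here) of the construction described above for X = P^r over
% an affine base S: W is the set of rank-one idempotent (r+1) x (r+1)
% matrices, mapped to P^r by taking the image of a matrix.
\[
  W = \bigl\{\, M \in \mathrm{Mat}_{r+1}(S) : M^2 = M,\ \mathrm{rank}\, M = 1 \,\bigr\},
  \qquad
  \pi : W \longrightarrow \mathbb{P}^r_S, \quad M \longmapsto \mathrm{im}(M).
\]
% The fibre of \pi over a line L consists of the projections onto L, which
% form an affine space, so \pi is surjective with affine fibres.
```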
Commercial aircraft only make money when they fly, so they use BIST to minimize the time on the ground needed for repair and to increase the level of safety of the system which contains BIST. Similar arguments apply to military ai" https://en.wikipedia.org/wiki/List%20of%20numerical%20analysis%20topics,"This is a list of numerical analysis topics. General Validated numerics Iterative method Rate of convergence — the speed at which a convergent sequence approaches its limit Order of accuracy — rate at which numerical solution of differential equation converges to exact solution Series acceleration — methods to accelerate the speed of convergence of a series Aitken's delta-squared process — most useful for linearly converging sequences Minimum polynomial extrapolation — for vector sequences Richardson extrapolation Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums Van Wijngaarden transformation — for accelerating the convergence of an alternating series Abramowitz and Stegun — book containing formulas and tables of many special functions Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun Curse of dimensionality Local convergence and global convergence — whether you need a good initial guess to get convergence Superconvergence Discretization Difference quotient Complexity: Computational complexity of mathematical operations Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs Symbolic-numeric computation — combination of symbolic and numeric methods Cultural and historical aspects: History of numerical solution of differential equations using computers Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002 International Workshops on Lattice QCD and Numerical Analysis Timeline of numerical analysis after 1945 General classes of methods: Collocation method — discretizes a continuous equation by requiring it only to hold at certain points Level-set method Level set (data structures) — data structures for representing level sets Sinc numerical methods — methods based on the sinc function, sinc(x) = sin(x) / x ABS methods Error Error analysis (mathematics) Approximat" https://en.wikipedia.org/wiki/Operability,"Operability is the ability to keep a piece of equipment, a system or a whole industrial installation in a safe and reliable functioning condition, according to pre-defined operational requirements. In a computing systems environment with multiple systems this includes the ability of products, systems and business processes to work together to accomplish a common task such as finding and returning availability of inventory for flight. For a gas turbine engine, operability addresses the installed aerodynamic operation of the engine to ensure that it operates with care-free throttle handling without compressor stall or surge or combustor flame-out. There must be no unacceptable loss of power or handling deterioration after ingesting birds, rain and hail or ingesting or accumulating ice. Design and development responsibilities include the components through which the thrust/power-producing flow passes, ie the intake, compressor, combustor, fuel system, turbine and exhaust. 
They also include the software in the computers which control the way the engine changes its speed in response to the actions of the pilot in selecting a start, selecting different idle settings and higher power ratings such as take-off, climb and cruise. The engine has to start to idle and accelerate and decelerate within agreed, or mandated, times while remaining within operating limits (shaft speeds, turbine temperature, combustor casing pressure) over the required aircraft operating envelope. Operability is considered one of the ilities and is closely related to reliability, supportability and maintainability. Operability also refers to whether or not a surgical operation can be performed to treat a patient with a reasonable degree of safety and chance of success." https://en.wikipedia.org/wiki/List%20of%20small%20groups,"The following list in mathematics contains the finite groups of small order up to group isomorphism. Counts For n = 1, 2, … the number of nonisomorphic groups of order n is 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, ... For labeled groups, see . Glossary Each group is named by Small Groups library as Goi, where o is the order of the group, and i is the index used to label the group within that order. Common group names: Zn: the cyclic group of order n (the notation Cn is also used; it is isomorphic to the additive group of Z/nZ) Dihn: the dihedral group of order 2n (often the notation Dn or D2n is used) K4: the Klein four-group of order 4, same as and Dih2 D2n: the dihedral group of order 2n, the same as Dihn (notation used in section List of small non-abelian groups) Sn: the symmetric group of degree n, containing the n! permutations of n elements An: the alternating group of degree n, containing the even permutations of n elements, of order 1 for , and order n!/2 otherwise Dicn or Q4n: the dicyclic group of order 4n Q8: the quaternion group of order 8, also Dic2 The notations Zn and Dihn have the advantage that point groups in three dimensions Cn and Dn do not have the same notation. There are more isometry groups than these two, of the same abstract group type. The notation denotes the direct product of the two groups; Gn denotes the direct product of a group with itself n times. G ⋊ H denotes a semidirect product where H acts on G; this may also depend on the choice of action of H on G. Abelian and simple groups are noted. (For groups of order , the simple groups are precisely the cyclic groups Zn, for prime n.) The equality sign (""="") denotes isomorphism. The identity element in the cycle graphs is represented by the black circle. The lowest order for which the cycle graph does not uniquely represent a group is order 16. In the lists of subgroups, the trivial group and the group itself are not listed. Where there are s" https://en.wikipedia.org/wiki/Journal%20of%20Mathematics%20and%20the%20Arts,"The Journal of Mathematics and the Arts is a quarterly peer-reviewed academic journal that deals with relationship between mathematics and the arts. The journal was established in 2007 and is published by Taylor & Francis. The editor-in-chief is Mara Alagic (Wichita State University, Kansas)." https://en.wikipedia.org/wiki/Integrated%20circuit%20layout%20design%20protection,"Layout designs (topographies) of integrated circuits are a field in the protection of intellectual property. In United States intellectual property law, a ""mask work"" is a two or three-dimensional layout or topography of an integrated circuit (IC or ""chip""), i.e. 
the arrangement on a chip of semiconductor devices such as transistors and passive electronic components such as resistors and interconnections. The layout is called a mask work because, in photolithographic processes, the multiple etched layers within actual ICs are each created using a mask, called the photomask, to permit or block the light at specific locations, sometimes for hundreds of chips on a wafer simultaneously. Because of the functional nature of the mask geometry, the designs cannot be effectively protected under copyright law (except perhaps as decorative art). Similarly, because individual lithographic mask works are not clearly protectable subject matter; they also cannot be effectively protected under patent law, although any processes implemented in the work may be patentable. So since the 1990s, national governments have been granting copyright-like exclusive rights conferring time-limited exclusivity to reproduction of a particular layout. Terms of integrated circuit rights are usually shorter than copyrights applicable on pictures. International law A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The Treaty, signed at Washington on May 26, 1989, is open to member states of the United Nations (UN) World Intellectual Property Organization (WIPO) and to intergovernmental organizations meeting certain criteria. The Treaty has been incorporated by reference into the TRIPS Agreement of the World Trade Organization (WTO), subject to the following modifications: the term of protection is at least 10 (rather than eight) years from the date of f" https://en.wikipedia.org/wiki/Elmore%20delay,"Elmore delay is a simple approximation to the delay through an RC network in an electronic system. It is often used in applications such as logic synthesis, delay calculation, static timing analysis, placement and routing, since it is simple to compute (especially in tree structured networks, which are the vast majority of signal nets within ICs) and is reasonably accurate. Even where it is not accurate, it is usually faithful, in the sense that reducing the Elmore delay will almost always reduce the true delay, so it is still useful in optimization. Elmore delay can be thought of in several ways, all mathematically identical. For tree structured networks, find the delay through each segment as the R (electrical resistance) times the downstream C (electrical capacitance). Sum the delays from the root to the sink. Assume the output is a simple exponential, and find the exponential that has the same integral as the true response. This is also equivalent to moment matching with one moment, since the first moment is a pure exponential. Find a one pole approximation to the true frequency response. This is a first-order Padé approximation. There are many extensions to Elmore delay. It can be extended to upper and lower bounds, to include inductance as well as R and C, to be more accurate (higher order approximations) and so on. See delay calculation for more details and references. See also Delay calculation Static timing analysis William Cronk Elmore" https://en.wikipedia.org/wiki/List%20of%20polynomial%20topics,"This is a list of polynomial topics, by Wikipedia page. See also trigonometric polynomial, list of algebraic geometry topics. Terminology Degree: The maximum exponents among the monomials. Factor: An expression being multiplied. 
Linear factor: A factor of degree one. Coefficient: An expression multiplying one of the monomials of the polynomial. Root (or zero) of a polynomial: Given a polynomial p(x), the x values that satisfy p(x) = 0 are called roots (or zeroes) of the polynomial p. Graphing End behaviour – Concavity – Orientation – Tangency point – Inflection point – Point where concavity changes. Basics Polynomial Coefficient Monomial Polynomial long division Synthetic division Polynomial factorization Rational function Partial fraction Partial fraction decomposition over R Vieta's formulas Integer-valued polynomial Algebraic equation Factor theorem Polynomial remainder theorem Elementary abstract algebra See also Theory of equations below. Polynomial ring Greatest common divisior of two polynomials Symmetric function Homogeneous polynomial Polynomial SOS (sum of squares) Theory of equations Polynomial family Quadratic function Cubic function Quartic function Quintic function Sextic function Septic function Octic function Completing the square Abel–Ruffini theorem Bring radical Binomial theorem Blossom (functional) Root of a function nth root (radical) Surd Square root Methods of computing square roots Cube root Root of unity Constructible number Complex conjugate root theorem Algebraic element Horner scheme Rational root theorem Gauss's lemma (polynomial) Irreducible polynomial Eisenstein's criterion Primitive polynomial Fundamental theorem of algebra Hurwitz polynomial Polynomial transformation Tschirnhaus transformation Galois theory Discriminant of a polynomial Resultant Elimination theory Gröbner basis Regular chain Triangular decomposition Sturm's theorem Descartes' rule of signs Carlitz–Wan conjecture Po" https://en.wikipedia.org/wiki/Census%20of%20Marine%20Life,"The Census of Marine Life was a 10-year, US $650 million scientific initiative, involving a global network of researchers in more than 80 nations, engaged to assess and explain the diversity, distribution, and abundance of life in the oceans. The world's first comprehensive Census of Marine Life — past, present, and future — was released in 2010 in London. Initially supported by funding from the Alfred P. Sloan Foundation, the project was successful in generating many times that initial investment in additional support and substantially increased the baselines of knowledge in often underexplored ocean realms, as well as engaging over 2,700 different researchers for the first time in a global collaborative community united in a common goal, and has been described as ""one of the largest scientific collaborations ever conducted"". Project history According to Jesse Ausubel, Senior Research Associate of the Program for the Human Environment of Rockefeller University and science advisor to the Alfred P. Sloan Foundation, the idea for a ""Census of Marine Life"" originated in conversations between himself and Dr. J. Frederick Grassle, an oceanographer and benthic ecology professor at Rutgers University, in 1996. Grassle had been urged to talk with Ausubel by former colleagues at the Woods Hole Oceanographic Institution and was at that time unaware that Ausubel was also a program manager at the Alfred P. Sloan Foundation, funders of a number of other large scale ""public good"" science-based projects such as the Sloan Digital Sky Survey. 
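Returning to the Elmore delay described a little earlier: for a tree-structured net, the recipe is simply the resistance of each segment on the root-to-sink path multiplied by the total capacitance downstream of that segment, summed along the path. The Python sketch below is an illustration added here; the small RC tree and its values are invented for the example.

```python
# Illustrative Elmore delay on a small, invented RC tree.
# Each node: (parent, segment resistance from parent, node capacitance)
tree = {
    "a": (None, 0.0, 0.0),     # driver / root
    "b": ("a", 100.0, 1e-12),
    "c": ("b", 200.0, 2e-12),  # sink of interest
    "d": ("b", 150.0, 1e-12),  # side branch
}

def downstream_cap(node):
    """Total capacitance at the node and everything fed through it."""
    return tree[node][2] + sum(downstream_cap(child)
                               for child, (parent, _, _) in tree.items()
                               if parent == node)

def elmore_delay(sink):
    """Sum segment resistance times downstream capacitance from root to sink."""
    delay, node = 0.0, sink
    while tree[node][0] is not None:
        parent, r, _ = tree[node]
        delay += r * downstream_cap(node)
        node = parent
    return delay

print(f"Elmore delay to c: {elmore_delay('c') * 1e12:.1f} ps")   # 800.0 ps
```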
Ausubel was instrumental in persuading the Foundation to fund a series of ""feasibility workshops"" over the period 1997-1998 into how the project might be conducted, one result of these workshops being the broadening of the initial concept from a ""Census of the Fishes"" into a comprehensive ""Census of Marine Life"". Results from these workshops, plus associated invited contributions, formed the basis of a special issue of Oceanography magazine i" https://en.wikipedia.org/wiki/Reflected-wave%20switching,"Reflected-wave switching is a signalling technique used in backplane computer buses such as PCI. A backplane computer bus is a type of multilayer printed circuit board that has at least one (almost) solid layer of copper called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line. Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account. When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance. The impedance of the microstrip is its characteristic impedance, which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching. This is the system used in most computer buses predating PCI, such as the VME bus. When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors), the pulse is absorbed and its energy is converted to heat. This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflec" https://en.wikipedia.org/wiki/List%20of%20formal%20systems,"This is a list of formal systems, also known as logical calculi. 
Mathematical Domain relational calculus, a calculus for the relational data model Functional calculus, a way to apply various types of functions to operators Join calculus, a theoretical model for distributed programming Lambda calculus, a formulation of the theory of reflexive functions that has deep connections to computational theory Matrix calculus, a specialized notation for multivariable calculus over spaces of matrices Modal μ-calculus, a common temporal logic used by formal verification methods such as model checking Pi-calculus, a formulation of the theory of concurrent, communicating processes that was invented by Robin Milner Predicate calculus, specifies the rules of inference governing the logic of predicates Propositional calculus, specifies the rules of inference governing the logic of propositions Refinement calculus, a way of refining models of programs into efficient programs Rho calculus, introduced as a general means to uniformly integrate rewriting and lambda calculus Tuple calculus, a calculus for the relational data model, inspired the SQL language Umbral calculus, the combinatorics of certain operations on polynomials Vector calculus (also called vector analysis), comprising specialized notations for multivariable analysis of vectors in an inner-product space Other formal systems Music has also been described as a formal system. See also Formal systems" https://en.wikipedia.org/wiki/Cline%20%28biology%29,"In biology, a cline (from the Greek κλίνειν klinein, meaning ""to lean"") is a measurable gradient in a single characteristic (or biological trait) of a species across its geographical range. First coined by Julian Huxley in 1938, the cline usually has a genetic (e.g. allele frequency, blood type), or phenotypic (e.g. body size, skin pigmentation) character. Clines can show smooth, continuous gradation in a character, or they may show more abrupt changes in the trait from one geographic region to the next. A cline refers to a spatial gradient in a specific, singular trait, rather than a collection of traits; a single population can therefore have as many clines as it has traits, at least in principle. Additionally, Huxley recognised that these multiple independent clines may not act in concordance with each other. For example, it has been observed that in Australia, birds generally become smaller the further towards the north of the country they are found. In contrast, the intensity of their plumage colouration follows a different geographical trajectory, being most vibrant where humidity is highest and becoming less vibrant further into the arid centre of the country. Because of this, clines were defined by Huxley as being an ""auxiliary taxonomic principle""; that is, clinal variation in a species is not awarded taxonomic recognition in the way subspecies or species are. While the terms ""ecotype"" and ""cline"" are sometimes used interchangeably, they do in fact differ in that ""ecotype"" refers to a population which differs from other populations in a number of characters, rather than the single character that varies amongst populations in a cline. Drivers and the evolution of clines Clines are often cited to be the result of two opposing drivers: selection and gene flow (also known as migration). Selection causes adaptation to the local environment, resulting in different genotypes or phenotypes being favoured in different environments. 
This diversifying force is c" https://en.wikipedia.org/wiki/Transistor%20model,"Transistors are simple devices with complicated behavior. In order to ensure the reliable operation of circuits employing transistors, it is necessary to scientifically model the physical phenomena observed in their operation using transistor models. There exists a variety of different models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design. Models for device design The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation, impurity diffusion, oxide growth, annealing, and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device ""geometry"" to the device simulator. ""Geometry"" does not mean readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (see Figure 1 for a memory device with some unusual modeling challenges related to charging the floating gate by an avalanche process). It also refers to details inside the structure, such as the doping profiles after completion of device processing. With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and is used to inform the process designer about any necessary process improvements. Once the process gets close" https://en.wikipedia.org/wiki/Cut-through%20switching,"In computer networking, cut-through switching, also called cut-through forwarding is a method for packet switching systems, wherein the switch starts forwarding a frame (or packet) before the whole frame has been received, normally as soon as the destination address and outgoing interface is determined. Compared to store and forward, this technique reduces latency through the switch and relies on the destination devices for error handling. Pure cut-through switching is only possible when the speed of the outgoing interface is at least equal or higher than the incoming interface speed. Adaptive switching dynamically selects between cut-through and store and forward behaviors based on current network conditions. Cut-through switching is closely associated with wormhole switching. Use in Ethernet When cut-through switching is used in Ethernet the switch is not able to verify the integrity of an incoming frame before forwarding it. The technology was developed by Kalpana, the company that introduced the first Ethernet switch. The primary advantage of cut-through Ethernet switches, compared to store-and-forward Ethernet switches, is lower latency. Cut-through Ethernet switches can support an end-to-end network delay latency of about ten microseconds. End-to-end application latencies below 3 microseconds require specialized hardware such as InfiniBand. 
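Returning to the cut-through switching passage above, the latency difference between store-and-forward and cut-through operation can be sketched as a serialization-delay calculation. The link speed, frame length, and header size below are assumed example values, and the sketch ignores switching-fabric and processing delays; it only illustrates why waiting for the whole frame costs more time per hop than forwarding as soon as the destination address has been read.

# Rough latency comparison of store-and-forward vs cut-through forwarding.
# Frame size, header bytes and link speed are assumed example values.

LINK_BPS = 10_000_000_000   # 10 Gbit/s link (assumed)
FRAME_BYTES = 1500          # full-size Ethernet payload frame (assumed)
HEADER_BYTES = 14           # bytes needed to read the destination MAC and type

def serialization_delay(n_bytes, bps):
    """Time to clock n_bytes onto or off the wire at the given bit rate."""
    return n_bytes * 8 / bps

store_and_forward = serialization_delay(FRAME_BYTES, LINK_BPS)
cut_through = serialization_delay(HEADER_BYTES, LINK_BPS)

print(f"store-and-forward waits ~{store_and_forward * 1e6:.2f} us per hop")
print(f"cut-through waits       ~{cut_through * 1e9:.0f} ns per hop")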
A cut-through switch will forward corrupted frames, whereas a store and forward switch will drop them. Fragment free is a variation on cut-through switching that partially addresses this problem by assuring that collision fragments are not forwarded. Fragment free will hold the frame until the first 64 bytes are read from the source to detect a collision before forwarding. This is only useful if there is a chance of a collision on the source port. The theory here is that frames that are damaged by collisions are often shorter than the minimum valid Ethernet frame size of 64 bytes. With a fragment-free buffer the fir" https://en.wikipedia.org/wiki/97.5th%20percentile%20point,"In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. Its ubiquity is due to the arbitrary but common convention of using confidence intervals with 95% probability in science and frequentist statistics, though other probabilities (90%, 99%, etc.) are sometimes used. This convention seems particularly common in medical statistics, but is also common in other areas of application, such as earth sciences, social sciences and business research. There is no single accepted name for this number; it is also commonly referred to as the ""standard normal deviate"", ""normal score"" or ""Z score"" for the 97.5 percentile point, the .975 point, or just its approximate value, 1.96. If X has a standard normal distribution, i.e. X ~ N(0,1), and as the normal distribution is symmetric, One notation for this number is z.975. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by History The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925: In Table 1 of the same work, he gave the more precise value 1.959964. In 1970, the value truncated to 20 decimal places was calculated to be 1.95996 39845 40054 23552... The commonly used approximate value of 1.96 is therefore accurate to better than one part in 50,000, which is more than adequate for applied work. Some people even use the value of 2 in the place of 1.96, reporting a 95.4% confidence interval as a 95% confidence interval. This is not recommended but is occasionally seen. " https://en.wikipedia.org/wiki/Golden%20angle,"In geometry, the golden angle is the smaller of the two angles created by sectioning the circumference of a circle according to the golden ratio; that is, into two arcs such that the ratio of the length of the smaller arc to the length of the larger arc is the same as the ratio of the length of the larger arc to the full circumference of the circle. Algebraically, let a+b be the circumference of a circle, divided into a longer arc of length a and a smaller arc of length b such that The golden angle is then the angle subtended by the smaller arc of length b. It measures approximately 137.5077640500378546463487 ...° or in radians 2.39996322972865332 ... . 
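Returning to the 97.5th percentile point discussed above, the quoted value 1.959964 can be reproduced directly from the inverse cumulative distribution function of the standard normal distribution. Using SciPy here is an assumption about tooling, not something the text prescribes.

# Recover the 97.5th percentile point of the standard normal distribution.
from scipy.stats import norm

z = norm.ppf(0.975)                 # inverse CDF at 0.975
print(z)                            # 1.959963984540054...

# Check: about 95% of the area lies within +/- z of the mean.
print(norm.cdf(z) - norm.cdf(-z))   # ~0.95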
The name comes from the golden angle's connection to the golden ratio φ; the exact value of the golden angle is or where the equivalences follow from well-known algebraic properties of the golden ratio. As its sine and cosine are transcendental numbers, the golden angle cannot be constructed using a straightedge and compass. Derivation The golden ratio is equal to φ = a/b given the conditions above. Let ƒ be the fraction of the circumference subtended by the golden angle, or equivalently, the golden angle divided by the angular measurement of the circle. But since it follows that This is equivalent to saying that φ 2 golden angles can fit in a circle. The fraction of a circle occupied by the golden angle is therefore The golden angle g can therefore be numerically approximated in degrees as: or in radians as : Golden angle in nature The golden angle plays a significant role in the theory of phyllotaxis; for example, the golden angle is the angle separating the florets on a sunflower. Analysis of the pattern shows that it is highly sensitive to the angle separating the individual primordia, with the Fibonacci angle giving the parastichy with optimal packing density. Mathematical modelling of a plausible physical mechanism for floret development has shown the pattern arising spontaneousl" https://en.wikipedia.org/wiki/I2P,"The Invisible Internet Project (I2P) is an anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication. Anonymous connections are achieved by encrypting the user's traffic (by using end-to-end encryption), and sending it through a volunteer-run network of roughly 55,000 computers distributed around the world. Given the high number of possible paths the traffic can transit, a third party watching a full connection is unlikely. The software that implements this layer is called an ""I2P router"", and a computer running I2P is called an ""I2P node"". I2P is free and open sourced, and is published under multiple licenses. Technical design I2P has been beta software since it started in 2003 as a fork of Freenet. The software's developers emphasize that bugs are likely to occur in the beta version and that peer review has been insufficient to date. However, they believe the code is now reasonably stable and well-developed, and more exposure can help the development of I2P. The network is strictly message-based, like IP, but a library is available to allow reliable streaming communication on top of it (similar to Non-blocking IO-based TCP, although from version 0.6, a new Secure Semi-reliable UDP transport is used). All communication is end-to-end encrypted (in total, four layers of encryption are used when sending a message) through garlic routing, and even the end points (""destinations"") are cryptographic identifiers (essentially a pair of public keys), so that neither senders nor recipients of messages need to reveal their IP address to the other side or to third-party observers. Although many developers had been a part of the Invisible IRC Project (IIP) and Freenet communities, significant differences exist between their designs and concepts. IIP was an anonymous centralized IRC server. Freenet is a censorship-resistant distributed data store. I2P is an anonymous peer-to-peer distributed communicatio" https://en.wikipedia.org/wiki/TCP%20Gender%20Changer,"TCP Gender Changer is a method in computer networking for making an internal TCP/IP based network server accessible beyond its protective firewall. 
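The golden-angle derivation above reduces to evaluating the fraction of the circle subtended by the smaller arc, f = 1/φ^2 = 2 - φ. The short sketch below computes the approximate values quoted in that entry (about 137.5078 degrees, or about 2.39996 radians).

# Numerical value of the golden angle from the golden ratio.
import math

phi = (1 + math.sqrt(5)) / 2       # golden ratio
fraction = 1 / phi**2              # fraction of the circle subtended; equals 2 - phi

golden_angle_deg = 360 * fraction
golden_angle_rad = 2 * math.pi * fraction

print(golden_angle_deg)            # ~137.50776405003785
print(golden_angle_rad)            # ~2.399963229728653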
Mechanism It consists of two nodes: one resides on the internal local area network, where it can access the desired server, and the other runs outside the local area network, where the client can access it. These nodes are respectively called CC (Connect-Connect) and LL (Listen-Listen). The nodes are so named because the Connect-Connect node initiates two connections: one to the Listen-Listen node and one to the actual server. The Listen-Listen node, by contrast, passively listens on two TCP/IP ports: one to receive a connection from CC and the other for an incoming connection from the client. The CC node, which runs inside the network, establishes a control connection to LL and waits for LL's signal to open a connection to the internal server. Upon receiving a client connection, LL signals the CC node to connect to the server; CC then reports the result to LL, and if the connection succeeds, LL keeps the client connection so that the client and server can communicate while CC and LL relay the data back and forth. Use cases One case where it can be very useful is connecting to a desktop machine running VNC behind a firewall, which makes the desktop remotely accessible over the network and beyond the firewall. Another useful scenario is to create a VPN using PPP over SSH, or simply to use SSH to connect to an internal Unix-based server. See also Firewall (computing) LAN Network Security VPN VNC" https://en.wikipedia.org/wiki/Angle-sensitive%20pixel,"An angle-sensitive pixel (ASP) is a CMOS sensor with a sensitivity to incoming light that is sinusoidal in incident angle. Principles of operation ASPs are typically composed of two gratings (a diffraction grating and an analyzer grating) above a single photodiode. ASPs exploit the moire effect and the Talbot effect to gain their sinusoidal light sensitivity. According to the moire effect, if light acted as a particle, at certain incident angles the gaps in the diffraction and analyzer gratings line up, while at other incident angles light passed by the diffraction grating is blocked by the analyzer grating. The amount of light reaching the photodiode would be proportional to a sinusoidal function of incident angle, as the two gratings come in and out of phase with each other with shifting incident angle. The wave nature of light becomes important at small scales such as those in ASPs, meaning a pure-moire model of ASP function is insufficient. However, at half-integer multiples of the Talbot depth, the periodicity of the diffraction grating is recapitulated, and the moire effect is rescued. By building ASPs where the vertical separation between the gratings is approximately equal to a half-integer multiple of the Talbot depth, the sinusoidal sensitivity with incident angle is observed. Applications ASPs can be used in miniature imaging devices. They do not require any focusing elements to achieve sinusoidal incident angle sensitivity, meaning that they can be deployed without a lens to image the near field, or the far field using a Fourier-complete planar Fourier capture array. They can also be used in conjunction with a lens, in which case they perform a depth-sensitive, physics-based wavelet transform of the far-away scene, allowing single-lens 3D photography similar to that of the Lytro camera. 
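The angle-sensitive pixel passage above states that the photodiode signal varies sinusoidally with incident angle. The toy model below only illustrates that functional form; the modulation depth, angular sensitivity, and phase offset are invented parameters, not measured values for any real device.

# Toy model of an angle-sensitive pixel's response: sinusoidal in incident angle.
import numpy as np

def asp_response(theta_deg, i0=1.0, m=0.8, b=12.0, phase_deg=0.0):
    """Relative photodiode signal vs incident angle (degrees).
    i0, m (modulation depth), b (angular sensitivity) and phase are
    illustrative assumptions, not measured device values."""
    theta = np.radians(theta_deg)
    phase = np.radians(phase_deg)
    return i0 * (1 + m * np.cos(b * theta + phase))

angles = np.linspace(-30, 30, 7)
print(dict(zip(angles.round(1), asp_response(angles).round(3))))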
See also Planar Fourier capture array" https://en.wikipedia.org/wiki/FAO%20GM%20Foods%20Platform,"The FAO GM Foods Platform is a web platform where participating countries can share information on their assessments of the safety of genetically modified (recombinant-DNA) foods and feeds based on the Codex Alimentarius. It also allows for sharing of assessments of low-level GMO contamination (LLP, low-level presence). The platform was set up by the Food and Agriculture Organization of the United Nations, and was launched at the FAO headquarters in Rome on 1 July 2013. The information uploaded to the platform is freely available to be read." https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20classical%20mechanics,"This is a list of mathematical topics in classical mechanics, by Wikipedia page. See also list of variational topics, correspondence principle. Newtonian physics Newton's laws of motion Inertia, Kinematics, rigid body Momentum, kinetic energy Parallelogram of force Circular motion Rotational speed Angular speed Angular momentum torque angular acceleration moment of inertia parallel axes rule perpendicular axes rule stretch rule centripetal force, centrifugal force, Reactive centrifugal force Laplace–Runge–Lenz vector Euler's disk elastic potential energy Mechanical equilibrium D'Alembert's principle Degrees of freedom (physics and chemistry) Frame of reference Inertial frame of reference Galilean transformation Principle of relativity Conservation laws Conservation of momentum Conservation of linear momentum Conservation of angular momentum Conservation of energy Potential energy Conservative force Conservation of mass Law of universal gravitation Projectile motion Kepler's laws of planetary motion Escape velocity Potential well Weightlessness Lagrangian point N-body problem Kolmogorov-Arnold-Moser theorem Virial theorem Gravitational binding energy Speed of gravity Newtonian limit Hill sphere Roche lobe Roche limit Hamiltonian mechanics Phase space Symplectic manifold Liouville's theorem (Hamiltonian) Poisson bracket Poisson algebra Poisson manifold Antibracket algebra Hamiltonian constraint Moment map Contact geometry Analysis of flows Nambu mechanics Lagrangian mechanics Action (physics) Lagrangian Euler–Lagrange equations Noether's theorem Classical mechanics" https://en.wikipedia.org/wiki/Mass%20versus%20weight,"In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength). In scientific contexts, mass is the amount of ""matter"" in an object (though ""matter"" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass. Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. 
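The mass-versus-weight discussion above and below comes down to the formula W = m g, with the gravitational field strength g depending on where the object is. The sketch below uses approximate textbook values for g (the Mars and Saturn figures are rough approximations) purely to show that the same mass yields different weights.

# Same mass, different weight: W = m * g.
mass_kg = 1.0

g = {            # approximate surface gravitational field strengths, m/s^2 (assumed)
    "Earth": 9.81,
    "Mars": 3.71,
    "Saturn": 10.44,
}

for body, field in g.items():
    print(f"{body}: weight of {mass_kg} kg is {mass_kg * field:.2f} N")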
But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the ""weightless object"" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area. A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an" https://en.wikipedia.org/wiki/Logic%20built-in%20self-test,"Logic built-in self-test (or LBIST) is a form of built-in self-test (BIST) in which hardware and/or software is built into integrated circuits allowing them to test their own operation, as opposed to reliance on external automated test equipment. Advantages The main advantage of LBIST is the ability to test internal circuits having no direct connections to external pins, and thus unreachable by external automated test equipment. Another advantage is the ability to trigger the LBIST of an integrated circuit while running a built-in self test or power-on self test of the finished product. Disadvantages LBIST that requires additional circuitry (or read-only memory) increases the cost of the integrated circuit. LBIST that only requires temporary changes to programmable logic or rewritable memory avoids this extra cost, but requires more time to first program in the BIST and then to remove it and program in the final configuration. Another disadvantage of LBIST is the possibility that the on-chip testing hardware itself can fail; external automated test equipment tests the integrated circuit with known-good test circuitry. Related technologies Other, related technologies are MBIST (a BIST optimized for testing internal memory) and ABIST (either a BIST optimized for testing arrays or a BIST that is optimized for testing analog circuitry). The two uses may be distinguished by considering whether the integrated circuit being tested has an internal array or analog functions. See also Built-in self-test Built-in test equipment Design for test Power-on self-test External links Built-in Self Test (BIST) Hardware Diagnostic Self Tests BIST for Analog Weenies Integrated circuits" https://en.wikipedia.org/wiki/Paratype,"In zoology and botany, a paratype is a specimen of an organism that helps define what the scientific name of a species and other taxon actually represents, but it is not the holotype (and in botany is also neither an isotype nor a syntype). Often there is more than one paratype. Paratypes are usually held in museum research collections. The exact meaning of the term paratype when it is used in zoology is not the same as the meaning when it is used in botany. In both cases however, this term is used in conjunction with holotype. Zoology In zoological nomenclature, a paratype is officially defined as ""Each specimen of a type series other than the holotype."" In turn, this definition relies on the definition of a ""type series"". 
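The logic built-in self-test passage above does not specify how the on-chip test hardware generates patterns or checks responses, so the following sketch should be read as a generic illustration of one very common approach rather than as anything described in the text: a linear-feedback shift register (LFSR) supplies pseudo-random test patterns, the responses of the logic under test are compacted into a signature, and the signature is compared with a known-good value. The circuit, LFSR taps, and compaction scheme are all invented for the example.

# Toy logic BIST: LFSR pattern generator plus simple response compaction.
# Circuit, LFSR polynomial and signature scheme are illustrative assumptions.

def lfsr_patterns(seed, taps, width, count):
    """Generate pseudo-random test patterns from a Fibonacci-style LFSR."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def circuit(x, stuck_bit=None):
    """A small combinational 'circuit under test' (made up for the example)."""
    y = ((x & 0xF) + ((x >> 4) & 0xF)) ^ (x >> 2)
    if stuck_bit is not None:          # model a stuck-at-0 fault on one output bit
        y &= ~(1 << stuck_bit)
    return y & 0xFF

def signature(responses):
    """Compact responses into one value (rotate-and-XOR compactor)."""
    sig = 0
    for r in responses:
        sig = ((sig << 1) | (sig >> 7)) & 0xFF   # rotate the 8-bit signature left
        sig ^= r
    return sig

patterns = list(lfsr_patterns(seed=0xACE1, taps=(0, 2, 3, 5), width=16, count=64))

golden = signature(circuit(p) for p in patterns)                  # fault-free reference
faulty = signature(circuit(p, stuck_bit=3) for p in patterns)     # with an injected fault
mismatches = sum(circuit(p) != circuit(p, stuck_bit=3) for p in patterns)

# Differing signatures indicate a detected fault; signature compaction always
# carries a small risk of aliasing (different responses, same signature).
print(f"{mismatches} of {len(patterns)} responses differ")
print(f"golden signature: {golden:#04x}, faulty signature: {faulty:#04x}")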
A type series is the material (specimens of organisms) that was cited in the original publication of the new species or subspecies, and was not excluded from being type material by the author (this exclusion can be implicit, e.g., if an author mentions ""paratypes"" and then subsequently mentions ""other material examined"", the latter are not included in the type series), nor referred to as a variant, or only dubiously included in the taxon (e.g., a statement such as ""I have before me a specimen which agrees in most respects with the remainder of the type series, though it may yet prove to be distinct"" would exclude this specimen from the type series). Thus, in a type series of five specimens, if one is the holotype, the other four will be paratypes. A paratype may originate from a different locality than the holotype. A paratype cannot become a lectotype, though it is eligible (and often desirable) for designation as a neotype. The International Code of Zoological Nomenclature (ICZN) has not always required a type specimen, but any species or subspecies newly described after the end of 1999 must have a designated holotype or syntypes. A related term is allotype, a term that indicates a specimen that exemplifies the opposite sex of the holoty" https://en.wikipedia.org/wiki/A%20calorie%20is%20a%20calorie,"""Calorie In Calorie Out"" is a tautology used to convey the thermodynamic concept that a ""calorie"" is a sufficient way to describe the energy content of food. History In 1878, German nutritionist Max Rubner crafted what he called the ""isodynamic law"". The law claims that the basis of nutrition is the exchange of energy, and was applied to the study of obesity in the early 1900s by Carl von Noorden. Von Noorden had two theories about what caused people to develop obesity. The first simply avowed Rubner's notion that ""a calorie is a calorie"". The second theorized that obesity development depends on how the body partitions calories for either use or storage. Since 1925, the calorie has been defined in terms of the joule; the current definition of the calorie was formally adopted in 1948. The related concept of ""calorie in, calorie out"" might be contested, despite having become a commonly held belief in nutritionism. Calorie counting Calorie amounts found on food labels are based on the Atwater system. The accuracy of the system is disputed, despite no real proposed alternatives. For example, a 2012 study by a USDA scientist concluded that the measured energy content of a sample of almonds was 32% lower than the estimated Atwater value. The driving mechanism behind caloric intake is absorption, which occurs largely in the small intestine and distributes nutrients to the circulatory and lymphatic capillaries by means of osmosis, diffusion and active transport. Fat, in particular is emulsified by bile produced by the liver and stored in the gallbladder where it is released to the small intestine via the bile duct. A relatively lesser amount of absorption—composed primarily of water—occurs in the large intestine. A kilocalorie is the equivalent of 1000 calories or one dietary Calorie, which contains 4184 joules of energy. The human body is a highly complex biochemical system that undergoes processes which regulate energy balance. 
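The calorie-counting passage above mentions the Atwater system used on food labels and the 4184 J definition of the dietary Calorie. The sketch below applies the general Atwater factors (about 4 kcal per gram of protein or carbohydrate and 9 kcal per gram of fat) to an invented 100 g serving; as the almond study cited above suggests, such label estimates can differ noticeably from measured energy content.

# Label-style energy estimate using the general Atwater factors.
ATWATER_KCAL_PER_G = {"protein": 4.0, "carbohydrate": 4.0, "fat": 9.0}
KJ_PER_KCAL = 4.184          # 1 dietary Calorie (kcal) = 4184 J

# Hypothetical 100 g serving (values invented for the example).
serving_g = {"protein": 21.0, "carbohydrate": 22.0, "fat": 50.0}

kcal = sum(ATWATER_KCAL_PER_G[k] * grams for k, grams in serving_g.items())
print(f"Atwater estimate: {kcal:.0f} kcal ({kcal * KJ_PER_KCAL:.0f} kJ) per 100 g")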
The metabolic pathways for protein are" https://en.wikipedia.org/wiki/Energy%20efficient%20transformer,"In a typical power distribution grid, electric transformer power loss typically contributes to about 40-50% of the total transmission and distribution loss. Energy efficient transformers are therefore an important means to reduce transmission and distribution loss. With the improvement of electrical steel (silicon steel) properties, the losses of a transformer in 2010 can be half that of a similar transformer in the 1970s. With new magnetic materials, it is possible to achieve even higher efficiency. The amorphous metal transformer is a modern example." https://en.wikipedia.org/wiki/No%20free%20lunch%20theorem,"In mathematical folklore, the ""no free lunch"" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready, alludes to the saying ""no such thing as a free lunch"", that is, there are no easy shortcuts to success. It appeared in the 1997 ""No Free Lunch Theorems for Optimization"". Wolpert had previously derived no free lunch theorems for machine learning (statistical inference). In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper ""state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems"". The ""no free lunch"" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. In terms of how the NFL theorem is used in the context of the research area, the no free lunch in search and optimization is a field that is dedicated for purposes of mathematically analyzing data for statistical identity, particularly search and optimization. While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research. Example Posit a toy universe that exists for exactly two days and on each day contains exactly one object, a square or a triangle. The universe has exactly four possible histories: (square, triangle): the universe contains a square on day 1, and a triangle on day 2 (square, square) (triangle, triangle) (triangle, square) Any prediction strategy that succeeds for history #2, by predicting a square on day 2 if there is a square on day 1, will fail on history #1, and vice versa. If all histories are equally likely, then any prediction strategy will score the same, with the same accuracy rate of 0.5. Origin Wolpert and Macready give two NFL theorems that are closely related to the" https://en.wikipedia.org/wiki/Outline%20of%20computer%20engineering,"The following outline is provided as an overview of and topical guide to computer engineering: Computer engineering – discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. 
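The toy universe described in the no-free-lunch passage above can be checked exhaustively: enumerate all four histories and every deterministic prediction strategy (a rule mapping the day-1 object to a day-2 guess), and each strategy ends up with the same 0.5 average accuracy when the histories are equally likely.

# Exhaustive check of the "no free lunch" toy universe described above.
from itertools import product

objects = ("square", "triangle")
histories = list(product(objects, repeat=2))      # all four two-day histories

# A deterministic strategy maps the day-1 observation to a day-2 prediction.
strategies = [dict(zip(objects, guesses)) for guesses in product(objects, repeat=2)]

for strategy in strategies:
    accuracy = sum(strategy[d1] == d2 for d1, d2 in histories) / len(histories)
    print(strategy, "average accuracy:", accuracy)   # 0.5 for every strategy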
This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture. Main articles on computer engineering Computer Computer architecture Computer hardware Computer software Computer science Engineering Electrical engineering Software engineering History of computer engineering General Time line of computing 2400 BC - 1949 - 1950-1979 - 1980-1989 - 1990-1999 - 2000-2009 History of computing hardware up to third generation (1960s) History of computing hardware from 1960s to current History of computer hardware in Eastern Bloc countries History of personal computers History of laptops History of software engineering History of compiler writing History of the Internet History of the World Wide Web History of video games History of the graphical user interface Timeline of computing Timeline of operating systems Timeline of programming languages Timeline of artificial intelligence Timeline of cryptography Timeline of algorithms Timeline of quantum computing Product specific Timeline of DOS operating systems Classic Mac OS History of macOS History of Microsoft Windows Timeline of the Apple II series Timeline of Apple products Timeline of file sharing Timeline of OpenBSD Hardware Digital" https://en.wikipedia.org/wiki/List%20of%20Martin%20Gardner%20Mathematical%20Games%20columns,"Over a period of 24 years (January 1957 – December 1980), Martin Gardner wrote 288 consecutive monthly ""Mathematical Games"" columns for Scientific American magazine. During the next years, through June 1986, Gardner wrote 9 more columns, bringing his total to 297. During this period other authors wrote most of the columns. In 1981, Gardner's column alternated with a new column by Douglas Hofstadter called ""Metamagical Themas"" (an anagram of ""Mathematical Games""). The table below lists Gardner's columns. Twelve of Gardner's columns provided the cover art for that month's magazine, indicated by ""[cover]"" in the table with a hyperlink to the cover. Other articles by Gardner Gardner wrote 5 other articles for Scientific American. His flexagon article in December 1956 was in all but name the first article in the series of Mathematical Games columns and led directly to the series which began the following month. These five articles are listed below." https://en.wikipedia.org/wiki/Negative%20feedback,"Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. A classic example of negative feedback is a heating system thermostat — when the temperature gets high enough, the heater is turned OFF. When the temperature gets too cold, the heat is turned back ON. In each case the ""feedback"" generated by the thermostat ""negates"" the trend. The opposite tendency — called positive feedback — is when a trend is positively reinforced, creating amplification, such as the squealing ""feedback"" loop that can occur when a mic is brought too close to a speaker which is amplifying the very sounds the mic is picking up, or the runaway heating and ultimate meltdown of a nuclear reactor. Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. 
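The thermostat described in the negative-feedback passage above is the classic bang-bang controller. The short simulation below uses assumed heating and cooling rates, a setpoint, and a small hysteresis band to show the temperature being pulled back toward the setpoint whenever it drifts away.

# Minimal bang-bang (thermostat-style) negative feedback simulation.
# Heating/cooling rates, setpoint and hysteresis are assumed example values.

setpoint, hysteresis = 20.0, 0.5         # degrees C
temp, outside = 15.0, 10.0
heater_on = False

for minute in range(120):
    if temp <= setpoint - hysteresis:
        heater_on = True                 # too cold: feedback turns the heat ON
    elif temp >= setpoint + hysteresis:
        heater_on = False                # warm enough: feedback turns the heat OFF

    heat_gain = 0.3 if heater_on else 0.0
    heat_loss = 0.02 * (temp - outside)  # loss proportional to temperature difference
    temp += heat_gain - heat_loss

    if minute % 20 == 0:
        print(f"t={minute:3d} min  temp={temp:5.2f} C  heater={'on' if heater_on else 'off'}")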
Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive. Negative feedback is widely used in mechanical and electronic engineering, and also within living organisms, and can be seen in many other fields from chemistry and economics to physical systems such as the climate. General negative feedback systems are studied in control systems engineering. Negative feedback loops also play an integral role in maintaining the atmospheric balance in various systems on Earth. One such feedback system is the interaction between solar radiation, cloud cover, and planet temperature. General description In many physical and biological systems, qualitatively different influences can oppose each other. For example, in biochemistry, one set of chemicals drives the syst" https://en.wikipedia.org/wiki/Noiselet,"Noiselets are functions which give the worst-case behavior for Haar wavelet packet analysis. In other words, noiselets are totally incompressible by Haar wavelet packet analysis. Like the canonical and Fourier bases, which are mutually incoherent, noiselets are perfectly incoherent with the Haar basis. In addition, they have a fast algorithm for implementation, making them useful as a sampling basis for signals that are sparse in the Haar domain. Definition The mother basis function is defined as: The family of noiselets is constructed recursively as follows: Property of fn The functions fn form an orthogonal basis for , where is the space of all possible approximations at the resolution of functions in . For each , Matrix construction of noiselets Noiselets can be extended and discretized. The extended function is defined as follows: Using the extended noiselet , we can generate the noiselet matrix , where n is a power of two: Here denotes the Kronecker product. Suppose , we can find that is equal . The elements of the noiselet matrices take discrete values from one of two four-element sets: 2D noiselet transform 2D noiselet transforms are obtained through the Kronecker product of 1D noiselet transforms: Applications Noiselets have several properties that make them well suited to applications: The noiselet matrix can be derived in . Noiselets completely spread out the spectrum and are perfectly incoherent with Haar wavelets. The noiselet transform is conjugate symmetric and unitary. The complementarity of wavelets and noiselets means that noiselets can be used in compressed sensing to reconstruct a signal (such as an image) which has a compact representation in wavelets. MRI data can be acquired in the noiselet domain, and, subsequently, images can be reconstructed from undersampled data using compressive-sensing reconstruction." https://en.wikipedia.org/wiki/The%20Swallow%27s%20Tail,"The Swallow's Tail — Series of Catastrophes was Salvador Dalí's last painting. It was completed in May 1983, as the final part of a series based on the mathematical catastrophe theory of René Thom. Thom suggested that in four-dimensional phenomena, there are seven possible equilibrium surfaces, and therefore seven possible discontinuities, or ""elementary catastrophes"": fold, cusp, swallowtail, butterfly, hyperbolic umbilic, elliptic umbilic, and parabolic umbilic. ""The shape of Dalí's Swallow's Tail is taken directly from Thom's four-dimensional graph of the same title, combined with a second catastrophe graph, the s-curve that Thom dubbed 'the cusp'. 
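The defining formulas of the noiselet passage above did not survive extraction, so the sketch below is offered only as a plausible illustration of the matrix construction it describes: a 2x2 complex base block consistent with the standard noiselet recursion is raised to a Kronecker power, giving a unitary matrix whose entries take values in a small discrete set. The normalization and row ordering are choices made for this sketch, not details taken from the text.

# Sketch of a noiselet-style matrix built from Kronecker products.
# The 2x2 base block follows the standard noiselet recursion; treat the exact
# normalization and row ordering here as choices of this illustration.
import numpy as np

BASE = 0.5 * np.array([[1 - 1j, 1 + 1j],
                       [1 + 1j, 1 - 1j]])

def noiselet_matrix(k):
    """Return a 2**k x 2**k unitary noiselet-style matrix."""
    n = np.array([[1.0 + 0j]])
    for _ in range(k):
        n = np.kron(BASE, n)
    return n

N = noiselet_matrix(3)                              # 8 x 8
print(np.allclose(N @ N.conj().T, np.eye(8)))       # True: the matrix is unitary
print(len(set(np.round(N.flatten(), 6))), "distinct entry values")   # a small discrete set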
Thom's model is presented alongside the elegant curves of a cello and the instrument's f-holes, which, especially as they lack the small pointed side-cuts of a traditional f-hole, equally connote the mathematical symbol for an integral in calculus: ∫."" In his 1979 speech, Gala, Velázquez and the Golden Fleece, presented upon his 1979 induction into the prestigious Académie des Beaux-Arts of the Institut de France, Dalí described Thom's theory of catastrophes as ""the most beautiful aesthetic theory in the world"". He also recollected his first and only meeting with René Thom, at which Thom purportedly told Dalí that he was studying tectonic plates; this provoked Dalí to question Thom about the railway station at Perpignan, France (near the Spanish border), which the artist had declared in the 1960s to be the center of the universe. Thom reportedly replied, ""I can assure you that Spain pivoted precisely — not in the area of — but exactly there where the Railway Station in Perpignan stands today"". Dalí was immediately enraptured by Thom's statement, influencing his painting Topological Abduction of Europe — Homage to René Thom, the lower left corner of which features an equation closely linked to the ""swallow's tail"": an illustration of the graph, and the term queue d'aronde. The seismic fracture that transver" https://en.wikipedia.org/wiki/Degeneracy%20%28mathematics%29,"In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class, and the term degeneracy is the condition of being a degenerate case. The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a ""line segment"". Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate. For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. 
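The degeneracy passage above uses the degenerate triangle, a "triangle" whose vertices are collinear and which collapses to a line segment, as its running example. The sketch below flags that case by computing the signed area from the cross product of two edge vectors; the numeric tolerance is an arbitrary choice.

# Detect a degenerate triangle: three collinear points give zero area.
def triangle_area(a, b, c):
    """Signed area via the cross product of two edge vectors."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def is_degenerate(a, b, c, tol=1e-12):
    return abs(triangle_area(a, b, c)) < tol

print(is_degenerate((0, 0), (1, 1), (2, 2)))   # True: collapses to a line segment
print(is_degenerate((0, 0), (1, 0), (0, 1)))   # False: a proper triangle of area 0.5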
This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if need" https://en.wikipedia.org/wiki/Toy%20theorem,"In mathematics, a toy theorem is a simplified instance (special case) of a more general theorem, which can be useful in providing a handy representation of the general theorem, or a framework for proving the general theorem. One way of obtaining a toy theorem is by introducing some simplifying assumptions in a theorem. In many cases, a toy theorem is used to illustrate the claim of a theorem, while in other cases, studying the proofs of a toy theorem (derived from a non-trivial theorem) can provide insight that would be hard to obtain otherwise. Toy theorems can also have educational value as well. For example, after presenting a theorem (with, say, a highly non-trivial proof), one can sometimes give some assurance that the theorem really holds, by proving a toy version of the theorem. Examples A toy theorem of the Brouwer fixed-point theorem is obtained by restricting the dimension to one. In this case, the Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem. Another example of toy theorem is Rolle's theorem, which is obtained from the mean value theorem by equating the function values at the endpoints. See also Corollary Fundamental theorem Lemma (mathematics) Toy model" https://en.wikipedia.org/wiki/TMS6100,"The Texas Instruments TMS6100 is a 1 or 4-bit serial mask (factory)-programmed read-only memory IC. It is a companion chip to the TMS5100, CD2802, TMS5110, (rarely) TMS5200, and (rarely) TMS5220 speech synthesizer ICs, and was mask-programmed with LPC data required for a specific product. It holds 128Kib (16KiB) of data, and is mask-programmed with a start address for said data on a 16KiB boundary. It is also mask-programmable whether the /CE line needs to be high or low to activate, and also what the two (or four) 'internal' CE bits need to be set to activate, effectively making the total addressable area 18 bits. Finally, it is mask-programmable whether the bits are read out 1-bit serially or 4 at a time. TMS6125 The TMS6125 is a smaller, 32Kib (4KiB) version of effectively the same chip, with some minor changes to the 'address load' command format to reflect its smaller size. Texas Instruments calls both of these serial roms (TMS6100 and TMS6125) ""VSM""s (Voice Synthesis Memory) on their datasheets and literature, and they will be referred to as such for the rest of this article. Both VSMs use 'local addressing', meaning the chip keeps track of its own address pointer once loaded. Hence every bit in the chip can be sequentially read out, even though internally the chip stores data in 8-bit bytes. (For the following section, CE stands for ""Chip Enable"" and is used as a way to enable one specific VSM) Commands The VSM has supports 4 basic commands, based on two input pins called 'M0' and 'M1': no operation/idle: this command tells the chip to 'do nothing' or 'continue doing what was being done before'. load address: this command parallel-loads 4 bits from the data bus. to fully load an address, this command must be executed 5 times in sequence, for a load of a 20 bit block (LSB-first 14 bit address, 4 CE bits, and two unused bits, effectively 18 address bits) into the internal address pointer. 
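The toy-theorem passage above notes that the one-dimensional Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem. The sketch below makes that concrete: for a continuous f mapping [0, 1] into itself, g(x) = f(x) - x changes sign across the interval, so bisection homes in on a fixed point. The example function is an arbitrary choice.

# 1D Brouwer fixed point via the intermediate value theorem (bisection on g(x) = f(x) - x).
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    """Find x in [lo, hi] with f(x) = x, for continuous f mapping [lo, hi] into itself."""
    g = lambda x: f(x) - x
    if g(lo) == 0:
        return lo
    if g(hi) == 0:
        return hi
    # g(lo) >= 0 and g(hi) <= 0 because f maps the interval into itself.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = lambda x: math.cos(x)        # a continuous map of [0, 1] into itself (example)
x = fixed_point(f)
print(x, f(x))                   # x ~ 0.739085..., and f(x) ~ x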
On the TMS6125 the command must be executed 4 times instead, and o" https://en.wikipedia.org/wiki/Lillian%20Rosanoff%20Lieber,"Lillian Rosanoff Lieber (July 26, 1886 in Nicolaiev, Russian Empire – July 11, 1986 in Queens, New York) was a Russian-American mathematician and popular author. She often teamed up with her illustrator husband, Hugh Gray Lieber, to produce works. Life and career Early life and education Lieber was one of four children of Abraham H. and Clara (Bercinskaya) Rosanoff. Her brothers were Denver publisher Joseph Rosenberg, psychiatrist Aaron Rosanoff, and chemist Martin André Rosanoff. Aaron and Martin changed their names to sound more Russian, less Jewish. Lieber moved to the US with her family in 1891. She received her A.B. from Barnard College in 1908, her M.A. from Columbia University in 1911, and her Ph.D. (in chemistry) from Clark University in 1914, under Martin's direction; at Clark, Solomon Lefschetz was a classmate. She married Hugh Gray Lieber on October 27, 1926. Career After teaching at Hunter College from 1908 to 1910, and in the New York City high school system (1910–1912, 1914–1915), she became a Research Fellow at Bryn Mawr College from 1915 to 1917; she then went on to teach at Wells College from 1917 to 1918 as Instructor of Physics (also acting as head of the physics department), and at the Connecticut College for Women (1918 to 1920). She joined the mathematics department at Long Island University (LIU) in Brooklyn, New York (LIU Brooklyn) in 1934, became department chair in 1945 (taking over from Hugh when he became Professor, and Chair, of Art at LIU ), and was made a full professor in 1947, until her retirement in 1954; she was appointed director of LIU's Galois Institute of Mathematics (later the Galois Institute of Mathematics and Art) (named for Évariste Galois) in 1934. Over her career she published some 17 books, which were written in a unique, free-verse style and illustrated with whimsical line drawings by her husband. Her highly accessible writings were praised by no less than Albert Einstein, Cassius Jackson Keyser, Eric Temple Bell, " https://en.wikipedia.org/wiki/Shadow%20stack,"In computer security, a shadow stack is a mechanism for protecting a procedure's stored return address, such as from a stack buffer overflow. The shadow stack itself is a second, separate stack that ""shadows"" the program call stack. In the function prologue, a function stores its return address to both the call stack and the shadow stack. In the function epilogue, a function loads the return address from both the call stack and the shadow stack, and then compares them. If the two records of the return address differ, then an attack is detected; the typical course of action is simply to terminate the program or alert system administrators about a possible intrusion attempt. A shadow stack is similar to stack canaries in that both mechanisms aim to maintain the control-flow integrity of the protected program by detecting attacks that tamper the stored return address by an attacker during an exploitation attempt. Shadow stacks can be implemented by recompiling programs with modified prologues and epilogues, by dynamic binary rewriting techniques to achieve the same effect, or with hardware support. Unlike the call stack, which also stores local program variables, passed arguments, spilled registers and other data, the shadow stack typically just stores a second copy of a function's return address. 
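The shadow-stack passage above and below describes saving the return address twice and comparing the copies in the function epilogue. The toy model below mimics that check with two Python lists; it is a conceptual illustration of the mechanism, not how a compiler or CPU actually implements it.

# Conceptual model of a shadow stack: detect a tampered return address.
call_stack, shadow_stack = [], []

def call(return_addr):
    """Function prologue: save the return address on both stacks."""
    call_stack.append(return_addr)
    shadow_stack.append(return_addr)

def ret():
    """Function epilogue: compare both copies before 'returning'."""
    addr, shadow = call_stack.pop(), shadow_stack.pop()
    if addr != shadow:
        raise RuntimeError("return address mismatch: possible stack overflow attack")
    return addr

call(0x401000)
call(0x401234)
call_stack[-1] = 0xDEADBEEF      # simulate a stack buffer overflow overwriting the return address

try:
    ret()
except RuntimeError as e:
    print("detected:", e)        # the shadow copy no longer matches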
Shadow stacks provide more protection for return addresses than stack canaries, which rely on the secrecy of the canary value and are vulnerable to non-contiguous write attacks. Shadow stacks themselves can be protected with guard pages or with information hiding, such that an attacker would also need to locate the shadow stack to overwrite a return address stored there. Like stack canaries, shadow stacks do not protect stack data other than return addresses, and so offer incomplete protection against security vulnerabilities that result from memory safety errors. In 2016, Intel announced upcoming hardware support for shadow stacks with their Control-flow Enforcement Tech" https://en.wikipedia.org/wiki/Fault%20coverage,"Fault coverage refers to the percentage of some type of fault that can be detected during the test of any engineered system. High fault coverage is particularly valuable during manufacturing test, and techniques such as Design For Test (DFT) and automatic test pattern generation are used to increase it. In electronics for example, stuck-at fault coverage is measured by sticking each pin of the hardware model at logic '0' and logic '1', respectively, and running the test vectors. If at least one of the outputs differs from what is to be expected, the fault is said to be detected. Conceptually, the total number of simulation runs is twice the number of pins (since each pin is stuck in one of two ways, and both faults should be detected). However, there are many optimizations that can reduce the needed computation. In particular, often many non-interacting faults can be simulated in one run, and each simulation can be terminated as soon as a fault is detected. A fault coverage test passes when at least a specified percentage of all possible faults can be detected. If it does not pass, at least three options are possible. First, the designer can augment or otherwise improve the vector set, perhaps by using a more effective automatic test pattern generation tool. Second, the circuit may be re-defined for better fault detectibility (improved controllability and observability). Third, the designer may simply accept the lower coverage. Test coverage (computing) The term test coverage used in the context of programming / software engineering, refers to measuring how much a software program has been exercised by tests. Coverage is a means of determining the rigour with which the question underlying the test has been answered. There are many kinds of test coverage: code coverage feature coverage, scenario coverage, screen item coverage, requirements coverage, model coverage. Each of these coverage types assumes that some kind of baseline exists which defin" https://en.wikipedia.org/wiki/Glossary%20of%20Principia%20Mathematica,"This is a list of the notation used in Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913). The second (but not the first) edition of Volume I has a list of notation used at the end. Glossary This is a glossary of some of the technical terms in Principia Mathematica that are no longer widely used or whose meaning has changed. 
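The fault-coverage passage above describes sticking each node at logic 0 and logic 1 and asking whether any test vector exposes a difference at the outputs. The sketch below does exactly that for a tiny two-gate circuit and a small vector set, both invented for the example, and reports the resulting stuck-at coverage.

# Toy stuck-at fault coverage calculation for a two-gate circuit (assumed netlist).
NETS = ["a", "b", "c", "n1", "out"]

def simulate(a, b, c, fault=None):
    """Evaluate the circuit; 'fault' is (net_name, stuck_value) or None."""
    v = {"a": a, "b": b, "c": c}
    if fault and fault[0] in v:
        v[fault[0]] = fault[1]
    v["n1"] = v["a"] & v["b"]            # first gate: AND
    if fault and fault[0] == "n1":
        v["n1"] = fault[1]
    v["out"] = v["n1"] | v["c"]          # second gate: OR
    if fault and fault[0] == "out":
        v["out"] = fault[1]
    return v["out"]

# Candidate test set (assumed); note it never drives c = 1,
# so the c stuck-at-0 fault will go undetected.
vectors = [(1, 1, 0), (0, 1, 0), (1, 0, 0)]
faults = [(net, sv) for net in NETS for sv in (0, 1)]   # every net stuck-at-0 and stuck-at-1

detected = sum(
    any(simulate(*vec) != simulate(*vec, fault=f) for vec in vectors)
    for f in faults
)
print(f"stuck-at fault coverage: {detected}/{len(faults)} = {100 * detected / len(faults):.0f}%")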
Symbols introduced in Principia Mathematica, Volume I Symbols introduced in Principia Mathematica, Volume II Symbols introduced in Principia Mathematica, Volume III See also Glossary of set theory Notes" https://en.wikipedia.org/wiki/Orbifold%20notation,"In geometry, orbifold notation (or orbifold signature) is a system, invented by the mathematician William Thurston and promoted by John Conway, for representing types of symmetry groups in two-dimensional spaces of constant curvature. The advantage of the notation is that it describes these groups in a way which indicates many of the groups' properties: in particular, it follows William Thurston in describing the orbifold obtained by taking the quotient of Euclidean space by the group under consideration. Groups representable in this notation include the point groups on the sphere (), the frieze groups and wallpaper groups of the Euclidean plane (), and their analogues on the hyperbolic plane (). Definition of the notation The following types of Euclidean transformation can occur in a group described by orbifold notation: reflection through a line (or plane) translation by a vector rotation of finite order around a point infinite rotation around a line in 3-space glide-reflection, i.e. reflection followed by translation. All translations which occur are assumed to form a discrete subgroup of the group symmetries being described. Each group is denoted in orbifold notation by a finite string made up from the following symbols: positive integers the infinity symbol, the asterisk, * the symbol o (a solid circle in older documents), which is called a wonder and also a handle because it topologically represents a torus (1-handle) closed surface. Patterns repeat by two translation. the symbol (an open circle in older documents), which is called a miracle and represents a topological crosscap where a pattern repeats as a mirror image without crossing a mirror line. A string written in boldface represents a group of symmetries of Euclidean 3-space. A string not written in boldface represents a group of symmetries of the Euclidean plane, which is assumed to contain two independent translations. Each symbol corresponds to a distinct transformation: an" https://en.wikipedia.org/wiki/Resilient%20control%20systems,"In our modern society, computerized or digital control systems have been used to reliably automate many of the industrial operations that we take for granted, from the power plant to the automobiles we drive. However, the complexity of these systems and how the designers integrate them, the roles and responsibilities of the humans that interact with the systems, and the cyber security of these highly networked systems have led to a new paradigm in research philosophy for next-generation control systems. Resilient Control Systems consider all of these elements and those disciplines that contribute to a more effective design, such as cognitive psychology, computer science, and control engineering to develop interdisciplinary solutions. These solutions consider things such as how to tailor the control system operating displays to best enable the user to make an accurate and reproducible response, how to design in cybersecurity protections such that the system defends itself from attack by changing its behaviors, and how to better integrate widely distributed computer control systems to prevent cascading failures that result in disruptions to critical industrial operations. 
In the context of cyber-physical systems, resilient control systems are an aspect that focuses on the unique interdependencies of a control system, as compared to information technology computer systems and networks, due to its importance in operating our critical industrial operations. Introduction Originally intended to provide a more efficient mechanism for controlling industrial operations, the development of digital control systems allowed for flexibility in integrating distributed sensors and operating logic while maintaining a centralized interface for human monitoring and interaction. This ease of readily adding sensors and logic through software, which was once done with relays and isolated analog instruments, has led to wide acceptance and integration of these systems in all industries. Ho" https://en.wikipedia.org/wiki/Photonic%20integrated%20circuit,"A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components which form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits utilize photons (or particles of light) as opposed to electrons that are utilized by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the visible spectrum or near infrared (850–1650 nm). The most commercially utilized material platform for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections – a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip. Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence of photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology and the University of Twente in the Netherlands. A 2005 development showed that silicon can, even though it is an indirect bandgap material, still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven and therefore still necessitate a further optical pump laser source. History Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and t" https://en.wikipedia.org/wiki/List%20of%20integrated%20circuit%20manufacturers,"The following is an incomplete list of notable integrated circuit (i.e. microchip) manufacturers. Some are in business, others are defunct and some are Fabless. 
0–9 3dfx Interactive (acquired by Nvidia in 2002) A Achronix Actions Semiconductor Adapteva Agere Systems (now part of LSI Logic formerly part of Lucent, which was formerly part of AT&T) Agilent Technologies (formerly part of Hewlett-Packard, spun off in 1999) Airgo Networks (acquired by Qualcomm in 2006) Alcatel Alchip Altera Allegro Microsystems Allwinner Technology Alphamosaic (acquired by Broadcom in 2004) AMD (Advanced Micro Devices; founded by ex-Fairchild employees) Analog Devices Apple Inc. Applied Materials Applied Micro Circuits Corporation (AMCC) ARM Asahi Kasei Microdevices (AKM) AT&T Atari Atheros (acquired by Qualcomm in 2011) ATI Technologies (Array Technologies Incorporated; acquired parts of Tseng Labs in 1997; in 2006, became a wholly owned subsidiary of AMD) Atmel (co-founded by ex-Intel employee, now part of Microchip Technology) Amkor Technology ams AG (formerly known as austriamicrosystems AG and frequently still known as AMS (Austria Mikro Systeme)) B Bourns, Inc. Brite Semiconductor Broadcom Corporation (acquired by Avago Technologies in 2016) Broadcom Inc. (formerly Avago Technologies) BroadLight Burr-Brown Corporation (Acquired by Texas Instruments in 2000) C C-Cube Microsystems Calxeda (re-emerged with Silver Lining Systems in 2014) Cavium CEITEC Chips and Technologies (acquired by Intel in 1997) CISC Semiconductor Cirrus Logic Corsair Club 3D (Formerly Colour Power) Commodore Semiconductor Group (formerly MOS Technologies) Conexant (formerly Rockwell Semiconductor, acquired by Synaptics in 2017) Crocus Technology CSR plc (formerly Cambridge Silicon Radio) Cypress Semiconductor Now operating as subsidiary of Infineon Technologies.It was (acquired by Infineon Technologies in 2019). D D-Wave Systems Dallas Semiconductor (acquired by Maxim Integrated in 2001) Dynex Semiconductor " https://en.wikipedia.org/wiki/Anatomy,"Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine. Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures. The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells. The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. 
Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging. Etymology and definition Derived from the Greek anatomē ""dissection"" (from anatémnō ""I cut up, cut open"" from ἀνά aná ""up"", and τέμνω té" https://en.wikipedia.org/wiki/Switch%20virtual%20interface,"A switch virtual interface (SVI) represents a logical layer-3 interface on a switch. VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN need to communicate with hosts in another VLAN, the traffic must be routed between them. This is known as inter-VLAN routing. On layer-3 switches it is accomplished by the creation of layer-3 interfaces (SVIs). Inter VLAN routing, in other words routing between VLANs, can be achieved using SVIs. SVI or VLAN interface, is a virtual routed interface that connects a VLAN on the device to the Layer 3 router engine on the same device. Only one VLAN interface can be associated with a VLAN, but you need to configure a VLAN interface for a VLAN only when you want to route between VLANs or to provide IP host connectivity to the device through a virtual routing and forwarding (VRF) instance that is not the management VRF. When you enable VLAN interface creation, a switch creates a VLAN interface for the default VLAN (VLAN 1) to permit remote switch administration. SVIs are generally configured for a VLAN for the following reasons: Allow traffic to be routed between VLANs by providing a default gateway for the VLAN. Provide fallback bridging (if required for non-routable protocols). Provide Layer 3 IP connectivity to the switch. Support bridging configurations and routing protocol. Access Layer - 'Routed Access' Configuration (in lieu of Spanning Tree) SVIs advantages include: Much faster than router-on-a-stick, because everything is hardware-switched and routed. No need for external links from the switch to the router for routing. Not limited to one link. Layer 2 EtherChannels can be used between the switches to get more bandwidth. Latency is much lower, because it does not need to leave the switch An SVI can also be known as a Routed VLAN Interface (RVI) by some vendors." https://en.wikipedia.org/wiki/Grading%20%28tumors%29,"In pathology, grading is a measure of the cell appearance in tumors and other neoplasms. Some pathology grading systems apply only to malignant neoplasms (cancer); others apply also to benign neoplasms. The neoplastic grading is a measure of cell anaplasia (reversion of differentiation) in the sampled tumor and is based on the resemblance of the tumor to the tissue of origin. Grading in cancer is distinguished from staging, which is a measure of the extent to which the cancer has spread. Pathology grading systems classify the microscopic cell appearance abnormality and deviations in their rate of growth with the goal of predicting developments at tissue level (see also the 4 major histological changes in dysplasia). Cancer is a disorder of cell life cycle alteration that leads (non-trivially) to excessive cell proliferation rates, typically longer cell lifespans and poor differentiation. The grade score (numerical: G1 up to G4) increases with the lack of cellular differentiation - it reflects how much the tumor cells differ from the cells of the normal tissue they have originated from (see 'Categories' below). 
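Referring back to the switch virtual interface (SVI) passage above, a minimal Python sketch of the decision a layer-3 switch makes between intra-VLAN switching and inter-VLAN routing through an SVI. The VLAN IDs, subnets, and gateway addresses here are hypothetical examples, not vendor configuration.

```python
# Illustrative model only: same-VLAN traffic is switched, cross-VLAN traffic
# goes to the source VLAN's SVI, which routes it onto the destination VLAN.
from ipaddress import ip_address, ip_network

SVIS = {                      # one SVI (default gateway) per VLAN (example addresses)
    10: ip_address("10.0.10.1"),
    20: ip_address("10.0.20.1"),
}
SUBNETS = {
    10: ip_network("10.0.10.0/24"),
    20: ip_network("10.0.20.0/24"),
}

def vlan_of(host):
    """Return the VLAN whose subnet contains the host address."""
    for vlan, net in SUBNETS.items():
        if ip_address(host) in net:
            return vlan
    raise ValueError(f"{host} is not in any configured VLAN subnet")

def next_hop(src, dst):
    """Decide whether a packet is switched within a VLAN or routed via an SVI."""
    s, d = vlan_of(src), vlan_of(dst)
    if s == d:
        return f"switched within VLAN {s}"
    return f"routed via SVI {SVIS[s]} (VLAN {s}) to VLAN {d}"

print(next_hop("10.0.10.25", "10.0.10.99"))   # switched within VLAN 10
print(next_hop("10.0.10.25", "10.0.20.7"))    # routed via SVI 10.0.10.1 to VLAN 20
```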
Tumors may be graded on four-tier, three-tier, or two-tier scales, depending on the institution and the tumor type. The histologic tumor grade score along with the metastatic (whole-body-level cancer-spread) staging are used to evaluate each specific cancer patient, develop their individual treatment strategy and to predict their prognosis. A cancer that is very poorly differentiated is called anaplastic. Categories Grading systems are also different for many common types of cancer, though following a similar pattern with grades being increasingly malignant over a range of 1 to 4. If no specific system is used, the following general grades are most commonly used, and recommended by the American Joint Commission on Cancer and other bodies: GX Grade cannot be assessed G1 Well differentiated (Low grade) G2 Mode" https://en.wikipedia.org/wiki/Charge%20controller,"A charge controller, charge regulator or battery regulator limits the rate at which electric current is added to or drawn from electric batteries to protect against electrical overload, overcharging, and may protect against overvoltage. This prevents conditions that reduce battery performance or lifespan and may pose a safety risk. It may also prevent completely draining (""deep discharging"") a battery, or perform controlled discharges, depending on the battery technology, to protect battery life. The terms ""charge controller"" or ""charge regulator"" may refer to either a stand-alone device, or to control circuitry integrated within a battery pack, battery-powered device, and/or battery charger. Stand-alone charge controllers Charge controllers are sold to consumers as separate devices, often in conjunction with solar or wind power generators, for uses such as RV, boat, and off-the-grid home battery storage systems. In solar applications, charge controllers may also be called solar regulators or solar charge controllers. Some charge controllers / solar regulators have additional features, such as a low voltage disconnect (LVD), a separate circuit which powers down the load when the batteries become overly discharged (some battery chemistries are such that over-discharge can ruin the battery). A series charge controller or series regulator disables further current flow into batteries when they are full. A shunt charge controller or shunt regulator diverts excess electricity to an auxiliary or ""shunt"" load, such as an electric water heater, when batteries are full. Simple charge controllers stop charging a battery when they exceed a set high voltage level, and re-enable charging when battery voltage drops back below that level. Pulse-width modulation (PWM) and maximum power point tracker (MPPT) technologies are more electronically sophisticated, adjusting charging rates depending on the battery's level, to allow charging closer to its maximum capacity. A charge con" https://en.wikipedia.org/wiki/Galactic%20algorithm,"A galactic algorithm is one that outperforms other algorithms for problems that are sufficiently large, but where ""sufficiently large"" is so big that the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth. Possible use cases Even if they are never used in practice, galactic algorithms may still contribute to computer science: An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms. 
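A minimal sketch of the simple on/off behaviour described in the charge-controller passage above: charging is disabled once battery voltage exceeds a high threshold and re-enabled after it falls back below a lower one. The threshold values are illustrative and not taken from any particular controller.

```python
# Illustrative hysteresis logic only; real controllers add temperature
# compensation, PWM/MPPT charging stages, and low-voltage disconnect.
HIGH_CUTOFF = 14.4   # volts: stop charging above this (example value)
RESUME_BELOW = 13.2  # volts: resume charging below this (example value)

def update(charging, battery_voltage):
    """Return the new charging state for a simple series charge controller."""
    if charging and battery_voltage >= HIGH_CUTOFF:
        return False          # battery full enough: open the series switch
    if not charging and battery_voltage <= RESUME_BELOW:
        return True           # voltage has sagged: close the switch again
    return charging           # inside the hysteresis band: keep current state

state = True
for v in (13.0, 14.0, 14.5, 14.1, 13.1, 13.5):
    state = update(state, v)
    print(f"{v:.1f} V -> {'charging' if state else 'idle'}")
```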
Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical. An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and hence advance the theory of algorithms. As Lipton states: Similarly, a hypothetical large but polynomial algorithm for the Boolean satisfiability problem, although unusable in practice, would settle the P versus NP problem, considered the most important open problem in computer science and one of the Millennium Prize Problems. Examples Integer multiplication An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: ""we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits."" Matrix multiplication The first improvement over brute-force matrix multiplication (which needs multiplications) was the Strassen algorithm: a recursive algorithm that needs multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppers" https://en.wikipedia.org/wiki/Electronic%20hardware,"Electronic hardware consists of interconnected electronic components which perform analog or logic operations on received and locally stored information to produce as output or store resulting new information or to provide control for output actuator mechanisms. Electronic hardware can range from individual chips/circuits to distributed information processing systems. Well designed electronic hardware is composed of hierarchies of functional modules which inter-communicate via precisely defined interfaces. Hardware logic is primarily a differentiation of the data processing circuitry from other more generalized circuitry. For example nearly all computers include a power supply which consists of circuitry not involved in data processing but rather powering the data processing circuits. Similarly, a computer may output information to a computer monitor or audio amplifier which is also not involved in the computational processes. See also Digital electronics" https://en.wikipedia.org/wiki/Circuit%20underutilization,"Circuit underutilization also chip underutilization, programmable circuit underutilization, gate underutilization, logic block underutilization refers to a physical incomplete utility of semiconductor grade silicon on a standardized mass-produced circuit programmable chip, such as a gate array type ASIC, an FPGA, or a CPLD. Gate array In the example of a gate array, which may come in sizes of 5,000 or 10,000 gates, a design which utilizes even 5,001 gates would be required to use a 10,000 gate chip. This inefficiency results in underutilization of the silicon. FPGA Due to the design components of field-programmable gate array into logic blocks, simple designs that underutilize a single block suffer from gate underutilization, as do designs that overflow onto multiple blocks, such as designs that use wide gates. 
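To make the gate-array example in the circuit-underutilization passage above concrete, a short sketch that picks the smallest standard array able to hold a design and reports how much silicon goes unused. The available sizes are the 5,000- and 10,000-gate figures from the text.

```python
# Sketch of the rounding-up effect described above: a design is forced into the
# smallest mass-produced gate array that can hold it, and the remainder sits idle.
AVAILABLE_SIZES = (5_000, 10_000)   # gate counts from the example in the text

def smallest_fit(gates_needed):
    for size in sorted(AVAILABLE_SIZES):
        if gates_needed <= size:
            return size
    raise ValueError("design too large for the available gate arrays")

for design in (4_800, 5_001, 9_900):
    chip = smallest_fit(design)
    utilization = design / chip
    print(f"{design:>5} gates -> {chip:>6}-gate array, "
          f"{utilization:.1%} utilized, {chip - design} gates idle")
```

For the 5,001-gate design this prints roughly 50% utilization, the inefficiency the passage describes.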
Additionally, the very generic architecture of FPGAs lends to high inefficiency; multiplexers occupy silicon real estate for programmable selection, and an abundance of flip-flops to reduce setup and hold times, even if the design does not require them, resulting in 40 times less density than of standard cell ASICs. See also Circuit minimization Don't-care condition" https://en.wikipedia.org/wiki/Mass%20action%20law%20%28electronics%29,"In electronics and semiconductor physics, the law of mass action relates the concentrations of free electrons and electron holes under thermal equilibrium. It states that, under thermal equilibrium, the product of the free electron concentration and the free hole concentration is equal to a constant square of intrinsic carrier concentration . The intrinsic carrier concentration is a function of temperature. The equation for the mass action law for semiconductors is: Carrier concentrations In semiconductors, free electrons and holes are the carriers that provide conduction. For cases where the number of carriers are much less than the number of band states, the carrier concentrations can be approximated by using Boltzmann statistics, giving the results below. Electron concentration The free-electron concentration n can be approximated by where Ec is the energy of the conduction band, EF is the energy of the Fermi level, kB is the Boltzmann constant, T is the absolute temperature in kelvins, Nc is the effective density of states at the conduction band edge given by , with m*e being the electron effective mass and h being Planck's constant. Hole concentration The free-hole concentration p is given by a similar formula where EF is the energy of the Fermi level, Ev is the energy of the valence band, kB is the Boltzmann constant, T is the absolute temperature in kelvins, Nv is the effective density of states at the valence band edge given by , with m*h being the hole effective mass and h Planck's constant. Mass action law Using the carrier concentration equations given above, the mass action law can be stated as where Eg is the band gap energy given by Eg = Ec − Ev. The above equation holds true even for lightly doped extrinsic semiconductors as the product is independent of doping concentration. See also Law of mass action" https://en.wikipedia.org/wiki/CoRR%20hypothesis,"The CoRR hypothesis states that the location of genetic information in cytoplasmic organelles permits regulation of its expression by the reduction-oxidation (""redox"") state of its gene products. CoRR is short for ""co-location for redox regulation"", itself a shortened form of ""co-location (of gene and gene product) for (evolutionary) continuity of redox regulation of gene expression"". CoRR was put forward explicitly in 1993 in a paper in the Journal of Theoretical Biology with the title ""Control of gene expression by redox potential and the requirement for chloroplast and mitochondrial genomes"". The central concept had been outlined in a review of 1992. The term CoRR was introduced in 2003 in a paper in Philosophical Transactions of the Royal Society entitled ""The function of genomes in bioenergetic organelles"". The problem Chloroplasts and mitochondria Chloroplasts and mitochondria are energy-converting organelles in the cytoplasm of eukaryotic cells. Chloroplasts in plant cells perform photosynthesis; the capture and conversion of the energy of sunlight. Mitochondria in both plant and animal cells perform respiration; the release of this stored energy when work is done. 
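For reference, the relations described in the mass-action-law passage above can be written out in standard semiconductor notation, using the symbols defined there:

```latex
% Mass action law under thermal equilibrium
n p = n_i^2
% Boltzmann-statistics carrier concentrations:
n \approx N_c \exp\!\left(-\frac{E_c - E_F}{k_B T}\right), \qquad
p \approx N_v \exp\!\left(-\frac{E_F - E_v}{k_B T}\right)
% so the product is independent of the Fermi level (and hence of doping):
n p \approx N_c N_v \exp\!\left(-\frac{E_g}{k_B T}\right), \qquad E_g = E_c - E_v
```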
In addition to these key reactions of bioenergetics, chloroplasts and mitochondria each contain specialized and discrete genetic systems. These genetic systems enable chloroplasts and mitochondria to make some of their own proteins. Both the genetic and energy-converting systems of chloroplasts and mitochondria are descended, with little modification, from those of the free-living bacteria that these organelles once were. The existence of these cytoplasmic genomes is consistent with, and counts as evidence for, the endosymbiont hypothesis. Most genes for proteins of chloroplasts and mitochondria are, however, now located on chromosomes in the nuclei of eukaryotic cells. There they code for protein precursors that are made in the cytosol for subsequent import into the organelles. Why" https://en.wikipedia.org/wiki/Link%20flap,"Link flap is a condition where a communications link alternates between up and down states. Link flap can be caused by end station reboots, power-saving features, incorrect duplex configuration or marginal connections and signal integrity issues on the link." https://en.wikipedia.org/wiki/Mail-sink,"Smtp-sink is a utility program in the Postfix Mail software package that implements a ""black hole"" function. It listens on the named host (or address) and port. It accepts Simple Mail Transfer Protocol (SMTP) messages from the network and discards them. The purpose is to support measurement of client performance. It is not SMTP protocol compliant. Connections can be accepted on IPv4 or IPv6 endpoints, or on UNIX-domain sockets. IPv4 and IPv6 are the default. This program is the complement of the smtp-source(1) program. See also Tarpit (networking) SMTP" https://en.wikipedia.org/wiki/Missing%20heritability%20problem,"The missing heritability problem is the fact that single genetic variations cannot account for much of the heritability of diseases, behaviors, and other phenotypes. This is a problem that has significant implications for medicine, since a person's susceptibility to disease may depend more on the combined effect of all the genes in the background than on the disease genes in the foreground, or the role of genes may have been severely overestimated. Discovery The missing heritability problem was named as such in 2008 (after the ""missing baryon problem"" in physics). The Human Genome Project led to optimistic forecasts that the large genetic contributions to many traits and diseases (which were identified by quantitative genetics and behavioral genetics in particular) would soon be mapped and pinned down to specific genes and their genetic variants by methods such as candidate-gene studies which used small samples with limited genetic sequencing to focus on specific genes believed to be involved, examining single-nucleotide polymorphisms (SNPs). While many hits were found, they often failed to replicate in other studies. The exponential fall in genome sequencing costs led to the use of genome-wide association studies (GWASes) which could simultaneously examine all candidate-genes in larger samples than the original finding, where the candidate-gene hits were found to almost always be false positives and only 2-6% replicate; in the specific case of intelligence candidate-gene hits, only 1 candidate-gene hit replicated, the top 25 schizophrenia candidate-genes were no more associated with schizophrenia than chance, and of 15 neuroimaging hits, none did. 
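As a rough illustration of the smtp-sink idea described above (accept SMTP messages from the network and discard them), here is a minimal Python sketch. It is not Postfix's smtp-sink and, like the original, is not fully SMTP-compliant; it only speaks enough of the protocol to let a client hand over a message that is then thrown away. The listening address and port are arbitrary examples.

```python
# Minimal SMTP "black hole" for client-measurement experiments: accepts a
# message over a bare-bones SMTP dialogue and discards it.
import socketserver

class SinkHandler(socketserver.StreamRequestHandler):
    def reply(self, line):
        self.wfile.write((line + "\r\n").encode())

    def handle(self):
        self.reply("220 sink ready")
        in_data = False
        for raw in self.rfile:
            line = raw.decode(errors="replace").rstrip("\r\n")
            if in_data:
                if line == ".":          # end of message body
                    in_data = False
                    self.reply("250 OK (discarded)")
                continue                 # body lines are simply dropped
            verb = line.split(" ", 1)[0].upper()
            if verb == "DATA":
                in_data = True
                self.reply("354 End data with <CR><LF>.<CR><LF>")
            elif verb == "QUIT":
                self.reply("221 Bye")
                return
            else:                        # HELO/EHLO/MAIL/RCPT/anything else
                self.reply("250 OK")

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("127.0.0.1", 2525), SinkHandler) as srv:
        srv.serve_forever()
```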
The editorial board of Behavior Genetics noted, in setting more stringent requirements for candidate-gene publications, that ""the literature on candidate gene associations is full of reports that have not stood up to rigorous replication...it now seems likely that many of the published findings of the last decade are " https://en.wikipedia.org/wiki/Whisker%20%28metallurgy%29,"Metal whiskering is a phenomenon that occurs in electrical devices when metals form long whisker-like projections over time. Tin whiskers were noticed and documented in the vacuum tube era of electronics early in the 20th century in equipment that used pure, or almost pure, tin solder in their production. It was noticed that small metal hairs or tendrils grew between metal solder pads, causing short circuits. Metal whiskers form in the presence of compressive stress. Germanium, zinc, cadmium, and even lead whiskers have been documented. Many techniques are used to mitigate the problem, including changes to the annealing process (heating and cooling), the addition of elements like copper and nickel, and the inclusion of conformal coatings. Traditionally, lead has been added to slow down whisker growth in tin-based solders. Following the Restriction of Hazardous Substances Directive (RoHS), the European Union banned the use of lead in most consumer electronic products from 2006 due to health problems associated with lead and the ""high-tech trash"" problem, leading to a re-focusing on the issue of whisker formation in lead-free solders. Mechanism Metal whiskering is a crystalline metallurgical phenomenon involving the spontaneous growth of tiny, filiform hairs from a metallic surface. The effect is primarily seen on elemental metals but also occurs with alloys. The mechanism behind metal whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses including: energy gained due to electrostatic polarization of metal filaments in the electric field, residual stresses caused by electroplating, mechanically induced stresses, stresses induced by diffusion of different metals, thermally induced stresses, and strain gradients in materials. Metal whiskers differ from metallic dendrites in several respects: dendrites are fern-shaped and grow across the surface of the metal, while metal whiskers are hair-like and project normal to" https://en.wikipedia.org/wiki/JANOG,"JANOG is the Internet network operators' group for the Japanese Internet service provider (ISP) community. JANOG was originally established in 1997. JANOG holds regular meetings for the ISP community, with hundreds of attendees. Although JANOG has no formal budget of its own, it draws on the resources of its member companies to do so." https://en.wikipedia.org/wiki/Hafner%E2%80%93Sarnak%E2%80%93McCurley%20constant,"The Hafner–Sarnak–McCurley constant is a mathematical constant representing the probability that the determinants of two randomly chosen square integer matrices will be relatively prime. The probability depends on the matrix size, n, in accordance with the formula where pk is the kth prime number. The constant is the limit of this expression as n approaches infinity. Its value is roughly 0.3532363719... ." https://en.wikipedia.org/wiki/Transport%20triggered%20architecture,"In computer architecture, a transport triggered architecture (TTA) is a kind of processor design in which programs directly control the internal transport buses of a processor. 
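The formula referred to in the Hafner–Sarnak–McCurley passage above is commonly stated as the following product over the primes, where σ(n) is the probability for n×n matrices and the constant is its limit as n grows:

```latex
% Probability that two random n-by-n integer matrices have coprime determinants
\sigma(n) = \prod_{k=1}^{\infty}
  \left\{ 1 - \left[\, 1 - \prod_{j=1}^{n} \bigl(1 - p_k^{-j}\bigr) \right]^{2} \right\},
\qquad
\lim_{n \to \infty} \sigma(n) \approx 0.3532363719\ldots
```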
Computation happens as a side effect of data transports: writing data into a triggering port of a functional unit triggers the functional unit to start a computation. This is similar to what happens in a systolic array. Due to its modular structure, TTA is an ideal processor template for application-specific instruction set processors (ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators. Typically a transport triggered processor has multiple transport buses and multiple functional units connected to the buses, which provides opportunities for instruction level parallelism. The parallelism is statically defined by the programmer. In this respect (and obviously due to the large instruction word width), the TTA architecture resembles the very long instruction word (VLIW) architecture. A TTA instruction word is composed of multiple slots, one slot per bus, and each slot determines the data transport that takes place on the corresponding bus. The fine-grained control allows some optimizations that are not possible in a conventional processor. For example, software can transfer data directly between functional units without using registers. Transport triggering exposes some microarchitectural details that are normally hidden from programmers. This greatly simplifies the control logic of a processor, because many decisions normally done at run time are fixed at compile time. However, it also means that a binary compiled for one TTA processor will not run on another one without recompilation if there is even a small difference in the architecture between the two. The binary incompatibility problem, in addition to the complexity of implementing a full context switch, makes TTAs more suitable for embedded systems than for general purpos" https://en.wikipedia.org/wiki/IC%20programming,"IC programming is the process of transferring a computer program into an integrated computer circuit. Older types of IC including PROMs and EPROMs and some early programmable logic was typically programmed through parallel busses that used many of the device's pins and basically required inserting the device in a separate programmer. Modern ICs are typically programmed in circuit though a serial protocol (sometimes JTAG sometimes something manufacturer specific). Some (particularly FPGAs) even load the data serially from a separate flash or prom chip on every startup. Notes Embedded systems" https://en.wikipedia.org/wiki/Mathematical%20sculpture,"A mathematical sculpture is a sculpture which uses mathematics as an essential conception. Helaman Ferguson, George W. Hart, Bathsheba Grossman, Peter Forakis and Jacobus Verhoeff are well-known mathematical sculptors." https://en.wikipedia.org/wiki/Recurrence%20quantification%20analysis,"Recurrence quantification analysis (RQA) is a method of nonlinear data analysis (cf. chaos theory) for the investigation of dynamical systems. It quantifies the number and duration of recurrences of a dynamical system presented by its phase space trajectory. Background The recurrence quantification analysis (RQA) was developed in order to quantify differently appearing recurrence plots (RPs), based on the small-scale structures therein. Recurrence plots are tools which visualise the recurrence behaviour of the phase space trajectory of dynamical systems: , where is the Heaviside function and a predefined tolerance. 
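Writing out the recurrence-plot definition from the passage above in full, with Θ the Heaviside function, ε the predefined tolerance, and x_i (i = 1, …, N) the points of the phase-space trajectory:

```latex
R_{i,j}(\varepsilon) = \Theta\bigl(\varepsilon - \lVert \vec{x}_i - \vec{x}_j \rVert\bigr),
\qquad i, j = 1, \ldots, N
```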
Recurrence plots mostly contain single dots and lines which are parallel to the mean diagonal (line of identity, LOI) or which are vertical/horizontal. Lines parallel to the LOI are referred to as diagonal lines and the vertical structures as vertical lines. Because an RP is usually symmetric, horizontal and vertical lines correspond to each other, and, hence, only vertical lines are considered. The lines correspond to a typical behaviour of the phase space trajectory: whereas the diagonal lines represent such segments of the phase space trajectory which run parallel for some time, the vertical lines represent segments which remain in the same phase space region for some time. If only a time series is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem): where is the time series, the embedding dimension and the time delay. The RQA quantifies the small-scale structures of recurrence plots, which present the number and duration of the recurrences of a dynamical system. The measures introduced for the RQA were developed heuristically between 1992 and 2002 (Zbilut & Webber 1992; Webber & Zbilut 1994; Marwan et al. 2002). They are actually measures of complexity. The main advantage of the recurrence quantification analysis is that it can provide useful information even for short and non-stationary d" https://en.wikipedia.org/wiki/Geophysics,"Geophysics () is a subject of natural science concerned with the physical processes and physical properties of the Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists, who usually study geophysics, physics, or one of the earth sciences at the graduate level, complete investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic fields, and electromagnetic fields ; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics; and analogous problems associated with the Moon and other planets. Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinox; and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics. Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. 
In exploration geophysics, geophysical survey data are used to " https://en.wikipedia.org/wiki/Ridley%E2%80%93Watkins%E2%80%93Hilsum%20theory,"In solid state physics the Ridley–Watkins–Hilsum theory (RWH) explains the mechanism by which differential negative resistance is developed in a bulk solid state semiconductor material when a voltage is applied to the terminals of the sample. It is the theory behind the operation of the Gunn diode as well as several other microwave semiconductor devices, which are used practically in electronic oscillators to produce microwave power. It is named for British physicists Brian Ridley, Tom Watkins and Cyril Hilsum who wrote theoretical papers on the effect in 1961. Negative resistance oscillations in bulk semiconductors had been observed in the laboratory by J. B. Gunn in 1962, and were thus named the ""Gunn effect"", but physicist Herbert Kroemer pointed out in 1964 that Gunn's observations could be explained by the RWH theory. In essence, RWH mechanism is the transfer of conduction electrons in a semiconductor from a high mobility valley to lower-mobility, higher-energy satellite valleys. This phenomenon can only be observed in materials that have such energy band structures. Normally, in a conductor, increasing electric field causes higher charge carrier (usually electron) speeds and results in higher current consistent with Ohm's law. In a multi-valley semiconductor, though, higher energy may push the carriers into a higher energy state where they actually have higher effective mass and thus slow down. In effect, carrier velocities and current drop as the voltage is increased. While this transfer occurs, the material exhibits a decrease in current – that is, a negative differential resistance. At higher voltages, the normal increase of current with voltage relation resumes once the bulk of the carriers are kicked into the higher energy-mass valley. Therefore the negative resistance only occurs over a limited range of voltages. Of the type of semiconducting materials satisfying these conditions, gallium arsenide (GaAs) is the most widely understood and used. Ho" https://en.wikipedia.org/wiki/Copper%20interconnects,"In semiconductor technology, copper interconnects are interconnects made of copper. They are used in silicon integrated circuits (ICs) to reduce propagation delays and power consumption. Since copper is a better conductor than aluminium, ICs using copper for their interconnects can have interconnects with narrower dimensions, and use less energy to pass electricity through them. Together, these effects lead to ICs with better performance. They were first introduced by IBM, with assistance from Motorola, in 1997. The transition from aluminium to copper required significant developments in fabrication techniques, including radically different methods for patterning the metal as well as the introduction of barrier metal layers to isolate the silicon from potentially damaging copper atoms. Although the methods of superconformal copper electrodepostion were known since late 1960, their application at the (sub)micron via scale (e.g. in microchips) started only in 1988-1995 (see figure). By year 2002 it became a mature technology, and research and development efforts in this field started to decline. 
Patterning Although some form of volatile copper compound has been known to exist since 1947, with more discovered as the century progressed, none were in industrial use, so copper could not be patterned by the previous techniques of photoresist masking and plasma etching that had been used with great success with aluminium. The inability to plasma etch copper called for a drastic rethinking of the metal patterning process and the result of this rethinking was a process referred to as an additive patterning, also known as a ""Damascene"" or ""dual-Damascene"" process by analogy to a traditional technique of metal inlaying. In this process, the underlying silicon oxide insulating layer is patterned with open trenches where the conductor should be. A thick coating of copper that significantly overfills the trenches is deposited on the insulator, and chemical-mechanical planar" https://en.wikipedia.org/wiki/Artificially%20Expanded%20Genetic%20Information%20System,"Artificially Expanded Genetic Information System (AEGIS) is a synthetic DNA analog experiment that uses some unnatural base pairs from the laboratories of the Foundation for Applied Molecular Evolution in Gainesville, Florida. AEGIS is a NASA-funded project to try to understand how extraterrestrial life may have developed. The system uses twelve different nucleobases in its genetic code. These include the four canonical nucleobases found in DNA (adenine, cytosine, guanine and thymine) plus eight synthetic nucleobases). AEGIS includes S:B, Z:P, V:J and K:X base pairs. See also Abiogenesis Astrobiology Hachimoji DNA xDNA Hypothetical types of biochemistry Xeno nucleic acid" https://en.wikipedia.org/wiki/OMNeT%2B%2B,"OMNeT++ (Objective Modular Network Testbed in C++) is a modular, component-based C++ simulation library and framework, primarily for building network simulators. OMNeT++ can be used for free for non-commercial simulations like at academic institutions and for teaching. OMNEST is an extended version of OMNeT++ for commercial use. OMNeT++ itself is a simulation framework without models for network protocols like IP or HTTP. The main computer network simulation models are available in several external frameworks. The most commonly used one is INET which offers a variety of models for all kind of network protocols and technologies like for IPv6, BGP. INET also offers a set of mobility models to simulate the node movement in simulations. The INET models are licensed under the LGPL or GPL. NED (NEtwork Description) is the topology description language of OMNeT++. To manage and reduce the time to carry out large-scale simulations, additional tools have been developed, for example, based on Python. See also MLDesigner QualNet NEST (software)" https://en.wikipedia.org/wiki/List%20of%20permutation%20topics,"This is a list of topics on mathematical permutations. 
Particular kinds of permutations Alternating permutation Circular shift Cyclic permutation Derangement Even and odd permutations—see Parity of a permutation Josephus permutation Parity of a permutation Separable permutation Stirling permutation Superpattern Transposition (mathematics) Unpredictable permutation Combinatorics of permutations Bijection Combination Costas array Cycle index Cycle notation Cycles and fixed points Cyclic order Direct sum of permutations Enumerations of specific permutation classes Factorial Falling factorial Permutation matrix Generalized permutation matrix Inversion (discrete mathematics) Major index Ménage problem Permutation graph Permutation pattern Permutation polynomial Permutohedron Rencontres numbers Robinson–Schensted correspondence Sum of permutations: Direct sum of permutations Skew sum of permutations Stanley–Wilf conjecture Symmetric function Szymanski's conjecture Twelvefold way Permutation groups and other algebraic structures Groups Alternating group Automorphisms of the symmetric and alternating groups Block (permutation group theory) Cayley's theorem Cycle index Frobenius group Galois group of a polynomial Jucys–Murphy element Landau's function Oligomorphic group O'Nan–Scott theorem Parker vector Permutation group Place-permutation action Primitive permutation group Rank 3 permutation group Representation theory of the symmetric group Schreier vector Strong generating set Symmetric group Symmetric inverse semigroup Weak order of permutations Wreath product Young symmetrizer Zassenhaus group Zolotarev's lemma Other algebraic structures Burnside ring Mathematical analysis Conditionally convergent series Riemann series theorem Lévy–Steinitz theorem Mathematics applicable to physical sciences Antisymmetrizer Identical particles Levi-Civita symbol Number theory Permutable prime Algorithms and information processing Bit-reversal permutation Claw-" https://en.wikipedia.org/wiki/ULN2003A,"The ULN2003A is an integrated circuit produced by Texas Instruments. It consists of an array of seven NPN Darlington transistors capable of 500 mA, 50 V output. It features common-cathode flyback diodes for switching inductive loads (such as servomotors). It can come in PDIP, SOIC, SOP or TSSOP packaging. In the same family are ULN2002A, ULN2004A, as well as ULQ2003A and ULQ2004A, designed for different logic input levels. The ULN2003A is also similar to the ULN2001A (4 inputs) and the ULN2801A, ULN2802A, ULN2803A, ULN2804A and ULN2805A, only differing in logic input levels (TTL, CMOS, PMOS) and number of in/outputs (4/7/8). Darlington Transistor A Darlington transistor (also known as Darlington pair) achieves very high current amplification by connecting two bipolar transistors in direct DC coupling so the current amplified by the first transistor is amplified further by the second one. The resultant current gain is the product of those of the two component transistors: The seven Darlington pairs in ULN2003 can operate independently except the common cathode diodes that connect to their respective collectors. Features The ULN2003 is known for its high-current, high-voltage capacity. The drivers can be paralleled for even higher current output. Even further, stacking one chip on top of another, both electrically and physically, has been done. Generally it can also be used for interfacing with a stepper motor, where the motor requires high ratings which cannot be provided by other interfacing devices. 
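The current-gain product mentioned in the ULN2003A passage above can be written explicitly: if the two transistors of a Darlington pair have current gains β₁ and β₂, the pair's overall gain is

```latex
\beta_{\text{Darlington}} = \beta_1 \beta_2 + \beta_1 + \beta_2 \;\approx\; \beta_1 \beta_2
```

which is why the arrangement reaches very high current amplification from two ordinary bipolar transistors.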
Main specifications: 500 mA rated collector current (single output) 50 V output (there is a version that supports 100 V output) Includes output flyback diodes Inputs compatible with TTL and 5-V CMOS logic Applications Typical usage of the ULN2003A is in driver circuits for relays, solenoids, lamp and LED displays, stepper motors, logic buffers and line drivers. See also Solid state relay" https://en.wikipedia.org/wiki/Front-end%20processor,"A front-end processor (FEP), or a communications processor, is a small-sized computer which interfaces to the host computer a number of networks, such as SNA, or a number of peripheral devices, such as terminals, disk units, printers and tape units. Data is transferred between the host computer and the front-end processor using a high-speed parallel interface. The front-end processor communicates with peripheral devices using slower serial interfaces, usually also through communication networks. The purpose is to off-load from the host computer the work of managing the peripheral devices, transmitting and receiving messages, packet assembly and disassembly, error detection, and error correction. Two examples are the IBM 3705 Communications Controller and the Burroughs Data Communications Processor. Sometimes FEP is synonymous with a communications controller, although the latter is not necessarily as flexible. Early communications controllers such as the IBM 270x series were hard wired, but later units were programmable devices. Front-end processor is also used in a more general sense in asymmetric multi-processor systems. The FEP is a processing device (usually a computer) which is closer to the input source than is the main processor. It performs some task such as telemetry control, data collection, reduction of raw sensor data, analysis of keyboard input, etc. Front-end processes relates to the software interface between the user (client) and the application processes (server) in the client/server architecture. The user enters input (data) into the front-end process where it is collected and processed in such a way that it conforms to what the receiving application (back end) on the server can accept and process. As an example, the user enters a URL into a GUI (front-end process) such as Microsoft Internet Explorer. The GUI then processes the URL in such a way that the user is able to reach or access the intended web pages on the web server (application serve" https://en.wikipedia.org/wiki/Message%20switching,"In telecommunications, message switching involves messages routed in their entirety, one hop at a time. It evolved from circuit switching and was the precursor of packet switching. An example of message switching is email in which the message is sent through different intermediate servers to reach the mail server for storing. Unlike packet switching, the message is not divided into smaller units and sent independently over the network. History Western Union operated a message switching system, Plan 55-A, for processing telegrams in the 1950s. Leonard Kleinrock wrote a doctoral thesis at the Massachusetts Institute of Technology in 1962 that analyzed queueing delays in this system. Message switching was built by Collins Radio Company, Newport Beach, California, during the period 1959–1963 for sale to large airlines, banks and railroads. The original design for the ARPANET was Wesley Clark's April 1967 proposal for using Interface Message Processors to create a message switching network. 
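A minimal sketch of the store-and-forward behaviour described in the message-switching passage above: each switch stores the entire message, reads its addressing information, and forwards the message whole to the next hop. The topology, addresses, and routing table are invented purely for illustration.

```python
# Toy store-and-forward message switch: whole messages hop one switch at a time.
NEXT_HOP = {                       # per-switch routing: destination -> next hop
    "A": {"mail.example": "B"},
    "B": {"mail.example": "C"},
    "C": {"mail.example": "deliver"},
}

def send(message, destination, entry_switch="A"):
    switch, path = entry_switch, []
    while True:
        path.append(switch)        # the whole message is stored here before
        hop = NEXT_HOP[switch][destination]   # the forwarding decision is made
        if hop == "deliver":
            return path
        switch = hop

print(send("Subject: hello\n\nthe entire message travels as one unit",
           "mail.example"))        # ['A', 'B', 'C']
```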
After the seminal meeting at the first ACM Symposium on Operating Systems Principles in October 1967, where Roger Scantlebury presented Donald Davies work and mentioned the work of Paul Baran, Larry Roberts incorporated packet switching into the design. The SITA High-Level Network (HLN) became operational in 1969, handling data traffic for airlines in real time via a message-switched network over common carrier leased lines. It was organised to act like a packet-switching network. Message switching systems are nowadays mostly implemented over packet-switched or circuit-switched data networks. Each message is treated as a separate entity. Each message contains addressing information, and at each switch this information is read and the transfer path to the next switch is decided. Depending on network conditions, a conversation of several messages may not be transferred over the same path. Each message is stored (usually on hard drive due to RAM limitations) before being transmi" https://en.wikipedia.org/wiki/Seth%20Lloyd,"Seth Lloyd (born August 2, 1960) is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology. His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction. Biography Lloyd was born on August 2, 1960. He graduated from Phillips Academy in 1978 and received a bachelor of arts degree from Harvard College in 1982. He earned a certificate of advanced study in mathematics and a master of philosophy degree from Cambridge University in 1983 and 1984, while on a Marshall Scholarship. Lloyd was awarded a doctorate by Rockefeller University in 1988 (advisor Heinz Pagels) after submitting a thesis on Black Holes, Demons, and the Loss of Coherence: How Complex Systems Get Information, and What They Do With It. From 1988 to 1991, Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Starting in 1988, Lloyd was an external faculty member at the Santa Fe Institute for more than 30 years. In his 2006 book, Programming the Universe, Lloyd contends that the universe itself is one big quantum computer producing what we see around us, and ourselves, as it runs a cosmic program. According to Lloyd, once we understand the laws of" https://en.wikipedia.org/wiki/Marquois%20scales,"Marquois scales (also known as Marquois parallel scales or Marquois scale and triangle or military scales) are a mathematical instrument that found widespread use in Britain, particularly in military surveying, from the late 18th century to World War II. Description Invented around 1778 by Thomas Marquois, the Marquois scales consist of a right-angle triangle (with sides at a 3:1 ratio) and two rulers (each with multiple scales). 
The system could be used to aid precision when marking distances off scales, and to rapidly draw parallel lines a precise distance apart. Quick construction of precise parallel lines was useful in cartography and engineering (especially before the availability of graph paper) and Marquois scales were convenient in some challenging environments where larger equipment like a drawing board and T-square was impractical, such as field survey work and classrooms. Marquois scales fell out of favour among draftsmen in the early 20th century, although familiarity with their use was an entry requirement for the Royal Military Academy at Woolwich around the same time. Material Marquois scales were normally made of boxwood, though sets were sometimes made in ivory or metal. Use The triangle would be used for many regular set square operations, the rulers likewise would function as rulers, but the unique function was the 3:1 reduction ratio between measured distance and drawn line. A line is drawn along the beveled edge (the side of middle-length) of the triangle. By placing a ruler against the hypotenuse of the triangle and sliding the triangle along the ruler for 3 units of the ruler's scale, drawing another line along the beveled edge results in a parallel line with a distance of only 1 unit from the original line. Using larger distances on a ruler to draw lines smaller distances apart means that margin of error reading off the scale is reduced. Additionally, the end-state is the instruments already in place to slide the triangle again to quickly" https://en.wikipedia.org/wiki/Order%20%28mathematics%29,"Order in mathematics may refer to: Set theory Total order and partial order, a binary relation generalizing the usual ordering of numbers and of words in a dictionary Ordered set Order in Ramsey theory, uniform structures in consequence to critical set cardinality Algebra Order (group theory), the cardinality of a group or period of an element Order of a polynomial (disambiguation) Order of a square matrix, its dimension Order (ring theory), an algebraic structure Ordered group Ordered field Analysis Order (differential equation) or order of highest derivative, of a differential equation Leading-order terms NURBS order, a number one greater than the degree of the polynomial representation of a non-uniform rational B-spline Order of convergence, a measurement of convergence Order of derivation Order of an entire function Order of a power series, the lowest degree of its terms Ordered list, a sequence or tuple Orders of approximation in Big O notation Z-order (curve), a space-filling curve Arithmetic Multiplicative order in modular arithmetic Order of operations Orders of magnitude, a class of scale or magnitude of any amount Combinatorics Order in the Josephus permutation Ordered selections and partitions of the twelvefold way in combinatorics Ordered set, a bijection, cyclic order, or permutation Unordered subset or combination Weak order of permutations Fractals Complexor, or complex order in fractals Order of extension in Lakes of Wada Order of fractal dimension (Rényi dimensions) Orders of construction in the Pythagoras tree Geometry Long-range aperiodic order, in pinwheel tiling, for instance Graphs Graph order, the number of nodes in a graph First order and second order logic of graphs Topological ordering of directed acyclic graphs Degeneracy ordering of undirected graphs Elimination ordering of chordal graphs Order, the complexity of a structure within a graph: see haven (graph theory) and 
bramble (graph theory) Logic In logic, model theory and" https://en.wikipedia.org/wiki/Sinc%20filter,"In signal processing, a sinc filter can refer to either a sinc-in-time filter whose impulse response is a sinc function and whose frequency response is rectangular, or to a sinc-in-frequency filter whose impulse response is rectangular and whose frequency response is a sinc function. Calling them according to which domain the filter resembles a sinc avoids confusion. If the domain if it is unspecified, sinc-in-time is often assumed, or context hopefully can infer the correct domain. Sinc-in-time Sinc-in-time is an ideal filter that removes all frequency components above a given cutoff frequency, without attenuating lower frequencies, and has linear phase response. It may thus be considered a brick-wall filter or rectangular filter. Its impulse response is a sinc function in the time domain: while its frequency response is a rectangular function: where (representing its bandwidth) is an arbitrary cutoff frequency. Its impulse response is given by the inverse Fourier transform of its frequency response: where sinc is the normalized sinc function. Brick-wall filters An idealized electronic filter with full transmission in the pass band, complete attenuation in the stop band, and abrupt transitions is known colloquially as a ""brick-wall filter"" (in reference to the shape of the transfer function). The sinc-in-time filter is a brick-wall low-pass filter, from which brick-wall band-pass filters and high-pass filters are easily constructed. The lowpass filter with brick-wall cutoff at frequency BL has impulse response and transfer function given by: The band-pass filter with lower band edge BL and upper band edge BH is just the difference of two such sinc-in-time filters (since the filters are zero phase, their magnitude responses subtract directly): The high-pass filter with lower band edge BH is just a transparent filter minus a sinc-in-time filter, which makes it clear that the Dirac delta function is the limit of a narrow-in-time sinc-in-time filter: U" https://en.wikipedia.org/wiki/Transmission%20curve,"The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters. The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere. See also Electronic filter — examples of transmission characteristics of electronic filters" https://en.wikipedia.org/wiki/Special%20input/output,"Special input/output (Special I/O or SIO) are inputs and/or outputs of a microcontroller designated to perform specialized functions or have specialized features. Specialized functions can include: Hardware interrupts, analog input or output PWM output Serial communication, such as UART, USART, SPI bus, or SerDes. External reset Switch debounce Input pull-up (or -down) resistors open collector output Pulse counting Timing pulses Some kinds of special I/O functions can sometimes be emulated with general-purpose input/output and bit banging software. 
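As an example of the bit banging mentioned at the end of the special-I/O passage above, here is a sketch of emulating a UART transmitter on a general-purpose output pin. The gpio object, its write method, and the pin number are hypothetical placeholders for whatever GPIO library a particular microcontroller or single-board computer provides.

```python
# Bit-banged 8N1 UART transmit on a GPIO pin (sketch only).
# `gpio` is a hypothetical interface offering gpio.write(pin, level).
import time

BAUD = 9600
BIT_TIME = 1.0 / BAUD        # seconds per bit (timing jitter ignored here)
TX_PIN = 17                  # example pin number

def send_byte(gpio, byte):
    """Serialize one byte: start bit (0), 8 data bits LSB first, stop bit (1)."""
    bits = [0] + [(byte >> i) & 1 for i in range(8)] + [1]
    for level in bits:
        gpio.write(TX_PIN, level)
        time.sleep(BIT_TIME)     # real code would use a hardware timer or busy-wait

def send(gpio, text):
    gpio.write(TX_PIN, 1)        # idle line is high
    for ch in text.encode("ascii"):
        send_byte(gpio, ch)
```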
See also Atari SIO" https://en.wikipedia.org/wiki/Staining,"Staining is a technique used to enhance contrast in samples, generally at the microscopic level. Stains and dyes are frequently used in histology (microscopic study of biological tissues), in cytology (microscopic study of cells), and in the medical fields of histopathology, hematology, and cytopathology that focus on the study and diagnoses of diseases at the microscopic level. Stains may be used to define biological tissues (highlighting, for example, muscle fibers or connective tissue), cell populations (classifying different blood cells), or organelles within individual cells. In biochemistry, it involves adding a class-specific (DNA, proteins, lipids, carbohydrates) dye to a substrate to qualify or quantify the presence of a specific compound. Staining and fluorescent tagging can serve similar purposes. Biological staining is also used to mark cells in flow cytometry, and to flag proteins or nucleic acids in gel electrophoresis. Light microscopes are used for viewing stained samples at high magnification, typically using bright-field or epi-fluorescence illumination. Staining is not limited to only biological materials, since it can also be used to study the structure of other materials; for example, the lamellar structures of semi-crystalline polymers or the domain structures of block copolymers. In vivo vs In vitro In vivo staining (also called vital staining or intravital staining) is the process of dyeing living tissues. By causing certain cells or structures to take on contrasting colours, their form (morphology) or position within a cell or tissue can be readily seen and studied. The usual purpose is to reveal cytological details that might otherwise not be apparent; however, staining can also reveal where certain chemicals or specific chemical reactions are taking place within cells or tissues. In vitro staining involves colouring cells or structures that have been removed from their biological context. Certain stains are often combined to reveal mo" https://en.wikipedia.org/wiki/Imaginary%20unit,"The imaginary unit or unit imaginary number () is a solution to the quadratic equation . Although there is no real number with this property, can be used to extend the real numbers to what are called complex numbers, using addition and multiplication. A simple example of the use of in a complex number is . Imaginary numbers are an important mathematical concept; they extend the real number system to the complex number system , in which at least one root for every nonconstant polynomial exists (see Algebraic closure and Fundamental theorem of algebra). Here, the term ""imaginary"" is used because there is no real number having a negative square. There are two complex square roots of −1: and , just as there are two complex square roots of every real number other than zero (which has one double square root). In contexts in which use of the letter is ambiguous or problematic, the letter is sometimes used instead. For example, in electrical engineering and control systems engineering, the imaginary unit is normally denoted by instead of , because is commonly used to denote electric current. Definition The imaginary number is defined solely by the property that its square is −1: With defined this way, it follows directly from algebra that and are both square roots of −1. 
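The relations referred to in the imaginary-unit passage above, written out explicitly: both i and −i square to −1, the reciprocal of i is −i, and the integer powers of i cycle with period four:

```latex
i^2 = -1, \qquad (-i)^2 = (-1)^2\, i^2 = -1, \qquad i^{-1} = \frac{1}{i} = -i
i^{4k} = 1, \quad i^{4k+1} = i, \quad i^{4k+2} = -1, \quad i^{4k+3} = -i
\qquad (k \in \mathbb{Z})
```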
Although the construction is called ""imaginary"", and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers, by treating as an unknown quantity while manipulating an expression (and using the definition to replace any occurrence of with −1). Higher integral powers of can also be replaced with , 1, , or −1: or, equivalently, Similarly, as with any non-zero real number: As a complex number, can be represented in rectangular form as , with a zero real component and a unit imaginary component. In " https://en.wikipedia.org/wiki/Porism,"A porism is a mathematical proposition or corollary. It has been used to refer to a direct consequence of a proof, analogous to how a corollary refers to a direct consequence of a theorem. In modern usage, it is a relationship that holds for an infinite range of values but only if a certain condition is assumed, such as Steiner's porism. The term originates from three books of Euclid that have been lost. A proposition may not have been proven, so a porism may not be a theorem or true. Origins The book that talks about porisms first is Euclid's Porisms. What is known of it is in Pappus of Alexandria's Collection, who mentions it along with other geometrical treatises, and gives several lemmas necessary for understanding it. Pappus states: The porisms of all classes are neither theorems nor problems, but occupy a position intermediate between the two, so that their enunciations can be stated either as theorems or problems, and consequently some geometers think that they are theorems, and others that they are problems, being guided solely by the form of the enunciation. But it is clear from the definitions that the old geometers understood better the difference between the three classes. The older geometers regarded a theorem as directed to proving what is proposed, a problem as directed to constructing what is proposed, and finally a porism as directed to finding what is proposed (). Pappus said that the last definition was changed by certain later geometers, who defined a porism as an accidental characteristic as (to leîpon hypothései topikoû theōrḗmatos), that which falls short of a locus-theorem by a (or in its) hypothesis. Proclus pointed out that the word porism was used in two senses: one sense is that of ""corollary"", as a result unsought but seen to follow from a theorem. In the other sense, he added nothing to the definition of ""the older geometers"", except to say that the finding of the center of a circle and the finding of the greatest common measure are " https://en.wikipedia.org/wiki/List%20of%20tallest%20people,"This is a list of the tallest people, verified by Guinness World Records or other reliable sources. According to the Guinness World Records, the tallest human in recorded history was Robert Wadlow of the United States (1918–1940), who was . He received media attention in 1939 when he was measured to be the tallest man in the world, beating John Rogan's record, after reaching a height of . There are reports about even taller people but most of such claims are unverified or erroneous. Since antiquity, it has been reported about the finds of gigantic human skeletons. Originally thought to belong to mythical giants, these bones were later identified as the exaggerated remains of prehistoric animals, usually whales or elephants. 
Regular reports in American newspapers in the 18th and 19th centuries of giant human skeletons may have inspired the case of the ""petrified"" Cardiff Giant, a famous archaeological hoax. Men Women Disputed and unverified claims Tallest in various sports Tallest living people from various nations See also Giant Gigantism Giant human skeletons Goliath Human height Sotos syndrome List of tallest players in National Basketball Association history List of heaviest people List of the verified shortest people List of people with dwarfism" https://en.wikipedia.org/wiki/Sampling%20%28signal%20processing%29,"In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of ""samples"". A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values. A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a type of low-pass filter called a reconstruction filter. Theory Functions of space, time, or any other dimension can be sampled, and similarly in two or more dimensions. For functions that vary with time, let S(t) be a continuous function (or ""signal"") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds, which is called the sampling interval or sampling period. Then the sampled function is given by the sequence: S(nT), for integer values of n. The sampling frequency or sampling rate, fs, is the number of samples divided by the length of the interval over which they occur, thus fs = 1/T, with the unit samples per second, sometimes referred to as hertz; for example, 48 kHz is 48,000 samples per second. Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal low-pass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant (T), the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That math" https://en.wikipedia.org/wiki/List%20of%20computer%20graphics%20and%20descriptive%20geometry%20topics,"This is a list of computer graphics and descriptive geometry topics, by article name. 
2D computer graphics 2D geometric model 3D computer graphics 3D projection Alpha compositing Anisotropic filtering Anti-aliasing Axis-aligned bounding box Axonometric projection Bézier curve Bézier surface Bicubic interpolation Bilinear interpolation Binary space partitioning Bitmap graphics editor Bounding volume Bresenham's line algorithm Bump mapping Collision detection Color space Colour banding Computational geometry Computer animation Computer-generated art Computer painting Convex hull Curvilinear perspective Cylindrical perspective Data compression Digital raster graphic Dimetric projection Distance fog Dithering Elevation Engineering drawing Flat shading Flood fill Geometric model Geometric primitive Global illumination Gouraud shading Graphical projection Graphics suite Heightfield Hidden face removal Hidden line removal High-dynamic-range rendering Isometric projection Lathe (graphics) Line drawing algorithm Linear perspective Mesh generation Motion blur Orthographic projection Orthographic projection (geometry) Orthogonal projection Perspective (graphical) Phong reflection model Phong shading Pixel shaders Polygon (computer graphics) Procedural surface Projection Projective geometry Quadtree Radiosity Raster graphics Raytracing Rendering (computer graphics) Reverse perspective Scan line rendering Scrolling Technical drawing Texture mapping Trimetric projection Vanishing point Vector graphics Vector graphics editor Vertex shaders Volume rendering Voxel See also List of geometry topics List of graphical methods Computing-related lists Mathematics-related lists" https://en.wikipedia.org/wiki/SAT%20Subject%20Test%20in%20Biology%20E/M,"The SAT Subject Test in Biology was the name of a one-hour multiple choice test given on biology by the College Board. A student chose whether to take the test depending upon college entrance requirements for the schools in which the student is planning to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all SAT subject tests, the Biology E/M test was the only SAT II that allowed the test taker a choice between the ecological or molecular tests. A set of 60 questions was taken by all test takers for Biology and a choice of 20 questions was allowed between either the E or M tests. This test was graded on a scale between 200 and 800. The average for Molecular is 630 while Ecological is 591. On January 19 2021, the College Board discontinued all SAT Subject tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done as a response to changes in college admissions due to the impact of the COVID-19 pandemic on education. Format This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions. The questions covered a broad range of topics in general biology. There were more specific questions related respectively on ecological concepts (such as population studies and general Ecology) on the E test and molecular concepts such as DNA structure, translation, and biochemistry on the M test. 
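As a minimal sketch of the raw-scoring rule just described (the helper name below is made up, and the published conversion from raw score to the 200–800 scale is not reproduced here):

def raw_score(correct: int, incorrect: int) -> float:
    # +1 point per correct answer, -1/4 point per incorrect answer;
    # questions left blank contribute nothing.
    return correct - 0.25 * incorrect

print(raw_score(correct=60, incorrect=12))  # 57.0, with the remaining 8 of 80 questions left blank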
Preparation The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a" https://en.wikipedia.org/wiki/Micro-bursting%20%28networking%29,"In computer networking, micro-bursting is a behavior seen on fast packet-switched networks, where rapid bursts of data packets are sent in quick succession, leading to periods of full line-rate transmission that can overflow packet buffers of the network stack, both in network endpoints and routers and switches inside the network. It can be mitigated by the network scheduler. In particular, micro-bursting is often caused by the use of TCP on such a network. See also Head-of-line blocking TCP pacing" https://en.wikipedia.org/wiki/Sysload%20Software,"Sysload Software, was a computer software company specializing in systems measurement, performance and capacity management solutions for servers and data centers, based in Créteil, France. It has been acquired in September 2009 by ORSYP, a computer software company specialist in workload scheduling and IT Operations Management, based in La Défense, France. History Sysload was created in 1999 as a result of the split of Groupe Loan System into two distinct entities: Loan Solutions, a developer of financial software and Sysload Software, a developer of performance management and monitoring software. As of March 31, 2022, all Sysload products are in end of life. Products The following products are developed by Sysload: SP Analyst Is a performance and diagnostic solution for physical and virtual servers. It is a productivity tool destined to IT teams to diagnose performance problems and manage server resource capacity. SP Monitor A monitoring solution for incident management and IT service availability. It aims at providing real-time management of IT infrastructure events while correlating them to business processes. SP Monitor receives and stores event data, makes correlations and groups them within customizable views which can be accessed via an ordinary web browser. SP Portal A capacity and performance reporting solution for servers and data centers to allow IT managers analyze server resource allocation within information systems. Sysload products are based on a 3-tiered (user interfaces, management modules and collection and analysis modules) architecture metric collection technology that provides detailed information on large and complex environments. Sysload software products are available for various virtualized and physical platforms including: VMware, Windows, AIX, HP-UX, Solaris, Linux, IBM i, PowerVM, etc." https://en.wikipedia.org/wiki/Computer%20compatibility,"A family of computer models is said to be compatible if certain software that runs on one of the models can also be run on all other models of the family. The computer models may differ in performance, reliability or some other characteristic. These differences may affect the outcome of the running of the software. Software compatibility Software compatibility can refer to the compatibility that a particular software has running on a particular CPU architecture such as Intel or PowerPC. Software compatibility can also refer to ability for the software to run on a particular operating system. Very rarely is a compiled software compatible with multiple different CPU architectures. Normally, an application is compiled for different CPU architectures and operating systems to allow it to be compatible with the different system. 
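For example, an interpreted Python script of the following kind runs unchanged wherever a suitable interpreter exists, and can simply report at run time which CPU architecture and operating system it happens to be executing on (a minimal sketch of the portability point discussed here):

import platform

# Report the CPU architecture and operating system the interpreter is running on.
print(platform.machine())         # e.g. 'x86_64' or 'arm64'
print(platform.system())          # e.g. 'Linux', 'Darwin' (macOS) or 'Windows'
print(platform.python_version())  # version of the interpreter itself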
Interpreted software, on the other hand, can normally run on many different CPU architectures and operating systems if the interpreter is available for the architecture or operating system. Software incompatibility occurs many times for new software released for a newer version of an operating system which is incompatible with the older version of the operating system because it may miss some of the features and functionality that the software depends on. Hardware compatibility Hardware compatibility can refer to the compatibility of computer hardware components with a particular CPU architecture, bus, motherboard or operating system. Hardware that is compatible may not always run at its highest stated performance, but it can nevertheless work with legacy components. An example is RAM chips, some of which can run at a lower (or sometimes higher) clock rate than rated. Hardware that was designed for one operating system may not work for another, if device or kernel drivers are unavailable. As an example, much of the hardware for macOS is proprietary hardware with drivers unavailable for use in operating systems such as Linux. Free and open-sou" https://en.wikipedia.org/wiki/Downsampling%20%28signal%20processing%29,"In digital signal processing, downsampling, compression, and decimation are terms associated with the process of resampling in a multi-rate digital signal processing system. Both downsampling and decimation can be synonymous with compression, or they can describe an entire process of bandwidth reduction (filtering) and sample-rate reduction. When the process is performed on a sequence of samples of a signal or a continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate (or density, as in the case of a photograph). Decimation is a term that historically means the removal of every tenth one. But in signal processing, decimation by a factor of 10 actually means keeping only every tenth sample. This factor multiplies the sampling interval or, equivalently, divides the sampling rate. For example, if compact disc audio at 44,100 samples/second is decimated by a factor of 5/4, the resulting sample rate is 35,280. A system component that performs decimation is called a decimator. Decimation by an integer factor is also called compression. Downsampling by an integer factor Rate reduction by an integer factor M can be explained as a two-step process, with an equivalent implementation that is more efficient: Reduce high-frequency signal components with a digital lowpass filter. Decimate the filtered signal by M; that is, keep only every Mth sample. Step 2 alone creates undesirable aliasing (i.e. high-frequency signal components will copy into the lower frequency band and be mistaken for lower frequencies). Step 1, when necessary, suppresses aliasing to an acceptable level. In this application, the filter is called an anti-aliasing filter, and its design is discussed below. Also see undersampling for information about decimating bandpass functions and signals. When the anti-aliasing filter is an IIR design, it relies on feedback from output to input, prior to the second step. With FIR filtering, " https://en.wikipedia.org/wiki/List%20of%20taxa%20with%20candidatus%20status,"This is a list of taxa with Candidatus status. For taxa not covered by this list, see also: the GenBank taxonomy for ""effective"" names as published; the Candidatus Lists and LPSN for latinate names, some sanitized to match the Code. Phyla ""Ca. 
Absconditabacteria"" (previously Candidate phylum SR1) ABY1 aka OD1-ABY1, subgroup of OD1 (""Ca. Parcubacteria"") Candidate phylum AC1 ""Ca. Acetothermia"" (previously Candidate phylum OP1) ""Ca. Aerophobetes"" (previously Candidate phylum CD12 or BHI80-139) ""Ca. Aminicenantes"" (previously Candidate phylum OP8) aquifer1 aquifer2 ""Ca. Berkelbacteria"" (previously Candidate phylum ACD58) BRC1 CAB-I ""Ca. Calescamantes"" (previously Candidate phylum EM19) Candidate phylum CPR2 Candidate phylum CPR3 Candidate phylum NC10 Candidate phylum OP2 Candidate phylum RF3 Candidate phylum SAM Candidate phylum SPAM Candidate phylum TG2 Candidate phylum VC2 Candidate phylum WS1 Candidate phylum WS2 Candidate phylum WS4 Candidate phylum WYO CKC4 ""Ca. Cloacimonetes"" (previously Candidate phylum WWE1) CPR1 ""Ca. Dependentiae"" (previously Candidate phylum TM6) EM 3 ""Ca. Endomicrobia"" Stingl et al. 2005 ""Ca. Fermentibacteria"" (Hyd24-12) ""Ca. Fervidibacteria"" (previously Candidate phylum OctSpa1-106) GAL08 GAL15 GN01 GN03 GN04 GN05 GN06 GN07 GN08 GN09 GN10 GN11 GN12 GN13 GN14 GN15 GOUTA4 ""Ca. Gracilibacteria"" (previously Candidate phylum GN02, BD1-5, or BD1-5 group) Guaymas1 ""Ca. Hydrogenedentes"" (previously Candidate phylum NKB19) JL-ETNP-Z39 ""Ca. Katanobacteria"" (previously Candidate phylum WWE3) Kazan-3B-09 KD3-62 kpj58rc KSA1 KSA2 KSB1 KSB2 KSB4 ""Ca. Latescibacteria"" (previously Candidate phylum WS3) LCP-89 LD1-PA38 ""Ca. Marinamargulisbacteria"" (previously Candidate division ZB3) ""Ca. Marinimicrobia"" (previously Marine Group A or Candidate phylum SAR406) ""Ca. Melainabacteria"" ""Ca. Microgenomates"" (previously Candidate phylum OP11) ""Ca. Modulibacteria"" (previously Candidate phylum K" https://en.wikipedia.org/wiki/Newman%E2%80%93Penrose%20formalism,"The Newman–Penrose (NP) formalism is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism, where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors—two real, and a complex-conjugate pair. The two real members often asymptotically point radially inward and radially outward, and the formalism is well adapted to treatment of the propagation of radiation in curved spacetime. The Weyl scalars, derived from the Weyl tensor, are often used. In particular, it can be shown that one of these scalars— in the appropriate frame—encodes the outgoing gravitational radiation of an asymptotically flat system. Newman and Penrose introduced the following functions as primary quantities using this tetrad: Twelve complex spin coefficients (in three groups) which describe the change in the tetrad from point to point: . Five complex functions encoding Weyl tensors in the tetrad basis: . Ten functions encoding Ricci tensors in the tetrad basis: (real); (complex). In many situations—especially algebraically special spacetimes or vacuum spacetimes—the Newman–Penrose formalism simplifies dramatically, as many of the functions go to zero. 
This simplification allows for various theorems to be proven more easily than using the standard form of Einstein's equations. In this article, we will only employ the tensorial rather than spinorial version of NP formalism, because the former is easier to understand and more popular in relevant papers. One can refe" https://en.wikipedia.org/wiki/Well-defined%20expression,"In mathematics, a well-defined expression or unambiguous expression is an expression whose definition assigns it a unique interpretation or value. Otherwise, the expression is said to be not well defined, ill defined or ambiguous. A function is well defined if it gives the same result when the representation of the input is changed without changing the value of the input. For instance, if takes real numbers as input, and if does not equal then is not well defined (and thus not a function). The term well defined can also be used to indicate that a logical expression is unambiguous or uncontradictory. A function that is not well defined is not the same as a function that is undefined. For example, if , then even though is undefined does not mean that the function is not well defined – but simply that 0 is not in the domain of . Example Let be sets, let and ""define"" as if and if . Then is well defined if . For example, if and , then would be well defined and equal to . However, if , then would not be well defined because is ""ambiguous"" for . For example, if and , then would have to be both 0 and 1, which makes it ambiguous. As a result, the latter is not well defined and thus not a function. ""Definition"" as anticipation of definition In order to avoid the quotation marks around ""define"" in the previous simple example, the ""definition"" of could be broken down into two simple logical steps: While the definition in step 1 is formulated with the freedom of any definition and is certainly effective (without the need to classify it as ""well defined""), the assertion in step 2 has to be proved. That is, is a function if and only if , in which case – as a function – is well defined. On the other hand, if , then for an , we would have that and , which makes the binary relation not functional (as defined in Binary relation#Special types of binary relations) and thus not well defined as a function. Colloquially, the ""function"" is also called ambiguo" https://en.wikipedia.org/wiki/Killough%20platform,"A Killough platform is a three-wheel drive system that uses traditional wheels to achieve omni-directional movement without the use of omni-directional wheels (such as omni wheels/Mecanum wheels). Designed by Stephen Killough, after which the platform is named, with help from Francois Pin, wanted to achieve omni-directional movement without using the complicated six motor arrangement required to achieve a controllable three caster wheel system (one motor to control wheel rotation and one motor to control pivoting of the wheel). He first looked into solutions by other inventors that used rollers on the rims larger wheels but considered them flawed in some critical way. This led to the Killough system: With Francois Pin, who helped with the computer control and choreography aspects of the design, Killough and Pin readied a public demonstration in 1994. This led to a partnership with Cybertrax Innovative Technologies in 1996 which was developing a motorized wheelchair. 
By combining the motion of two wheels, the vehicle can move in the direction of the perpendicular wheel, or by rotating all the wheels in the same direction the vehicle can rotate in place. By using the resultant motion of the vector addition of the wheels a Killough platform is able to achieve omni-directional motion." https://en.wikipedia.org/wiki/List%20of%20formulae%20involving%20%CF%80,"The following is a list of significant formulae involving the mathematical constant π. Many of these formulae can be found in the article Pi, or the article Approximations of π. Euclidean geometry where is the circumference of a circle, is the diameter, and is the radius. More generally, where and are, respectively, the perimeter and the width of any curve of constant width. where is the area of a circle. More generally, where is the area enclosed by an ellipse with semi-major axis and semi-minor axis . where is the area between the witch of Agnesi and its asymptotic line; is the radius of the defining circle. where is the area of a squircle with minor radius , is the gamma function and is the arithmetic–geometric mean. where is the area of an epicycloid with the smaller circle of radius and the larger circle of radius (), assuming the initial point lies on the larger circle. where is the area of a rose with angular frequency () and amplitude . where is the perimeter of the lemniscate of Bernoulli with focal distance . where is the volume of a sphere and is the radius. where is the surface area of a sphere and is the radius. where is the hypervolume of a 3-sphere and is the radius. where is the surface volume of a 3-sphere and is the radius. Regular convex polygons Sum of internal angles of a regular convex polygon with sides: Area of a regular convex polygon with sides and side length : Inradius of a regular convex polygon with sides and side length : Circumradius of a regular convex polygon with sides and side length : Physics The cosmological constant: Heisenberg's uncertainty principle: Einstein's field equation of general relativity: Coulomb's law for the electric force in vacuum: Magnetic permeability of free space: Approximate period of a simple pendulum with small amplitude: Exact period of a simple pendulum with amplitude ( is the arithmetic–geometric mean): Kepler's third law of planetary motion" https://en.wikipedia.org/wiki/List%20of%20mathematics%20reference%20tables,"See also: List of reference tables Mathematics List of mathematical topics List of statistical topics List of mathematical functions List of mathematical theorems List of mathematical proofs List of matrices List of numbers List of relativistic equations List of small groups Mathematical constants Sporadic group Table of bases Table of Clebsch-Gordan coefficients Table of derivatives Table of divisors Table of integrals Table of mathematical symbols Table of prime factors Taylor series Timeline of mathematics Trigonometric identities Truth table Reference tables List" https://en.wikipedia.org/wiki/Zenzizenzizenzic,"Zenzizenzizenzic is an obsolete form of mathematical notation representing the eighth power of a number (that is, the zenzizenzizenzic of x is x^8), dating from a time when powers were written out in words rather than as superscript numbers. 
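A trivial Python check of the definition, confirming that squaring three times in succession gives the eighth power:

def zenzic(n):
    # Recorde's "zenzic": squaring.
    return n ** 2

x = 3
assert ((x ** 2) ** 2) ** 2 == x ** 8 == 6561
assert zenzic(zenzic(zenzic(x))) == x ** 8  # the zenzizenzizenzic of x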
This term was suggested by Robert Recorde, a 16th-century Welsh physician, mathematician and writer of popular mathematics textbooks, in his 1557 work The Whetstone of Witte (although his spelling was zenzizenzizenzike); he wrote that it ""doeth represent the square of squares squaredly"". History At the time Recorde proposed this notation, there was no easy way of denoting the powers of numbers other than squares and cubes. The root word for Recorde's notation is zenzic, which is a German spelling of the medieval Italian word , meaning 'squared'. Since the square of a square of a number is its fourth power, Recorde used the word zenzizenzic (spelled by him as zenzizenzike) to express it. Some of the terms had prior use in Latin , and . Similarly, as the sixth power of a number is equal to the square of its cube, Recorde used the word zenzicubike to express it; a more modern spelling, zenzicube, is found in Samuel Jeake's Arithmetick Surveighed and Reviewed. Finally, the word zenzizenzizenzic denotes the square of the square of a number's square, which is its eighth power: in modern notation, Samuel Jeake gives zenzizenzizenzizenzike (the square of the square of the square of the square, or 16th power) in a table in A Compleat Body of Arithmetick (1701): The word, as well as the system, is obsolete except as a curiosity; the Oxford English Dictionary (OED) has only one citation for it. As well as being a mathematical oddity, it survives as a linguistic oddity: zenzizenzizenzic has more Zs than any other word in the OED. Notation for other powers Recorde proposed three mathematical terms by which any power (that is, index or exponent) greater than 1 could be expressed: zenzic, i.e. squared; cubic; and sursolid, i.e. ra" https://en.wikipedia.org/wiki/Census%20of%20Coral%20Reefs,"The Census of Coral Reefs (CReefs) is a field project of the Census of Marine Life that surveys the biodiversity of coral reef ecosystems internationally. The project works to study what species live in coral reef ecosystems, to develop standardized protocols for studying coral reef ecosystems, and to increase access to and exchange of information about coral reefs scattered throughout the globe. The CReefs project uses the implementation of autonomous reef-monitoring structures (ARMS) to study the species that inhabit coral reefs. These structures are placed on the sea floor in areas where coral reefs exist, where they are left for one year. At the end of the year, the ARMvS is pulled to the surface, along with the species which have inhabited it, for analysis. Coral reefs are thought to be the most organically different of all marine ecosystems. Major declines in key reef ecosystems suggest a decline in reef population throughout the world due to environmental stresses. The vulnerability of coral reef ecosystems is expected to increase significantly in response to climate change. The reefs are also being threatened by induced coral bleaching, ocean acidification, sea level rise, and changing storm tracks. Reef biodiversity could be in danger of being lost before it is even documented, and researchers will be left with a limited and poor understanding of these complex ecosystems. In an attempt to enhance global understanding of reef biodiversity, the goals of the CReefs Census of Coral Reef Ecosystems were to conduct a diverse global census of coral reef ecosystems. And increase access to and exchange of coral reef data throughout the world. 
Because coral reefs are the most diverse and among the most threatened of all marine ecosystems, there is great justification to learn more about them." https://en.wikipedia.org/wiki/Turbo%20equalizer,"In digital communications, a turbo equalizer is a type of receiver used to receive a message corrupted by a communication channel with intersymbol interference (ISI). It approaches the performance of a maximum a posteriori (MAP) receiver via iterative message passing between a soft-in soft-out (SISO) equalizer and a SISO decoder. It is related to turbo codes in that a turbo equalizer may be considered a type of iterative decoder if the channel is viewed as a non-redundant convolutional code. The turbo equalizer is different from classic a turbo-like code, however, in that the 'channel code' adds no redundancy and therefore can only be used to remove non-gaussian noise. History Turbo codes were invented by Claude Berrou in 1990–1991. In 1993, turbo codes were introduced publicly via a paper listing authors Berrou, Glavieux, and Thitimajshima. In 1995 a novel extension of the turbo principle was applied to an equalizer by Douillard, Jézéquel, and Berrou. In particular, they formulated the ISI receiver problem as a turbo code decoding problem, where the channel is thought of as a rate 1 convolutional code and the error correction coding is the second code. In 1997, Glavieux, Laot, and Labat demonstrated that a linear equalizer could be used in a turbo equalizer framework. This discovery made turbo equalization computationally efficient enough to be applied to a wide range of applications. Overview Standard communication system overview Before discussing turbo equalizers, it is necessary to understand the basic receiver in the context of a communication system. This is the topic of this section. At the transmitter, information bits are encoded. Encoding adds redundancy by mapping the information bits to a longer bit vector – the code bit vector . The encoded bits are then interleaved. Interleaving permutes the order of the code bits resulting in bits . The main reason for doing this is to insulate the information bits from bursty noise. Next, the symbol " https://en.wikipedia.org/wiki/List%20of%20triangle%20topics,"This list of triangle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or in triangular arrays such as Pascal's triangle or triangular matrices, or concretely in physical space. It does not include metaphors like love triangle in which the word has no reference to the geometric shape. 
Geometry Triangle Acute and obtuse triangles Altern base Altitude (triangle) Area bisector of a triangle Angle bisector of a triangle Angle bisector theorem Apollonius point Apollonius' theorem Automedian triangle Barrow's inequality Barycentric coordinates (mathematics) Bernoulli's quadrisection problem Brocard circle Brocard points Brocard triangle Carnot's theorem (conics) Carnot's theorem (inradius, circumradius) Carnot's theorem (perpendiculars) Catalogue of Triangle Cubics Centroid Ceva's theorem Cevian Circumconic and inconic Circumscribed circle Clawson point Cleaver (geometry) Congruence (geometry) Congruent isoscelizers point Contact triangle Conway triangle notation CPCTC Delaunay triangulation de Longchamps point Desargues' theorem Droz-Farny line theorem Encyclopedia of Triangle Centers Equal incircles theorem Equal parallelians point Equidissection Equilateral triangle Euler's line Euler's theorem in geometry Erdős–Mordell inequality Exeter point Exterior angle theorem Fagnano's problem Fermat point Fermat's right triangle theorem Fuhrmann circle Fuhrmann triangle Geometric mean theorem GEOS circle Gergonne point Golden triangle (mathematics) Gossard perspector Hadley's theorem Hadwiger–Finsler inequality Heilbronn triangle problem Heptagonal triangle Heronian triangle Heron's formula Hofstadter points Hyperbolic triangle (non-Euclidean geometry) Hypotenuse Incircle and excircles of a triangle Inellipse Integer triangle Isodynamic point Isogonal conjugate Isoperimetric point Isoscel" https://en.wikipedia.org/wiki/Operations%20security,"Operations security (OPSEC) or operational security is a process that identifies critical information to determine whether friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information. The term ""operations security"" was coined by the United States military during the Vietnam War. History Vietnam In 1966, United States Admiral Ulysses Sharp established a multidisciplinary security team to investigate the failure of certain combat operations during the Vietnam War. This operation was dubbed Operation Purple Dragon, and included personnel from the National Security Agency and the Department of Defense. When the operation concluded, the Purple Dragon team codified their recommendations. They called the process ""Operations Security"" in order to distinguish the process from existing processes and ensure continued inter-agency support. NSDD 298 In 1988, President Ronald Reagan signed National Security Decision Directive (NSDD) 298. This document established the National Operations Security Program and named the Director of the National Security Agency as the executive agent for inter-agency OPSEC support. This document also established the Interagency OPSEC Support Staff (IOSS). Private-sector application The private sector has also adopted OPSEC as a defensive measure against competitive intelligence collection efforts. See also For Official Use Only – FOUO Information security Intelligence cycle security Security Security Culture Sensitive but unclassified – SBU Controlled Unclassified Information - CUI Social engineering" https://en.wikipedia.org/wiki/Contact%20region,"A Contact Region is a concept in robotics which describes the region between an object and a robot’s end effector. 
This is used in object manipulation planning and, with the addition of sensors built into the manipulation system, can be used to produce a surface map or contact model of the object being grasped. In Robotics For a robot to autonomously grasp an object, it is necessary for the robot to have an understanding of its own construction and movement capabilities (described through the math of inverse kinematics), and an understanding of the object to be grasped. The relationship between these two is described through a contact model, which is a set of the potential points of contact between the robot and the object being grasped. This, in turn, is used to create a more concrete mathematical representation of the grasp to be attempted, which can then be computed through path planning techniques and executed. In Mathematics Depending on the complexity of the end effector, or through the use of external sensors such as a lidar or depth camera, a more complex model of the planes involved in the object being grasped can be produced. In particular, sensors embedded in the fingertips of an end effector have been demonstrated to be an effective approach for producing a surface map from a given contact region. Through knowledge of the position of each individual finger, the location of the sensors in each finger, and the amount of force being exerted by the object onto each sensor, the points of contact can be calculated. These points of contact can then be turned into a three-dimensional ellipsoid, producing a surface map of the object. Applications In-hand manipulation is a typical use case. A robot hand interacts with static and deformable objects, described with soft-body dynamics. Sometimes, additional tools have to be controlled by the robot hand, for example a screwdriver. Such interaction produces a complex situation in which the robot hand has similar c" https://en.wikipedia.org/wiki/Transmission%20delay,"In a network based on packet switching, transmission delay (or store-and-forward delay, also known as packetization delay or serialization delay) is the amount of time required to push all the packet's bits into the wire. In other words, this is the delay caused by the data-rate of the link. Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. This delay is proportional to the packet's length in bits. It is given by the following formula: d_T = N / R seconds, where d_T is the transmission delay in seconds, N is the number of bits, and R is the rate of transmission (say in bits per second). Most packet switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet to the buffer and check it for CRC errors or other problems before sending the first bit of the packet into the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route. See also End-to-end delay Processing delay Queuing delay Propagation delay Network delay" https://en.wikipedia.org/wiki/Radio-frequency%20sweep,"Radio frequency sweep, frequency sweep, or RF sweep refers to scanning a radio frequency band to detect signals being transmitted there. A radio receiver with an adjustable receiving frequency is used to do this. A display shows the strength of the signals received at each frequency as the receiver's frequency is modified to sweep (scan) the desired frequency band. 
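A minimal Python sketch of how such a sweep result is typically presented, with received power converted to logarithmic units via dBm = 10·log10(power / 1 mW); the band, noise floor and carrier below are invented purely for illustration:

import numpy as np

freqs_mhz = np.linspace(88.0, 108.0, 201)                       # swept band (illustrative)
noise_mw = 1e-9 * np.ones_like(freqs_mhz)                       # roughly a -90 dBm noise floor
carrier_mw = 1e-6 * np.exp(-((freqs_mhz - 100.1) / 0.05) ** 2)  # one narrow transmission
power_dbm = 10 * np.log10(noise_mw + carrier_mw)                # measured power vs. frequency
peak = freqs_mhz[np.argmax(power_dbm)]
print(f"strongest signal near {peak:.2f} MHz at {power_dbm.max():.1f} dBm")

Plotting power_dbm against freqs_mhz reproduces the power-vs-frequency display described in the next paragraph; the logarithmic units let the weak noise floor and the strong carrier appear on the same axis.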
Methods and tools A spectrum analyzer is a standard instrument used for RF sweep. It includes an electronically tunable receiver and a display. The display presents measured power (y axis) vs frequency (x axis). The power spectrum display is a two-dimensional display of measured power vs. frequency. The power may be either in linear units, or logarithmic units (dBm). Usually the logarithmic display is more useful, because it presents a larger dynamic range with better detail at each value. An RF sweep relates to a receiver which changes its frequency of operation continuously from a minimum frequency to a maximum (or from maximum to minimum). Usually the sweep is performed at a fixed, controllable rate, for example 5 MHz/sec. Some systems use frequency hopping, switching from one frequency of operation to another. One method of CDMA uses frequency hopping. Usually frequency hopping is performed in a random or pseudo-random pattern. Applications Frequency sweeps may be used by regulatory agencies to monitor the radio spectrum, to ensure that users only transmit according to their licenses. The FCC for example controls and monitors the use of the spectrum in the U.S. In testing of new electronic devices, a frequency sweep may be done to measure the performance of electronic components or systems. For example, RF oscillators are measured for phase noise, harmonics and spurious signals; computers for consumer sale are tested to avoid radio frequency interference with radio systems. Portable sweep equipment may be used to detect some types of covert listening device (bugs). In professional audio, the " https://en.wikipedia.org/wiki/Computer%20engineering,"Computer engineering (CoE or CpE) is a branch of electronic engineering and computer science that integrates several fields of computer science and electronic engineering required to develop computer hardware and software. Computer engineering is referred to as computer science and engineering at some universities. Computer engineers require training in electronic engineering, computer science, hardware-software integration, software design, and software engineering. It uses the techniques and principles of electrical engineering and computer science, and can encompass areas such as artificial intelligence (AI), robotics, computer networks, computer architecture and operating systems. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also on how to integrate them into the larger picture. Robotics are one of the applications of computer engineering. Computer engineering usually deals with areas including writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed signal circuit boards, and designing operating systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors. In many institutions of higher learning, computer engineering students are allowed to choose areas of in-depth study in their junior and senior year because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. 
Other institutions may require engineering students to complete one or two years of general engineering before declaring computer engineering as their primary focus. History Comp" https://en.wikipedia.org/wiki/Dowker%E2%80%93Thistlethwaite%20notation,"In the mathematical field of knot theory, the Dowker–Thistlethwaite (DT) notation or code, for a knot is a sequence of even integers. The notation is named after Clifford Hugh Dowker and Morwen Thistlethwaite, who refined a notation originally due to Peter Guthrie Tait. Definition To generate the Dowker–Thistlethwaite notation, traverse the knot using an arbitrary starting point and direction. Label each of the n crossings with the numbers 1, ..., 2n in order of traversal (each crossing is visited and labelled twice), with the following modification: if the label is an even number and the strand followed crosses over at the crossing, then change the sign on the label to be a negative. When finished, each crossing will be labelled a pair of integers, one even and one odd. The Dowker–Thistlethwaite notation is the sequence of even integer labels associated with the labels 1, 3, ..., 2n − 1 in turn. Example For example, a knot diagram may have crossings labelled with the pairs (1, 6) (3, −12) (5, 2) (7, 8) (9, −4) and (11, −10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6 −12 2 8 −4 −10. Uniqueness and counting Dowker and Thistlethwaite have proved that the notation specifies prime knots uniquely, up to reflection. In the more general case, a knot can be recovered from a Dowker–Thistlethwaite sequence, but the recovered knot may differ from the original by either being a reflection or by having any connected sum component reflected in the line between its entry/exit points – the Dowker–Thistlethwaite notation is unchanged by these reflections. Knots tabulations typically consider only prime knots and disregard chirality, so this ambiguity does not affect the tabulation. The ménage problem, posed by Tait, concerns counting the number of different number sequences possible in this notation. See also Alexander–Briggs notation Conway notation Gauss notation" https://en.wikipedia.org/wiki/CyTOF,"Cytometry by time of flight, or CyTOF, is an application of mass cytometry used to quantify labeled targets on the surface and interior of single cells. CyTOF allows the quantification of multiple cellular components simultaneously using an ICP-MS detector. CyTOF takes advantage of immunolabeling to quantify proteins, carbohydrates or lipids in a cell. Targets are selected to answer a specific research question and are labeled with lanthanide metal tagged antibodies. Labeled cells are nebulized and mixed with heated argon gas to dry the cell containing particles. The sample-gas mixture is focused and ignited with an argon plasma torch.  This breaks the cells into their individual atoms and creates an ion cloud. Abundant low weight ions generated from environmental air and biological molecules are removed using a quadrupole mass analyzer. The remaining heavy ions from the antibody tags are quantified by Time-of-flight mass spectrometry. Ion abundances correlate with the amount of target per cell and can be used to infer cellular qualities. Mass spectrometry's sensitivity to detect different ions allows measurements of upwards of 50 targets per cell while avoiding issues with spectral overlap seen when using fluorescent probes. However, this sensitivity also means trace heavy metal contamination is a concern. 
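Returning to the Dowker–Thistlethwaite labelling example given earlier in this section, a short Python sketch (the helper name dt_code is hypothetical) reads the even-integer sequence off the labelled crossing pairs:

def dt_code(crossing_pairs):
    # Pair each odd label with its (possibly negated) even partner, then list
    # the even labels in the order of the odd labels 1, 3, 5, ...
    pairing = {}
    for a, b in crossing_pairs:
        odd, even = (a, b) if a % 2 else (b, a)
        pairing[odd] = even
    return [pairing[odd] for odd in sorted(pairing)]

print(dt_code([(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]))
# -> [6, -12, 2, 8, -4, -10], matching the sequence quoted in that example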
Using large numbers of probes creates new problems in analyzing the high dimensional data generated. History In 1994 Tsutomu Nomizu and colleagues at Nagoya University performed the first mass spectrometry experiments of single cells. Nomizu realized that single cells could be nebulized, dried, and ignited in plasma to generate clouds of ions which could be detected by emission spectrometry. In this type of experiment elements such as calcium within the cell could be quantified. Inspired by Flow cytometry, in 2007 Scott D. Tanner built upon this ICP-MS with the first multiplexed assay using lanthanide metals to label DNA and cell surface markers. In 2008 Tann" https://en.wikipedia.org/wiki/Organism,"An organism () is any biological living system that functions as an individual life form. All organisms are composed of cells. The idea of organism is based on the concept of minimal functional unit of life. Three traits have been proposed to play the main role in qualification as an organism: noncompartmentability – structure that cannot be divided without its functionality loss, individuality – the entity has simultaneous holding of genetic uniqueness, genetic homogeneity and autonomy, distinctness – genetic information has to maintain open-system (a cell). Organisms include multicellular animals, plants, and fungi; or unicellular microorganisms such as protists, bacteria, and archaea. All types of organisms are capable of reproduction, growth and development, maintenance, and some degree of response to stimuli. Most multicellular organisms differentiate into specialized tissues and organs during their development. In 2016, a set of 355 genes from the last universal common ancestor (LUCA) of all organisms from Earth was identified. Etymology The term ""organism"" (from Greek ὀργανισμός, organismos, from ὄργανον, organon, i.e. ""instrument, implement, tool, organ of sense or apprehension"") first appeared in the English language in 1703 and took on its current definition by 1834 (Oxford English Dictionary). It is directly related to the term ""organization"". There is a long tradition of defining organisms as self-organizing beings, going back at least to Immanuel Kant's 1790 Critique of Judgment. Definitions An organism may be defined as an assembly of molecules functioning as a more or less stable whole that exhibits the properties of life. Dictionary definitions can be broad, using phrases such as ""any living structure, such as a plant, animal, fungus or bacterium, capable of growth and reproduction"". Many definitions exclude viruses and possible synthetic non-organic life forms, as viruses are dependent on the biochemical machinery of a host cell for repr" https://en.wikipedia.org/wiki/Transparent%20heating%20film,"Transparent heating film, also called transparent heating plastic or heating transparent polymer film is a thin and flexible polymer film with a conductive optical coating. Transparent heating films may be rated at 2.5kW/m at voltages below 48 volts direct current (VDC). This allows heating with secure transformers delivering voltages which will not hurt the human body. Transparent conductive polymer films may be used for heating transparent glasses. A combination with transparent SMD electronic for multipurpose applications, is also possible. It is also a variant of carbon heating film. See also Optical coating Heating film" https://en.wikipedia.org/wiki/List%20of%20works%20by%20Nicolas%20Minorsky,"List of works by Nicolas Minorsky. 
Books Papers Conferences Patents" https://en.wikipedia.org/wiki/Mathematical%20Models%20%28Cundy%20and%20Rollett%29,"Mathematical Models is a book on the construction of physical models of mathematical objects for educational purposes. It was written by Martyn Cundy and A. P. Rollett, and published by the Clarendon Press in 1951, with a second edition in 1961. Tarquin Publications published a third edition in 1981. The vertex configuration of a uniform polyhedron, a generalization of the Schläfli symbol that describes the pattern of polygons surrounding each vertex, was devised in this book as a way to name the Archimedean solids, and has sometimes been called the Cundy–Rollett symbol as a nod to this origin. Topics The first edition of the book had five chapters, including its introduction which discusses model-making in general and the different media and tools with which one can construct models. The media used for the constructions described in the book include ""paper, cardboard, plywood, plastics, wire, string, and sheet metal"". The second chapter concerns plane geometry, and includes material on the golden ratio, the Pythagorean theorem, dissection problems, the mathematics of paper folding, tessellations, and plane curves, which are constructed by stitching, by graphical methods, and by mechanical devices. The third chapter, and the largest part of the book, concerns polyhedron models, made from cardboard or plexiglass. It includes information about the Platonic solids, Archimedean solids, their stellations and duals, uniform polyhedron compounds, and deltahedra. The fourth chapter is on additional topics in solid geometry and curved surfaces, particularly quadrics but also including topological manifolds such as the torus, Möbius strip and Klein bottle, and physical models helping to visualize the map coloring problem on these surfaces. Also included are sphere packings. The models in this chapter are constructed as the boundaries of solid objects, via two-dimensional paper cross-sections, and by string figures. The fifth chapter, and the final one of the first editi" https://en.wikipedia.org/wiki/PCMOS,"Probabilistic complementary metal-oxide semiconductor (PCMOS) is a semiconductor manufacturing technology invented by Pr. Krishna Palem of Rice University and Director of NTU's Institute for Sustainable Nanoelectronics (ISNE). The technology hopes to compete against current CMOS technology. Proponents claim it uses one thirtieth as much electricity while running seven times faster than the current fastest technology. PCMOS-based system on a chip architectures were shown to be gains that are as high as a substantial multiplicative factor of 560 when compared to a competing energy-efficient CMOS based realization on applications based on probabilistic algorithms such as hyper-encryption, bayesian networks, random neural networks and probabilistic cellular automata." https://en.wikipedia.org/wiki/Computer%20data%20storage,"Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. 
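The hierarchy can be sketched with rough, order-of-magnitude figures; the capacities and latencies below are illustrative ballpark assumptions, not measurements of any particular machine:

# Smaller and faster near the CPU; larger, slower and cheaper per byte further away.
hierarchy = [
    ("CPU registers",     "~1 KB",    "< 1 ns"),
    ("CPU caches",        "KB-MB",    "~1-40 ns"),
    ("Main memory (RAM)", "GB",       "~100 ns"),
    ("Solid-state drive", "up to TB", "~0.1 ms"),
    ("Hard disk drive",   "TB",       "~10 ms"),
]
for level, capacity, latency in hierarchy:
    print(f"{level:18} {capacity:9} {latency}")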
Generally, the fast technologies are referred to as ""memory"", while slower persistent technologies are referred to as ""storage"". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Functionality Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann" https://en.wikipedia.org/wiki/Biospeleology,"Biospeleology, also known as cave biology, is a branch of biology dedicated to the study of organisms that live in caves and are collectively referred to as troglofauna. Biospeleology as a science History The first documented mention of a cave organisms dates back to 1689, with the documentation of the olm, a cave salamander. Discovered in a cave in Slovenia, in the region of Carniola, it was mistaken for a baby dragon and was recorded by Johann Weikhard von Valvasor in his work The Glory of the Duchy of Carniola. The first formal study on cave organisms was conducted on the blind cave beetle. Found in 1831 by Luka Čeč, an assistant to the lamplighter, when exploring the newly discovered inner portions of the Postojna cave system in southwestern Slovenia. The specimen was turned over to Ferdinand J. Schmidt, who described it in the paper Illyrisches Blatt (1832). He named it Leptodirus Hochenwartii after the donor, and also gave it the Slovene name drobnovratnik and the German name Enghalskäfer, both meaning ""slender-necked (beetle)"". The article represents the first formal description of a cave animal (the olm, described in 1768, wasn't recognized as a cave animal at the time). Subsequent research by Schmidt revealed further previously unknown cave inhabitants, which aroused considerable interest among natural historians. For this reason, the discovery of L. hochenwartii (along with the olm) is considered as the starting point of biospeleology as a scientific discipline. Biospeleology was formalized as a science in 1907 by Emil Racoviță with his seminal work Essai sur les problèmes biospéologiques (""Essay on biospeleological problems”). Subdivisions Organisms Categories Cave organisms fall into three basic classes: Troglobite Troglobites are obligatory cavernicoles, specialized for cave life. Some can leave caves for short periods, and may complete parts of their life cycles above ground, but cannot live their entire lives outside of a cave environment. 
Examp" https://en.wikipedia.org/wiki/Linear%20canonical%20transformation,"In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(R) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space. The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name ""linear canonical transformation"" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group. The basic properties of the transformations mentioned above, such as scaling, shift, coordinate multiplication are considered. Any linear canonical transformation is related to affine transformations in phase space, defined by time-frequency or position-momentum coordinates. Definition The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix with ad − bc = 1, the corresponding integral transform from a function to is defined as Special cases Many classical transforms are special cases of the linear canonical transform: Scaling Scaling, , corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks): Fourier transform The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix Fractional Fourier transform The fractional Fourier transform " https://en.wikipedia.org/wiki/Phi%20Tau%20Sigma,"Phi Tau Sigma () is the Honor Society for food science and technology. The organization was founded in at the University of Massachusetts Amherst by Dr. Gideon E. (Guy) Livingston, a food technology professor. It was incorporated under the General Laws of the Commonwealth of Massachusetts , as ""Phi Tau Sigma Honorary Society, Inc."" Greek letters designation Why the choice of to designate the Honor Society? Some have speculated or assumed that the Greek letters correspond to the initials of ""Food Technology Society"". However very recent research by Mary K. Schmidl, making use of documents retrieved from the Oregon State University archives by Robert McGorrin, including the 1958 Constitution, has elucidated the real basis of the choice. The 1958 Constitution is headed with three Greek words ""ΦΙΛΕΙΝ ΤΡΟΦΗΣ ΣΠΟΥΔΗΝ"" under which are the English words ""Devotion to the Study of Foods"". With the assistance of Petros Taoukis, the Greek words are translated as follows: ΦΙΛΕΙΝ: Love or devotion (pronounced Philleen, accent on the last syllable) ΤΡΟΦΗΣ:of Food (pronounced Trophees, accent on the last syllable) ΣΠΟΥΔΗΝ: Study (pronounced Spootheen, accent on the last syllable - th as in the word “the” or “this” not like in the word “thesis”). represent the initials of those three Greek words. Charter Members Besides Livingston, the charter members of the Honor Society were M.P. Baldorf, Robert V. Decareau, E. 
Felicotti, W.D. Powrie, M.A. Steinberg, and D.E. Westcott. Purposes To recognize and honor professional achievements of Food Scientists and Technologists, To encourage the application of fundamental scientific principles to Food Science and Technology in each of its branches, To stimulate the exchange of scientific knowledge through meetings, lectures, and publications, To establish and maintain a network of like-minded professionals, and To promote exclusively charitable, scientific, literary and educational programs. Members Phi Tau Sigma has (currentl" https://en.wikipedia.org/wiki/Apostolos%20Doxiadis,"Apostolos K. Doxiadis (; born 1953) is a Greek writer. He is best known for his international bestsellers Uncle Petros and Goldbach's Conjecture (2000) and Logicomix (2009). Early life Doxiadis was born in Australia, where his father, the architect Constantinos Apostolou Doxiadis was working. Soon after his birth, the family returned to Athens, where Doxiadis grew up. Though his earliest interests were in poetry, fiction and the theatre, an intense interest in mathematics led Doxiadis to leave school at age fifteen, to attend Columbia University, in New York, from which he obtained a bachelor's degree in mathematics. He then attended the École Pratique des Hautes Études in Paris from which he got a master's degree, with a thesis on the mathematical modelling of the nervous system. His father's death and family reasons made him return to Greece in 1975, interrupting his graduate studies. In Greece, although involved for some years with the computer software industry, Doxiadis returned to his childhood and adolescence loves of theatre and the cinema, before becoming a full-time writer. Work Fiction in Greek Doxiadis began to write in Greek. His first published work was A Parallel Life (Βίος Παράλληλος, 1985), a novella set in the monastic communities of 4th-century CE Egypt. His first novel, Makavettas (Μακαβέττας, 1988), recounted the adventures of a fictional power-hungry colonel at the time of the Greek military junta of 1967–1974. Written in a tongue-in-cheek imitation of Greek folk military memoirs, such as that of Yannis Makriyannis, it follows the plot of Shakespeare's Macbeth, of which the eponymous hero's name is a Hellenized form. Doxiadis next novel, Uncle Petros and Goldbach's Conjecture (Ο Θείος Πέτρος και η Εικασία του Γκόλντμπαχ, 1992), was the first long work of fiction whose plot takes place in the world of pure mathematics research. The first Greek critics did not find the mathematical themes appealing, and it received mediocre reviews, unlike Dox" https://en.wikipedia.org/wiki/Brownout%20%28software%20engineering%29,"Brownout in software engineering is a technique that involves disabling certain features of an application. Description Brownout is used to increase the robustness of an application to computing capacity shortage. If too many users are simultaneously accessing an application hosted online, the underlying computing infrastructure may become overloaded, rendering the application unresponsive. Users are likely to abandon the application and switch to competing alternatives, hence incurring long-term revenue loss. To better deal with such a situation, the application can be given brownout capabilities: The application will disable certain features – e.g., an online shop will no longer display recommendations of related products – to avoid overload. 
Although reducing features generally has a negative impact on the short-term revenue of the application owner, long-term revenue loss can be avoided. The technique is inspired by brownouts in power grids, which consist of reducing the power grid's voltage when electricity demand exceeds production. Some consumers, such as incandescent light bulbs, will dim – hence originating the term – and draw less power, thus helping match demand with production. Similarly, a brownout application helps match its computing capacity requirements to what is available on the target infrastructure. Brownout complements elasticity. The former can help the application withstand short-term capacity shortage, but does so without changing the capacity available to the application. In contrast, elasticity consists of adding (or removing) capacity to the application, preferably in advance, so as to avoid capacity shortage altogether. The two techniques can be combined; e.g., brownout is triggered when the number of users increases unexpectedly until elasticity can be triggered, the latter usually requiring minutes to show an effect. Brownout is relatively non-intrusive for the developer; for example, it can be implemented as an advice in asp" https://en.wikipedia.org/wiki/Network%20processor,"A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain. Network processors are typically software programmable devices and would have generic characteristics similar to general purpose central processing units that are commonly used in many different types of equipment and products. History of development In modern telecommunications networks, information (voice, video, data) is transferred as packet data (termed packet switching) which is in contrast to older telecommunications networks that carried information as analog signals such as in the public switched telephone network (PSTN) or analog TV/Radio networks. The processing of these packets has resulted in the creation of integrated circuits (IC) that are optimised to deal with this form of packet data. Network processors have specific features or architectures that are provided to enhance and optimise packet processing within these networks. Network processors have evolved into ICs with specific functions. This evolution has resulted in more complex and more flexible ICs being created. The newer circuits are programmable and thus allow a single hardware IC design to undertake a number of different functions, where the appropriate software is installed. Network processors are used in the manufacture of many different types of network equipment such as: Routers, software routers and switches (Inter-network processors) Firewalls Session border controllers Intrusion detection devices Intrusion prevention devices Network monitoring systems Network security (secure cryptoprocessors) Reconfigurable Match-Tables Reconfigurable Match-Tables were introduced in 2013 to allow switches to operate at high speeds while maintaining flexibility when it comes to the network protocols running on them, or the processing done to them. P4 is used to program the chips. The company Barefoot Networks was based around these processors and was later" https://en.wikipedia.org/wiki/Die%20shrink,"The term die shrink (sometimes optical shrink or process shrink) refers to the scaling of metal–oxide–semiconductor (MOS) devices. 
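The brownout passage above describes disabling optional features (such as a shop's recommendation box) when load approaches the infrastructure's capacity. The Python sketch below is a hedged illustration of that control idea only; the threshold, the request handler, and the recommendation helper are invented for the example and are not taken from any brownout framework mentioned in the text.

```python
# Minimal brownout-style controller: serve the optional feature only while the
# measured load stays below a threshold. All names and numbers are illustrative.
import random
import time

CAPACITY_THRESHOLD = 0.8          # fraction of capacity at which we start shedding

def current_utilization():
    """Stand-in for a real load probe (CPU, queue length, response time, ...)."""
    return random.random()

def recommendations():
    return ["related product A", "related product B"]   # the optional feature

def handle_request(product):
    page = {"product": product}
    if current_utilization() < CAPACITY_THRESHOLD:
        page["recommendations"] = recommendations()     # full experience
    # else: brownout - the page is served without the optional block
    return page

for _ in range(3):
    print(handle_request("kettle"))
    time.sleep(0.01)
```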
The act of shrinking a die creates a largely identical circuit using a more advanced fabrication process, usually involving an advance of lithographic nodes. This reduces overall costs for a chip company, as the absence of major architectural changes to the processor lowers research and development costs while at the same time allowing more processor dies to be manufactured on the same piece of silicon wafer, resulting in a lower cost per product sold. Die shrinks are the key to lower prices and higher performance at semiconductor companies such as Samsung, Intel, TSMC, and SK Hynix, and fabless manufacturers such as AMD (including the former ATI), NVIDIA and MediaTek. Details Examples in the 2000s include the downscaling of the PlayStation 2's Emotion Engine processor from Sony and Toshiba (from 180 nm CMOS in 2000 to 90 nm CMOS in 2003), the codenamed Cedar Mill Pentium 4 processors (from 90 nm CMOS to 65 nm CMOS) and Penryn Core 2 processors (from 65 nm CMOS to 45 nm CMOS), the codenamed Brisbane Athlon 64 X2 processors (from 90 nm SOI to 65 nm SOI), various generations of GPUs from both ATI and NVIDIA, and various generations of RAM and flash memory chips from Samsung, Toshiba and SK Hynix. In January 2010, Intel released Clarkdale Core i5 and Core i7 processors fabricated with a 32 nm process, down from a previous 45 nm process used in older iterations of the Nehalem processor microarchitecture. Intel, in particular, formerly focused on leveraging die shrinks to improve product performance at a regular cadence through its Tick-Tock model. In this business model, every new microarchitecture (tock) is followed by a die shrink (tick) to improve performance with the same microarchitecture. Die shrinks are beneficial to end-users as shrinking a die reduces the current used by each transistor switching on or off in semiconductor device" https://en.wikipedia.org/wiki/Continuum%20%28measurement%29,"Continuum (plural: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states. In physics In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often termed either continuous (with energy at all wavelengths) or discrete (energy at only certain wavelengths). In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts. In mathematics and philosophy A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's. Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's, in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: ""It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. 
It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis"". In social sciences, psychology and psychiatry In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those lev" https://en.wikipedia.org/wiki/Corollary,"In mathematics and logic, a corollary ( , ) is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; it might also be used more casually to refer to something which naturally or incidentally accompanies something else (e.g., violence as a corollary of revolutionary social changes). Overview In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A, if B can be readily deduced from A or is self-evident from its proof. In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident in some occasions (e.g., the Pythagorean theorem as a corollary of law of cosines). Peirce's theory of deductive reasoning Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic. He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction: ""it is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case"" while in theorematic deduction: ""It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to t" https://en.wikipedia.org/wiki/Komornik%E2%80%93Loreti%20constant,"In the mathematical theory of non-standard positional numeral systems, the Komornik–Loreti constant is a mathematical constant that represents the smallest base q for which the number 1 has a unique representation, called its q-development. The constant is named after Vilmos Komornik and Paola Loreti, who defined it in 1998. Definition Given a real number q > 1, the series is called the q-expansion, or -expansion, of the positive real number x if, for all , , where is the floor function and need not be an integer. Any real number such that has such an expansion, as can be found using the greedy algorithm. The special case of , , and or is sometimes called a -development. gives the only 2-development. However, for almost all , there are an infinite number of different -developments. Even more surprisingly though, there exist exceptional for which there exists only a single -development. Furthermore, there is a smallest number known as the Komornik–Loreti constant for which there exists a unique -development. 
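The Komornik–Loreti entry above defines the constant as the smallest base q in which 1 has a unique q-development; its Value section, which continues below, characterizes it through the Thue–Morse sequence. Assuming the standard form of that relation, the sum over k ≥ 1 of t_k q^(-k) equal to 1 (the formula itself appears to have been lost in extraction), the Python sketch below estimates the constant by bisection; the truncation depth and bracketing interval are arbitrary illustrative choices.

```python
# Estimate the Komornik-Loreti constant from the Thue-Morse characterization
# sum_{k>=1} t_k * q**(-k) = 1, where t_k is the Thue-Morse sequence.
# Truncation depth and bracket are arbitrary illustrative choices.

def thue_morse(k):
    """t_k = parity of the number of 1 bits in the binary representation of k."""
    return bin(k).count("1") % 2

def series(q, terms=200):
    return sum(thue_morse(k) * q ** (-k) for k in range(1, terms + 1))

def komornik_loreti(lo=1.5, hi=2.0, iterations=60):
    # series(q) decreases as q grows, so bisect for the q where it equals 1
    for _ in range(iterations):
        mid = (lo + hi) / 2.0
        if series(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(komornik_loreti())   # approximately 1.7872...
```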
Value The Komornik–Loreti constant is the value such that where is the Thue–Morse sequence, i.e., is the parity of the number of 1's in the binary representation of . It has approximate value The constant is also the unique positive real root of This constant is transcendental. See also Euler–Mascheroni constant Fibonacci word Golay–Rudin–Shapiro sequence Prouhet–Thue–Morse constant" https://en.wikipedia.org/wiki/Eigenmoments,"EigenMoments are a set of moments that are orthogonal, noise-robust, invariant to rotation, scaling, and translation, and sensitive to the distribution of the data. Their application can be found in signal processing and computer vision as descriptors of the signal or image. The descriptors can later be used for classification purposes. They are obtained by performing orthogonalization, via eigen analysis, on geometric moments. Framework summary EigenMoments are computed by performing eigen analysis on the moment space of an image by maximizing the signal-to-noise ratio in the feature space in the form of a Rayleigh quotient. This approach has several benefits in image processing applications: Dependency of moments in the moment space on the distribution of the images being transformed ensures decorrelation of the final feature space after eigen analysis on the moment space. The ability of EigenMoments to take into account the distribution of the image makes it more versatile and adaptable for different genres. Generated moment kernels are orthogonal and therefore analysis on the moment space becomes easier. Transformation with orthogonal moment kernels into moment space is analogous to projection of the image onto a number of orthogonal axes. Noisy components can be removed. This makes EigenMoments robust for classification applications. Optimal information compaction can be obtained and therefore only a few moments are needed to characterize the images. Problem formulation Assume that a signal vector is taken from a certain distribution having correlation , i.e. where E[.] denotes expected value. The dimension of the signal space, n, is often too large to be useful for practical applications such as pattern classification, so we need to transform the signal space into a space with lower dimensionality. This is performed by a two-step linear transformation: where is the transformed signal, a fixed transformation matrix which transforms the signal into the moment space, and the transformation matrix 
The last universal common ancestor (LUCA) was an organism which had ribosomes and the genetic code; it lived some 4 billion years ago. It gave rise to two main branches of prokaryotic life, the bacteria and the archaea. From among these small-celled, rapidly dividing ancestors arose the eukaryotes, with much larger cells, nuclei, and distinctive biochemistry. The eukaryotes form a domain that contains all complex cells and most types of multicellular organism, including the animals, plants, and fungi. Symbiogenesis According to the theory of symbiogenesis (also known as the endosymbiotic theory) championed by Lynn Margulis, a member of the archaea gained a bacterial cell as a component. The archaeal cell was a member of the Asgard group. The bacterium was one of the Alphaproteobacteria, which had the ability to use oxygen in its respiration. This enabled it – and the archaeal cells that" https://en.wikipedia.org/wiki/Kleptotype,"In taxonomy, a kleptotype is an unofficial term referring to a stolen, unrightfully displaced type specimen or part of a type specimen. Etymology The term is composed of klepto-, from the Ancient Greek (kléptō) meaning ""to steal"", and -type referring to type specimens. It translates to ""stolen type"". History During the Second World War, biological collections such as the herbarium in Berlin were destroyed. This led to the loss of type specimens. In some cases only kleptotypes survived the destruction, as the type material had been removed from their original collections. For instance, the type of Taxus celebica was thought to be destroyed during the Second World War, but a kleptotype survived the war in Stockholm. Kleptotypes have been taken by researchers, who subsequently added their unauthorised type duplicates to their own collections. Consequences Taking kleptotypes has been criticised as destructive, wasteful, and unethical. The displacement of type material complicates the work of taxonomists, as species identities may become ambiguous due to the missing type material. It can cause problems, as researchers have to search in multiple collections to get a complete perspective on the displaced material. To combat this issue it has been proposed to weigh specimens before loaning types, and to identify loss of material by comparing the type's weight upon return. Also, in some herbaria, such as the herbarium at Kew, specimens are glued to the herbarium sheets to hinder the removal of plant material. However, this also makes it difficult to handle the specimens. Rules concerning type specimens The International Code of Nomenclature for algae, fungi, and plants (ICN) does not explicitly prohibit the removal of material from type specimens; however, it strongly recommends conserving type specimens properly. It is paramount that types remain intact, as they are an irreplaceable resource and point of reference." https://en.wikipedia.org/wiki/Power%20management%20integrated%20circuit,"Power management integrated circuits (power management ICs or PMICs or PMU as unit) are integrated circuits for power management. Although PMIC refers to a wide range of chips (or modules in system-on-a-chip devices), most include several DC/DC converters or their control part. A PMIC is often included in battery-operated devices (such as mobile phones and portable media players) and embedded devices (such as routers) to decrease the amount of space required. 
Overview The term PMIC refers to a class of integrated circuits that perform various functions related to power requirements. A PMIC may have one or more of the following functions: DC to DC conversion Battery charging Power-source selection Voltage scaling Power sequencing Miscellaneous functions Power management ICs are solid state devices that control the flow and direction of electrical power. Many electrical devices use multiple internal voltages (e.g., 5 V, 3.3 V, 1.8 V, etc.) and sources of external power (e.g., wall outlet, battery, etc.), meaning that the power design of the device has multiple requirements for operation. A PMIC can refer to any chip that implements a single power-related function, but the term generally refers to ICs that incorporate more than one function, such as different power conversions and power controls like voltage supervision and undervoltage protection. By incorporating these functions into one IC, a number of improvements to the overall design can be made, such as better conversion efficiency, smaller solution size, and better heat dissipation. Features A PMIC may include battery management, voltage regulation, and charging functions. It may include a DC to DC converter to allow dynamic voltage scaling. Some models are known to feature up to 95% power conversion efficiency. Some models integrate with dynamic frequency scaling in a combination known as DVFS (dynamic voltage and frequency scaling). It may be manufactured using a BiCMOS process and may come in a QFN package. Some mod" https://en.wikipedia.org/wiki/Thermal%20runaway,"Thermal runaway describes a process that is accelerated by increased temperature, in turn releasing energy that further increases temperature. Thermal runaway occurs in situations where an increase in temperature changes the conditions in a way that causes a further increase in temperature, often leading to a destructive result. It is a kind of uncontrolled positive feedback. In chemistry (and chemical engineering), thermal runaway is associated with strongly exothermic reactions that are accelerated by temperature rise. In electrical engineering, thermal runaway is typically associated with increased current flow and power dissipation. Thermal runaway can occur in civil engineering, notably when the heat released by large amounts of curing concrete is not controlled. In astrophysics, runaway nuclear fusion reactions in stars can lead to nova and several types of supernova explosions, and also occur as a less dramatic event in the normal evolution of solar-mass stars, the ""helium flash"". Chemical engineering Chemical reactions involving thermal runaway are also called thermal explosions in chemical engineering, or runaway reactions in organic chemistry. It is a process by which an exothermic reaction goes out of control: the reaction rate increases due to an increase in temperature, causing a further increase in temperature and hence a further rapid increase in the reaction rate. This has contributed to industrial chemical accidents, most notably the 1947 Texas City disaster from overheated ammonium nitrate in a ship's hold, and the 1976 explosion of zoalene, in a drier, at King's Lynn. Frank–Kamenetskii theory provides a simplified analytical model for thermal explosion. Chain branching is an additional positive feedback mechanism which may also cause temperature to skyrocket because of rapidly increasing reaction rate. Chemical reactions are either endothermic or exothermic, as expressed by their change in enthalpy. 
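The thermal runaway discussion above describes the feedback loop in which a temperature rise speeds up an exothermic reaction, which in turn releases more heat. The Python sketch below is a deliberately crude, hedged illustration of that loop only, pitting a heat-release term that roughly doubles per 10 K against cooling that grows linearly; every constant is an arbitrary choice for demonstration and is not drawn from the entry or from Frank–Kamenetskii theory.

```python
# Toy thermal-runaway loop: heat release that roughly doubles per 10 K rise,
# against cooling that grows only linearly with temperature.
# Every number here is an arbitrary illustrative choice, not real reaction data.

def simulate(cooling_coeff, t_ambient=300.0, steps=5000, dt=0.01):
    temperature = t_ambient
    for _ in range(steps):
        heat_release = 10.0 * 2.0 ** ((temperature - t_ambient) / 10.0)
        heat_loss = cooling_coeff * (temperature - t_ambient)
        temperature += dt * (heat_release - heat_loss)
        if temperature > 1000.0:
            return "thermal runaway"
    return f"stabilised near {temperature:.1f} K"

print(simulate(cooling_coeff=50.0))  # cooling keeps up: temperature stabilises
print(simulate(cooling_coeff=0.5))   # cooling cannot keep up: runaway
```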
Many reactions are highly exothermic, so ma" https://en.wikipedia.org/wiki/List%20of%20periodic%20functions,"This is a list of some well-known periodic functions. The constant function , where is independent of , is periodic with any period, but lacks a fundamental period. A definition is given for some of the following functions, though each function may have many equivalent definitions. Smooth functions All trigonometric functions listed have period , unless otherwise stated. For the following trigonometric functions: is the th up/down number, is the th Bernoulli number in Jacobi elliptic functions, Non-smooth functions The following functions have period and take as their argument. The symbol is the floor function of and is the sign function. K means Elliptic integral K(m) Vector-valued functions Epitrochoid Epicycloid (special case of the epitrochoid) Limaçon (special case of the epitrochoid) Hypotrochoid Hypocycloid (special case of the hypotrochoid) Spirograph (special case of the hypotrochoid) Doubly periodic functions Jacobi's elliptic functions Weierstrass's elliptic function Notes Mathematics-related lists Types of functions" https://en.wikipedia.org/wiki/Dry%20basis,"Dry basis is an expression of the calculation in chemistry, chemical engineering and related subjects, in which the presence of water (H2O) (and/or other solvents) is neglected for the purposes of the calculation. Water (and/or other solvents) is neglected because addition and removal of water (and/or other solvents) are common processing steps, and also happen naturally through evaporation and condensation; it is frequently useful to express compositions on a dry basis to remove these effects. Example An aqueous solution containing 2 g of glucose and 2 g of fructose per 100 g of solution contains 2/100=2% glucose on a wet basis, but 2/4=50% glucose on a dry basis. If the solution had contained 2 g of glucose and 3 g of fructose, it would still have contained 2% glucose on a wet basis, but only 2/5=40% glucose on a dry basis. Frequently concentrations are calculated to a dry basis using the moisture (water) content : In the example above the glucose concentration is 2% as is and the moisture content is 96%." https://en.wikipedia.org/wiki/Commensalism,"Commensalism is a long-term biological interaction (symbiosis) in which members of one species gain benefits while those of the other species neither benefit nor are harmed. This is in contrast with mutualism, in which both organisms benefit from each other; amensalism, where one is harmed while the other is unaffected; and parasitism, where one is harmed and the other benefits. The commensal (the species that benefits from the association) may obtain nutrients, shelter, support, or locomotion from the host species, which is substantially unaffected. The commensal relation is often between a larger host and a smaller commensal; the host organism is unmodified, whereas the commensal species may show great structural adaptation consistent with its habits, as in the remoras that ride attached to sharks and other fishes. Remoras feed on their hosts' fecal matter, while pilot fish feed on the leftovers of their hosts' meals. Numerous birds perch on bodies of large mammal herbivores or feed on the insects turned up by grazing mammals. 
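The Dry basis entry above works through wet-basis versus dry-basis percentages by hand. The short sketch below redoes that arithmetic in Python; the function names are invented for the illustration and are not part of any standard library.

```python
# Illustrative dry-basis arithmetic (names are ad hoc, not a standard API).

def wet_basis_percent(component_g, total_g):
    """Ordinary ('as is') concentration, water included."""
    return 100.0 * component_g / total_g

def dry_basis_percent(component_g, total_g, water_g):
    """Concentration of a component with the water excluded from the total."""
    return 100.0 * component_g / (total_g - water_g)

# 100 g of solution holding 2 g glucose and 2 g fructose -> 96 g water.
total, glucose, fructose = 100.0, 2.0, 2.0
water = total - glucose - fructose

print(wet_basis_percent(glucose, total))         # 2.0  (% on a wet basis)
print(dry_basis_percent(glucose, total, water))  # 50.0 (% on a dry basis)

# With 2 g glucose and 3 g fructose instead, the wet-basis figure stays 2%
# but the dry-basis figure drops to 2/5 = 40%, as in the entry's second case.
```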
Etymology The word ""commensalism"" is derived from the word ""commensal"", meaning ""eating at the same table"" in human social interaction, which in turn comes through French from the Medieval Latin commensalis, meaning ""sharing a table"", from the prefix com-, meaning ""together"", and mensa, meaning ""table"" or ""meal"". Commensality, at the Universities of Oxford and Cambridge, refers to professors eating at the same table as students (as they live in the same ""college""). Pierre-Joseph van Beneden introduced the term ""commensalism"" in 1876. Examples of commensal relationships The commensal pathway was traveled by animals that fed on refuse around human habitats or by animals that preyed on other animals drawn to human camps. Those animals established a commensal relationship with humans in which the animals benefited but the humans received little benefit or harm. Those animals that were most capable of taking advantage of the resources associ" https://en.wikipedia.org/wiki/List%20of%20stochastic%20processes%20topics,"In the mathematics of probability, a stochastic process is a random function. In practical applications, the domain over which the function is defined is a time interval (time series) or a region of space (random field). Familiar examples of time series include stock market and exchange rate fluctuations, signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks. Examples of random fields include static images, random topographies (landscapes), or composition variations of an inhomogeneous material. Stochastic processes topics This list is currently incomplete. See also :Category:Stochastic processes Basic affine jump diffusion Bernoulli process: discrete-time processes with two possible states. Bernoulli schemes: discrete-time processes with N possible states; every stationary process in N outcomes is a Bernoulli scheme, and vice versa. Bessel process Birth–death process Branching process Branching random walk Brownian bridge Brownian motion Chinese restaurant process CIR process Continuous stochastic process Cox process Dirichlet processes Finite-dimensional distribution First passage time Galton–Watson process Gamma process Gaussian process – a process where all linear combinations of coordinates are normally distributed random variables. Gauss–Markov process (cf. below) GenI process Girsanov's theorem Hawkes process Homogeneous processes: processes where the domain has some symmetry and the finite-dimensional probability distributions also have that symmetry. Special cases include stationary processes, also called time-homogeneous. Karhunen–Loève theorem Lévy process Local time (mathematics) Loop-erased random walk Markov processes are those in which the future is conditionally independent of the past given the present. Markov chain Markov chain central limit theorem Conti" https://en.wikipedia.org/wiki/String%20art,"__notoc__ String art or pin and thread art, is characterized by an arrangement of colored thread strung between points to form geometric patterns or representational designs such as a ship's sails, sometimes with other artist material comprising the remainder of the work. Thread, wire, or string is wound around a grid of nails hammered into a velvet-covered wooden board. 
Though straight lines are formed by the string, the slightly different angles and metric positions at which strings intersect give the appearance of Bézier curves (as in the mathematical concept of envelope of a family of straight lines). Quadratic Bézier curves are obtained from strings based on two intersecting segments. Other forms of string art include Spirelli, which is used for cardmaking and scrapbooking, and curve stitching, in which string is stitched through holes. String art has its origins in the 'curve stitch' activities invented by Mary Everest Boole at the end of the 19th century to make mathematical ideas more accessible to children. It was popularised as a decorative craft in the late 1960s through kits and books. A computational form of string art that can produce photo-realistic artwork was introduced by Petros Vrellis, in 2016. Gallery See also Bézier curve Envelope (mathematics) N-connectedness" https://en.wikipedia.org/wiki/Without%20loss%20of%20generality,"Without loss of generality (often abbreviated to WOLOG, WLOG or w.l.o.g.; less commonly stated as without any loss of generality or with no loss of generality) is a frequently used expression in mathematics. The term is used to indicate that the assumption that follows is chosen arbitrarily, narrowing the premise to a particular case, but does not affect the validity of the proof in general. The other cases are sufficiently similar to the one presented that proving them follows by essentially the same logic. As a result, once a proof is given for the particular case, it is trivial to adapt it to prove the conclusion in all other cases. In many scenarios, the use of ""without loss of generality"" is made possible by the presence of symmetry. For example, if some property P(x,y) of real numbers is known to be symmetric in x and y, namely that P(x,y) is equivalent to P(y,x), then in proving that P(x,y) holds for every x and y, one may assume ""without loss of generality"" that x ≤ y. There is no loss of generality in this assumption, since once the case x ≤ y ⇒ P(x,y) has been proved, the other case follows by interchanging x and y: y ≤ x ⇒ P(y,x), and by symmetry of P, this implies P(x,y), thereby showing that P(x,y) holds for all cases. On the other hand, if neither such a symmetry nor another form of equivalence can be established, then the use of ""without loss of generality"" is incorrect and can amount to an instance of proof by example – a logical fallacy of proving a claim by proving a non-representative example. Example Consider the following theorem (which is a case of the pigeonhole principle): A proof: The above argument works because the exact same reasoning could be applied if the alternative assumption, namely, that the first object is blue, were made, or, similarly, that the words 'red' and 'blue' can be freely exchanged in the wording of the proof. As a result, the use of ""without loss of generality"" is valid in this case. See also Up to Mat" https://en.wikipedia.org/wiki/Neuromechanics,"Neuromechanics is an interdisciplinary field that combines biomechanics and neuroscience to understand how the nervous system interacts with the skeletal and muscular systems to enable animals to move. In a motor task, like reaching for an object, neural commands are sent to motor neurons to activate a set of muscles, called muscle synergies. Given which muscles are activated and how they are connected to the skeleton, there will be a corresponding and specific movement of the body. 
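The String art entry above notes that chords strung between two intersecting segments give the appearance of a quadratic Bézier curve, in the sense of the envelope of a family of straight lines. The Python sketch below is a minimal illustration of that construction under the assumption of evenly spaced pegs; it only generates chord endpoints (for plotting with any tool) and a point on the enveloped curve, and the helper names are made up for the example.

```python
# String-art chords between two segments meeting at a corner: the envelope of
# these chords is the quadratic Bezier curve whose middle control point is the
# corner. Illustrative sketch only; the point values are arbitrary.

def lerp(p, q, t):
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def string_art_chords(p0, p1, p2, n=12):
    """Chords from segment p0-p1 to segment p1-p2, pegged at equal spacings."""
    return [(lerp(p0, p1, k / n), lerp(p1, p2, k / n)) for k in range(n + 1)]

def bezier_point(p0, p1, p2, t):
    """Quadratic Bezier with control points p0, p1, p2 (the enveloped curve)."""
    a, b = lerp(p0, p1, t), lerp(p1, p2, t)
    return lerp(a, b, t)

chords = string_art_chords((0.0, 0.0), (0.0, 1.0), (1.0, 1.0))
print(len(chords), chords[0], chords[-1])
print(bezier_point((0.0, 0.0), (0.0, 1.0), (1.0, 1.0), 0.5))  # lies on the envelope
```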
In addition to participating in reflexes, neuromechanical process may also be shaped through motor adaptation and learning. Neuromechanics underlying behavior Walking The inverted pendulum theory of gait is a neuromechanical approach to understand how humans walk. As the name of the theory implies, a walking human is modeled as an inverted pendulum consisting of a center of mass (COM) suspended above the ground via a support leg (Fig. 2). As the inverted pendulum swings forward, ground reaction forces occur between the modeled leg and the ground. Importantly, the magnitude of the ground reaction forces depends on the COM position and size. The velocity vector of the center of mass is always perpendicular to the ground reaction force. Walking consists of alternating single-support and double-support phases. The single-support phase occurs when one leg is in contact with the ground while the double-support phase occurs when two legs are in contact with the ground. Neurological influences The inverted pendulum is stabilized by constant feedback from the brain and can operate even in the presence of sensory loss. In animals who have lost all sensory input to the moving limb, the variables produced by gait (center of mass acceleration, velocity of animal, and position of the animal) remain constant between both groups. During postural control, delayed feedback mechanisms are used in the temporal reproduction of task-level functions such as walking. The nervous system takes into a" https://en.wikipedia.org/wiki/Pectin,"Pectin ( : ""congealed"" and ""curdled"") is a heteropolysaccharide, a structural acid contained in the primary lamella, in the middle lamella, and in the cell walls of terrestrial plants. The principal, chemical component of pectin is galacturonic acid (a sugar acid derived from galactose) which was isolated and described by Henri Braconnot in 1825. Commercially produced pectin is a white-to-light-brown powder, produced from citrus fruits for use as an edible gelling agent, especially in jams and jellies, dessert fillings, medications, and sweets; and as a food stabiliser in fruit juices and milk drinks, and as a source of dietary fiber. Biology Pectin is composed of complex polysaccharides that are present in the primary cell walls of a plant, and are abundant in the green parts of terrestrial plants. Pectin is the principal component of the middle lamella, where it binds cells. Pectin is deposited by exocytosis into the cell wall via vesicles produced in the Golgi apparatus. The amount, structure and chemical composition of pectin is different among plants, within a plant over time, and in various parts of a plant. Pectin is an important cell wall polysaccharide that allows primary cell wall extension and plant growth. During fruit ripening, pectin is broken down by the enzymes pectinase and pectinesterase, in which process the fruit becomes softer as the middle lamellae break down and cells become separated from each other. A similar process of cell separation caused by the breakdown of pectin occurs in the abscission zone of the petioles of deciduous plants at leaf fall. Pectin is a natural part of the human diet, but does not contribute significantly to nutrition. The daily intake of pectin from fruits and vegetables can be estimated to be around 5 g if approximately 500 g of fruits and vegetables are consumed per day. In human digestion, pectin binds to cholesterol in the gastrointestinal tract and slows glucose absorption by trapping carbohydrates. 
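The neuromechanics passage above models stance as an inverted pendulum and states that the centre-of-mass velocity stays perpendicular to the ground reaction force. The short Python sketch below checks that geometric claim numerically for a rigid, massless-leg pendulum in which the ground reaction force is assumed to act along the leg (a modelling assumption of the simple pendulum picture, not a statement quoted from the entry); the leg length and angles are arbitrary illustrative values.

```python
# Inverted-pendulum stance model: the COM sits on a rigid leg of length L that
# pivots about the foot. With the ground reaction force taken to act along the
# leg, the COM velocity (tangent to the arc) should be perpendicular to it.
import math

L = 1.0  # leg length in metres (illustrative)

def com_position(theta):
    """COM for leg angle theta measured from vertical, foot at the origin."""
    return (L * math.sin(theta), L * math.cos(theta))

def com_velocity(theta, omega):
    """Time derivative of the position for angular velocity omega."""
    return (L * math.cos(theta) * omega, -L * math.sin(theta) * omega)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

for theta in (-0.3, 0.0, 0.2, 0.4):
    grf_direction = com_position(theta)              # along the leg, foot -> COM
    velocity = com_velocity(theta, omega=1.5)
    print(round(dot(grf_direction, velocity), 12))   # ~0: perpendicular
```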
Pectin is" https://en.wikipedia.org/wiki/Von%20Baer%27s%20laws%20%28embryology%29,"In developmental biology, von Baer's laws of embryology (or laws of development) are four rules proposed by Karl Ernst von Baer to explain the observed pattern of embryonic development in different species. von Baer formulated the laws in his book On the Developmental History of Animals (), published in 1828, while working at the University of Königsberg. He specifically intended to rebut Johann Friedrich Meckel's 1808 recapitulation theory. According to that theory, embryos pass through successive stages that represent the adult forms of less complex organisms in the course of development, and that ultimately reflects (the great chain of being). von Baer believed that such linear development is impossible. He posited that instead of linear progression, embryos started from one or a few basic forms that are similar in different animals, and then developed in a branching pattern into increasingly different organisms. Defending his ideas, he was also opposed to Charles Darwin's 1859 theory of common ancestry and descent with modification, and particularly to Ernst Haeckel's revised recapitulation theory with its slogan ""ontogeny recapitulates phylogeny"". Darwin was however broadly supportive of von Baer's view of the relationship between embryology and evolution. The laws Von Baer described his laws in his book Über Entwickelungsgeschichte der Thiere. Beobachtung und Reflexion published in 1828. They are a series of statements generally summarised into four points, as translated by Thomas Henry Huxley in his Scientific Memoirs: The more general characters of a large group appear earlier in the embryo than the more special characters. From the most general forms the less general are developed, and so on, until finally the most special arises. Every embryo of a given animal form, instead of passing through the other forms, rather becomes separated from them. The embryo of a higher form never resembles any other form, but only its embryo. Description Von Baer " https://en.wikipedia.org/wiki/Multiplex%20baseband,"In telecommunication, the term multiplex baseband has the following meanings: In frequency-division multiplexing, the frequency band occupied by the aggregate of the signals in the line interconnecting the multiplexing and radio or line equipment. In frequency division multiplexed carrier systems, at the input to any stage of frequency translation, the frequency band occupied. For example, the output of a group multiplexer consists of a band of frequencies from 60 kHz to 108 kHz. This is the group-level baseband that results from combining 12 voice-frequency input channels, having a bandwidth of 4 kHz each, including guard bands. In turn, 5 groups are multiplexed into a super group having a baseband of 312 kHz to 552 kHz. This baseband, however, does not represent a group-level baseband. Ten super groups are in turn multiplexed into one master group, the output of which is a baseband that may be used to modulate a microwave-frequency carrier. Multiplexing Signal processing" https://en.wikipedia.org/wiki/Intraspecific%20competition,"Intraspecific competition is an interaction in population ecology, whereby members of the same species compete for limited resources. This leads to a reduction in fitness for both individuals, but the more fit individual survives and is able to reproduce. By contrast, interspecific competition occurs when members of different species compete for a shared resource. 
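The Multiplex baseband entry above stacks 4 kHz voice channels into a 60–108 kHz group, five groups into a 312–552 kHz supergroup, and ten supergroups into a master group. The sketch below merely re-checks that channel-count arithmetic in Python; the helper names are invented for the illustration.

```python
# FDM band-stacking arithmetic from the multiplex-baseband example above.
# Function and variable names are illustrative only.

CHANNEL_BW_KHZ = 4          # one voice channel, guard band included

def band_width_khz(lo, hi):
    return hi - lo

group = (60, 108)           # group-level baseband
supergroup = (312, 552)     # supergroup baseband

channels_per_group = band_width_khz(*group) // CHANNEL_BW_KHZ
groups_per_supergroup = band_width_khz(*supergroup) // band_width_khz(*group)

print(channels_per_group)                           # 12 voice channels per group
print(groups_per_supergroup)                        # 5 groups per supergroup
print(groups_per_supergroup * channels_per_group)   # 60 voice channels per supergroup
# Ten such supergroups are then combined into one master group, per the entry.
```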
Members of the same species have rather similar requirements for resources, whereas different species have a smaller contested resource overlap, resulting in intraspecific competition generally being a stronger force than interspecific competition. Individuals can compete for food, water, space, light, mates, or any other resource which is required for survival or reproduction. The resource must be limited for competition to occur; if every member of the species can obtain a sufficient amount of every resource then individuals do not compete and the population grows exponentially. Prolonged exponential growth is rare in nature because resources are finite and so not every individual in a population can survive, leading to intraspecific competition for the scarce resources. When resources are limited, an increase in population size reduces the quantity of resources available for each individual, reducing the per capita fitness in the population. As a result, the growth rate of a population slows as intraspecific competition becomes more intense, making it a negatively density dependent process. The falling population growth rate as population increases can be modelled effectively with the logistic growth model. The rate of change of population density eventually falls to zero, the point ecologists have termed the carrying capacity (K). However, a population can only grow to a very limited number within an environment. The carrying capacity, defined by the variable k, of an environment is the maximum number of individuals or species an environment can sustain and support over a longer period of time. The r" https://en.wikipedia.org/wiki/Enriques%E2%80%93Kodaira%20classification,"In mathematics, the Enriques–Kodaira classification is a classification of compact complex surfaces into ten classes. For each of these classes, the surfaces in the class can be parametrized by a moduli space. For most of the classes the moduli spaces are well understood, but for the class of surfaces of general type the moduli spaces seem too complicated to describe explicitly, though some components are known. Max Noether began the systematic study of algebraic surfaces, and Guido Castelnuovo proved important parts of the classification. described the classification of complex projective surfaces. later extended the classification to include non-algebraic compact surfaces. The analogous classification of surfaces in positive characteristic was begun by and completed by ; it is similar to the characteristic 0 projective case, except that one also gets singular and supersingular Enriques surfaces in characteristic 2, and quasi-hyperelliptic surfaces in characteristics 2 and 3. Statement of the classification The Enriques–Kodaira classification of compact complex surfaces states that every nonsingular minimal compact complex surface is of exactly one of the 10 types listed on this page; in other words, it is one of the rational, ruled (genus > 0), type VII, K3, Enriques, Kodaira, toric, hyperelliptic, properly quasi-elliptic, or general type surfaces. For the 9 classes of surfaces other than general type, there is a fairly complete description of what all the surfaces look like (which for class VII depends on the global spherical shell conjecture, still unproved in 2009). For surfaces of general type not much is known about their explicit classification, though many examples have been found. 
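The intraspecific-competition passage above says that density-dependent slowing of growth is modelled effectively by logistic growth, with the rate of change falling to zero at the carrying capacity. As a hedged illustration of that model only (the parameter values are arbitrary and not taken from the entry), the Python sketch below integrates dN/dt = rN(1 − N/K) and shows the population levelling off at K.

```python
# Logistic growth dN/dt = r*N*(1 - N/K): growth slows as N approaches the
# carrying capacity K. Parameter values are arbitrary illustrative choices.

def logistic_growth(n0, r, k, years, dt=0.01):
    n = n0
    trajectory = [n]
    steps_per_year = int(1 / dt)
    for _ in range(years):
        for _ in range(steps_per_year):
            n += dt * r * n * (1.0 - n / k)
        trajectory.append(n)
    return trajectory

for n in logistic_growth(n0=10.0, r=0.8, k=500.0, years=15)[::3]:
    print(round(n, 1))   # rises quickly at first, then flattens out near K = 500
```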
The classification of algebraic surfaces in positive characteristic (, ) is similar to that of algebraic surfaces in characteristic 0, except that there are no Kodaira surfaces or surfaces of type VII, and there are some extra families of Enriques surfaces in characterist" https://en.wikipedia.org/wiki/Cytoplasmic%20hybrid,"A cytoplasmic hybrid (or cybrid, a portmanteau of the two words) is a eukaryotic cell line produced by the fusion of a whole cell with a cytoplast. Cytoplasts are enucleated cells. This enucleation can be effected by simultaneous application of centrifugal force and treatment of the cell with an agent that disrupts the cytoskeleton. A special case of cybrid formation involves the use of rho-zero cells as the whole cell partner in the fusion. Rho-zero cells are cells which have been depleted of their own mitochondrial DNA by prolonged incubation with ethidium bromide, a chemical which inhibits mitochondrial DNA replication. The rho-zero cells do retain mitochondria and can grow in rich culture medium with certain supplements. They do retain their own nuclear genome. A cybrid is then a hybrid cell which mixes the nuclear genes from one cell with the mitochondrial genes from another cell. This powerful tool makes it possible to dissociate the contribution of the mitochondrial genes from that of the nuclear genes. Cybrids are valuable in mitochondrial research and have been used to provide suggestive evidence of mitochondrial involvement in Alzheimer's disease, Parkinson's disease, and other conditions. Legal issues Research utilizing cybrid embryos has been hotly contested due to the ethical implications of further cybrid research. Recently, the House of Lords passed the Human Fertilisation and Embryology Act 2008, which allows the creation of mixed human-animal embryos for medical purposes only. Such cybrids are 99.9% human and 0.1% animal. A cybrid may be kept for a maximum of 14 days, owing to the development of the brain and spinal cord, after which time the cybrid must be destroyed. During the two-week period, stem cells may be harvested from the cybrid, for research or medical purposes. Under no circumstances may a cybrid be implanted into a human uterus." https://en.wikipedia.org/wiki/WSSUS%20model,"The WSSUS (Wide-Sense Stationary Uncorrelated Scattering) model provides a statistical description of the transmission behavior of wireless channels. ""Wide-sense stationarity"" means the second-order moments of the channel are stationary, which means that they depend only on the time difference, while ""uncorrelated scattering"" refers to the delay τ due to scatterers. Modelling of mobile channels as WSSUS (wide sense stationary uncorrelated scattering) has become popular among specialists. The model was introduced by Phillip A. Bello in 1963. A commonly used description of the time-variant channel applies the set of Bello functions and the theory of stochastic processes." https://en.wikipedia.org/wiki/Lebombo%20bone,"The Lebombo bone is a bone tool made of a baboon fibula with incised markings discovered in Border Cave in the Lebombo Mountains located between South Africa and Eswatini. Changes in the section of the notches indicate the use of different cutting edges, which the bone's discoverer, Peter Beaumont, views as evidence for their having been made, like other markings found all over the world, during participation in rituals. The bone is between 43,000 and 42,000 years old, according to 24 radiocarbon datings. 
This is far older than the Ishango bone with which it is sometimes confused. Other notched bones are 80,000 years old but it is unclear if the notches are merely decorative or if they bear a functional meaning. The bone has been conjectured to be a tally stick. According to The Universal Book of Mathematics the Lebombo bone's 29 notches suggest ""it may have been used as a lunar phase counter, in which case African women may have been the first mathematicians, because keeping track of menstrual cycles requires a lunar calendar"". However, the bone is broken at one end, so the 29 notches may or may not be the total number. In the cases of other notched bones since found globally, there has been no consistent notch tally, many being in the 1–10 range. See also History of mathematics Tally sticks" https://en.wikipedia.org/wiki/Location%20information%20server,"The location information server, or LIS is a network node originally defined in the National Emergency Number Association i2 network architecture that addresses the intermediate solution for providing e911 service for users of VoIP telephony. The LIS is the node that determines the location of the VoIP terminal. Beyond the NENA architecture and VoIP, the LIS is capable of providing location information to any IP device within its served access network. The role of the LIS Distributed systems for locating people and equipment will be at the heart of tomorrow's active offices. Computer and communications systems continue to proliferate in the office and home. Systems are varied and complex, involving wireless networks and mobile computers. However, systems are underused because the choices of control mechanisms and application interfaces are too diverse. It is therefore pertinent to consider which mechanisms might allow the user to manipulate systems in simple and ubiquitous ways, and how computers can be made more aware of the facilities in their surroundings. Knowledge of the location of people and equipment within an organization is such a mechanism. Annotating a resource database with location information allows location-based heuristics for control and interaction to be constructed. This approach is particularly attractive because location techniques can be devised that are physically unobtrusive and do not rely on explicit user action. The article describes the technology of a system for locating people and equipment, and the design of a distributed system service supporting access to that information. The application interfaces made possible by or that benefit from this facility are presented Location determination The method used to determine the location of a device in an access network varies between the different types of networks. For a wired network, such as Ethernet or DSL a wiremap method is common. In wiremap location determination, the locat" https://en.wikipedia.org/wiki/Link%20level,"For computer networking, Link level: In the hierarchical structure of a primary or secondary station, the conceptual level of control or data processing logic that controls the data link. Note: Link-level functions provide an interface between the station high-level logic and the data link. Link-level functions include (a) transmit bit injection and receive bit extraction, (b) address and control field interpretation, (c) command response generation, transmission and interpretation, and (d) frame check sequence computation and interpretation." 
https://en.wikipedia.org/wiki/Resource%20Location%20and%20Discovery%20Framing,"Resource Location and Discovery (RELOAD) is a peer-to-peer (P2P) signalling protocol for use on the Internet. A P2P signalling protocol provides its clients with an abstract storage and messaging service between a set of cooperating peers that form the overlay network. RELOAD is designed to support a peer-to-peer SIP network, but can be utilized by other applications with similar requirements by defining new usages that specify the kinds of data that must be stored for a particular application. RELOAD defines a security model based on a certificate enrollment service that provides unique identities. NAT traversal is a fundamental service of the protocol. RELOAD also allows access from ""client"" nodes that do not need to route traffic or store data for others." https://en.wikipedia.org/wiki/Ad%20hoc%20network,"An ad hoc network refers to technologies that allow network communications on an ad hoc basis. Associated technologies include: Wireless ad hoc network Mobile ad hoc network Vehicular ad hoc network Intelligent vehicular ad hoc network Protocols associated with ad hoc networking Ad hoc On-Demand Distance Vector Routing Ad Hoc Configuration Protocol Smart phone ad hoc network Ad hoc wireless distribution service" https://en.wikipedia.org/wiki/Quantum%20Aspects%20of%20Life,"Quantum Aspects of Life, a book published in 2008 with a foreword by Roger Penrose, explores the open question of the role of quantum mechanics at molecular scales of relevance to biology. The book contains chapters written by various world-experts from a 2003 symposium and includes two debates from 2003 to 2004; giving rise to a mix of both sceptical and sympathetic viewpoints. The book addresses questions of quantum physics, biophysics, nanoscience, quantum chemistry, mathematical biology, complexity theory, and philosophy that are inspired by the 1944 seminal book What Is Life? by Erwin Schrödinger. Contents Foreword by Roger Penrose Section 1: Emergence and Complexity Chapter 1: ""A Quantum Origin of Life?"" by Paul C. W. Davies Chapter 2: ""Quantum Mechanics and Emergence"" by Seth Lloyd Section 2: Quantum Mechanisms in Biology Chapter 3: ""Quantum Coherence and the Search for the First Replicator"" by Jim Al-Khalili and Johnjoe McFadden Chapter 4: ""Ultrafast Quantum Dynamics in Photosynthesis"" by Alexandra Olaya-Castro, Francesca Fassioli Olsen, Chiu Fan Lee, and Neil F. Johnson Chapter 5: ""Modeling Quantum Decoherence in Biomolecules"" by Jacques Bothma, Joel Gilmore, and Ross H. McKenzie Section 3: The Biological Evidence Chapter 6: ""Molecular Evolution: A Role for Quantum Mechanics in the Dynamics of Molecular Machines that Read and Write DNA"" by Anita Goel Chapter 7: ""Memory Depends on the Cytoskeleton, but is it Quantum?"" by Andreas Mershin and Dimitri V. Nanopoulos Chapter 8: ""Quantum Metabolism and Allometric Scaling Relations in Biology"" by Lloyd Demetrius Chapter 9: ""Spectroscopy of the Genetic Code"" by Jim D. Bashford and Peter D. Jarvis Chapter 10: ""Towards Understanding the Origin of Genetic Languages"" by Apoorva D. Patel Section 4: Artificial Quantum Life Chapter 11: ""Can Arbitrary Quantum Systems Undergo Self-Replication?"" by Arun K. Pati and Samuel L. Braunstein Chapter 12: ""A Semi-Quantum Version of the Game of Life"" by Adrian P. 
Flitne" https://en.wikipedia.org/wiki/Algorism,"Algorism is the technique of performing basic arithmetic by writing numbers in place value form and applying a set of memorized rules and facts to the digits. One who practices algorism is known as an algorist. This positional notation system has largely superseded earlier calculation systems that used a different set of symbols for each numerical magnitude, such as Roman numerals, and in some cases required a device such as an abacus. Etymology The word algorism comes from the name Al-Khwārizmī (c. 780–850), a Persian mathematician, astronomer, geographer and scholar in the House of Wisdom in Baghdad, whose name means ""the native of Khwarezm"", which is now in modern-day Uzbekistan. He wrote a treatise in Arabic language in the 9th century, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means ""Algoritmi on the numbers of the Indians"", where ""Algoritmi"" was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra. In late medieval Latin, algorismus, the corruption of his name, simply meant the ""decimal number system"" that is still the meaning of modern English algorism. During the 17th century, the French form for the word – but not its meaning – was changed to algorithm, following the model of the word logarithm, this form alluding to the ancient Greek . English adopted the French very soon afterwards, but it wasn't until the late 19th century that ""algorithm"" took on the meaning that it has in modern English. In English, it was first used about 1230 and then by Chaucer in 1391. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: which translates as: The word algorithm also derives from algorism, a generalization of the meaning to any set of rules specifying a computational procedure. Occasiona" https://en.wikipedia.org/wiki/Parameter%20space,"The parameter space is the space of possible parameter values that define a particular mathematical model, often a subset of finite-dimensional Euclidean space. Often the parameters are inputs of a function, in which case the technical term for the parameter space is domain of a function. The ranges of values of the parameters may form the axes of a plot, and particular outcomes of the model may be plotted against these axes to illustrate how different regions of the parameter space produce different types of behavior in the model. In statistics, parameter spaces are particularly useful for describing parametric families of probability distributions. They also form the background for parameter estimation. In the case of extremum estimators for parametric models, a certain objective function is maximized or minimized over the parameter space. Theorems of existence and consistency of such estimators require some assumptions about the topology of the parameter space. For instance, compactness of the parameter space, together with continuity of the objective function, suffices for the existence of an extremum estimator. Examples A simple model of health deterioration after developing lung cancer could include the two parameters gender and smoker/non-smoker, in which case the parameter space is the following set of four possibilities: . The logistic map has one parameter, r, which can take any positive value. 
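The Algorism entry above defines the technique as applying memorized digit rules to numbers written in place-value form. Purely as an illustration of that idea (nothing in the code is described in the entry itself), the Python sketch below adds two decimal numbers digit by digit with a carry, the schoolbook procedure an algorist would perform.

```python
# Schoolbook place-value addition: work right to left, one digit plus a carry
# at each position. Illustrative only; Python's own + is used just to check.

def add_place_value(a, b):
    xs, ys = [int(d) for d in str(a)][::-1], [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(xs), len(ys))):
        column = (xs[i] if i < len(xs) else 0) + (ys[i] if i < len(ys) else 0) + carry
        result.append(column % 10)   # digit written in this column
        carry = column // 10         # digit carried to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in result[::-1]))

print(add_place_value(478, 964))                # 1442
print(add_place_value(478, 964) == 478 + 964)   # True
```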
The parameter space is therefore positive real numbers. For some values of r, this function ends up cycling round a few values, or fixed on one value. These long-term values can be plotted against r in a bifurcation diagram to show the different behaviours of the function for different values of r. In a sine wave model the parameters are amplitude A > 0, angular frequency ω > 0, and phase φ ∈ S1. Thus the parameter space is In complex dynamics, the parameter space is the complex plane C = { z = x + y i : x, y ∈ R }, where i2 = −1. The famous Mandelbrot" https://en.wikipedia.org/wiki/Positive-real%20function,"Positive-real functions, often abbreviated to PR function or PRF, are a kind of mathematical function that first arose in electrical network synthesis. They are complex functions, Z(s), of a complex variable, s. A rational function is defined to have the PR property if it has a positive real part and is analytic in the right half of the complex plane and takes on real values on the real axis. In symbols the definition is, In electrical network analysis, Z(s) represents an impedance expression and s is the complex frequency variable, often expressed as its real and imaginary parts; in which terms the PR condition can be stated; The importance to network analysis of the PR condition lies in the realisability condition. Z(s) is realisable as a one-port rational impedance if and only if it meets the PR condition. Realisable in this sense means that the impedance can be constructed from a finite (hence rational) number of discrete ideal passive linear elements (resistors, inductors and capacitors in electrical terminology). Definition The term positive-real function was originally defined by Otto Brune to describe any function Z(s) which is rational (the quotient of two polynomials), is real when s is real has positive real part when s has a positive real part Many authors strictly adhere to this definition by explicitly requiring rationality, or by restricting attention to rational functions, at least in the first instance. However, a similar more general condition, not restricted to rational functions had earlier been considered by Cauer, and some authors ascribe the term positive-real to this type of condition, while others consider it to be a generalization of the basic definition. History The condition was first proposed by Wilhelm Cauer (1926) who determined that it was a necessary condition. Otto Brune (1931) coined the term positive-real for the condition and proved that it was both necessary and sufficient for realisability. Properties The sum of two " https://en.wikipedia.org/wiki/Outline%20of%20trigonometry,"The following outline is provided as an overview of and topical guide to trigonometry: Trigonometry – branch of mathematics that studies the relationships between the sides and the angles in triangles. Trigonometry defines the trigonometric functions, which describe those relationships and have applicability to cyclical phenomena, such as waves. Basics Geometry – mathematics concerned with questions of shape, size, the relative position of figures, and the properties of space. Geometry is used extensively in trigonometry. Angle – the angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane. 
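As a rough illustration of the parameter-space discussion above, the following sketch sweeps the logistic map's parameter r and collects the long-term values that a bifurcation diagram would plot against r; the transient and sample counts are arbitrary choices, not taken from the source:

```python
# Sweep the logistic map's parameter r and record its long-term iterates,
# the raw data behind a bifurcation diagram. Illustrative sketch.
def logistic_long_term(r: float, x0: float = 0.5,
                       transient: int = 500, samples: int = 50) -> set[float]:
    x = x0
    for _ in range(transient):          # discard the transient behaviour
        x = r * x * (1 - x)
    seen = set()
    for _ in range(samples):            # record the long-term values
        x = r * x * (1 - x)
        seen.add(round(x, 6))
    return seen

for r in (2.8, 3.2, 3.5, 3.9):
    vals = logistic_long_term(r)
    print(f"r={r}: {len(vals)} distinct long-term value(s)")
# fixed point at r=2.8, period 2 at 3.2, period 4 at 3.5, chaotic band at 3.9
```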
Ratio – a ratio indicates how many times one number contains another Content of trigonometry Trigonometry Trigonometric functions Trigonometric identities Euler's formula Scholars Archimedes Aristarchus Aryabhata Bhaskara I Claudius Ptolemy Euclid Hipparchus Madhava of Sangamagrama Ptolemy Pythagoras Regiomontanus History Aristarchus's inequality Bhaskara I's sine approximation formula Greek astronomy Indian astronomy Jyā, koti-jyā and utkrama-jyā Madhava's sine table Ptolemy's table of chords Rule of marteloio Āryabhaṭa's sine table Fields Uses of trigonometry Acoustics Architecture Astronomy Biology Cartography Chemistry Civil engineering Computer graphics Cryptography Crystallography Economics Electrical engineering Electronics Game development Geodesy Mechanical engineering Medical imaging Meteorology Music theory Number theory Oceanography Optics Pharmacy Phonetics Physical science Probability theory Seismology Statistics Surveying Physics Abbe sine condition Greninger chart Phasor Snell's law Astronomy Equant Parallax Dialing scales Chemistry Greninger chart Geography, geodesy, and land surveying Hansen's problem Sn" https://en.wikipedia.org/wiki/Multilink%20striping,"Multilink striping is a type of data striping used in telecommunications to achieve higher throughput or increase the resilience of a network connection by data aggregation over multiple network links simultaneously. Multipath routing and multilink striping are often used synonymously. However, there are some differences. When applied to end-hosts, multilink striping requires multiple physical interfaces and access to multiple networks at once. On the other hand, multiple routing paths can be obtained with a single end-host interface, either within the network, or, in case of a wireless interface and multiple neighboring nodes, at the end-host itself. See also RFC 1990, The PPP Multilink Protocol (MP) Link aggregation Computer networking" https://en.wikipedia.org/wiki/List%20of%207400-series%20integrated%20circuits,"The following is a list of 7400-series digital logic integrated circuits. In the mid-1960s, the original 7400-series integrated circuits were introduced by Texas Instruments with the prefix ""SN"" to create the name SN74xx. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 7400 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers. Overview Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. A short-lived 64 prefix on Texas Instruments parts indicated an industrial temperature range; this prefix had been dropped from the TI literature by 1973. Most recent 7400-series parts are fabricated in CMOS or BiCMOS technology rather than TTL. Surface-mount parts with a single gate (often in a 5-pin or 6-pin package) are prefixed with 741G instead of 74. Some manufacturers released some 4000-series equivalent CMOS circuits with a 74 prefix, for example, the 74HC4066 was a replacement for the 4066 with slightly different electrical characteristics (different power-supply voltage ratings, higher frequency capabilities, lower ""on"" resistances in analog switches, etc.). See List of 4000-series integrated circuits. 
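A minimal sketch of the aggregation idea behind multilink striping described above, using simple round-robin striping (one possible policy among several; the "links" here are plain Python lists standing in for physical interfaces):

```python
# Round-robin striping of packets across multiple links. Toy sketch of the
# data-aggregation idea; real implementations must also handle reordering.
from itertools import cycle

def stripe(packets: list[bytes], links: list[list[bytes]]) -> None:
    """Distribute packets over the given links in round-robin order."""
    for packet, link in zip(packets, cycle(links)):
        link.append(packet)

links = [[], [], []]                      # e.g. three physical interfaces
stripe([f"pkt{i}".encode() for i in range(7)], links)
print([len(l) for l in links])            # -> [3, 2, 2]
```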
Conversely, the 4000-series has ""borrowed"" from the 7400 series such as the CD40193 and CD40161 being pin-for-pin functional replacements for 74C193 and 74C161. Older TTL parts made by manufacturers such as Signetics, Motorola, Mullard and Siemens may have different numeric prefix and numbering series entirely, such as in the European FJ family FJH101 is an 8-input NAND gate like a 7430. A few alphabetic characters to designate a specific logic subfamily may immediately follow the 74 or 54 in the part number, e.g., 74LS74 for low-power Schottky. Some CMOS parts such as 74HCT74 for high-speed CMOS wit" https://en.wikipedia.org/wiki/Software%20engineering%20professionalism,"Software engineering professionalism is a movement to make software engineering a profession, with aspects such as degree and certification programs, professional associations, professional ethics, and government licensing. The field is a licensed discipline in Texas in the United States (Texas Board of Professional Engineers, since 2013), Engineers Australia(Course Accreditation since 2001, not Licensing), and many provinces in Canada. History In 1993 the IEEE and ACM began a joint effort called JCESEP, which evolved into SWECC in 1998 to explore making software engineering into a profession. The ACM pulled out of SWECC in May 1999, objecting to its support for the Texas professionalization efforts, of having state licenses for software engineers. ACM determined that the state of knowledge and practice in software engineering was too immature to warrant licensing, and that licensing would give false assurances of competence even if the body of knowledge were mature. The IEEE continued to support making software engineering a branch of traditional engineering. In Canada the Canadian Information Processing Society established the Information Systems Professional certification process. Also, by the late 1990s (1999 in British Columbia) the discipline of software engineering as a professional engineering discipline was officially created. This has caused some disputes between the provincial engineering associations and companies who call their developers software engineers, even though these developers have not been licensed by any engineering association. In 1999, the Panel of Software Engineering was formed as part of the settlement between Engineering Canada and the Memorial University of Newfoundland over the school's use of the term ""software engineering"" in the name of a computer science program. Concerns were raised over inappropriate use of the name ""software engineering"" to describe non-engineering programs could lead to student and public confusion, a" https://en.wikipedia.org/wiki/List%20of%20topologies,"The following is a list of named topologies or topological spaces, many of which are counterexamples in topology and related branches of mathematics. This is not a list of properties that a topology or topological space might possess; for that, see List of general topology topics and Topological property. Discrete and indiscrete Discrete topology − All subsets are open. Indiscrete topology, chaotic topology, or Trivial topology − Only the empty set and its complement are open. Cardinality and ordinals Cocountable topology Given a topological space the on is the topology having as a subbasis the union of and the family of all subsets of whose complements in are countable. 
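To illustrate the part-numbering scheme described above (maker prefix, 74/54 temperature-range prefix, family letters such as LS or HCT, then the device number), here is a rough heuristic decoder; the regular expression is an assumption for illustration only, and real catalogues have many exceptions (single-gate 741G parts, for instance, will not match):

```python
# Rough decoder for 7400-series style part numbers such as "SN74LS74N".
# Heuristic sketch only; not a complete or authoritative parser.
import re

PART = re.compile(
    r"^(?P<maker>[A-Z]*?)(?P<range>74|54)(?P<family>[A-Z]*)(?P<device>\d+)(?P<suffix>[A-Z]*)$"
)

def decode(part: str) -> dict:
    m = PART.match(part.upper())
    if not m:
        raise ValueError(f"not a simple 74/54-series part number: {part}")
    return m.groupdict()

print(decode("SN74LS74N"))  # maker SN, commercial 74 range, LS = low-power Schottky
print(decode("74HCT04"))    # HCT = high-speed CMOS with TTL-compatible inputs
```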
Cofinite topology Double-pointed cofinite topology Ordinal number topology Pseudo-arc Ran space Tychonoff plank Finite spaces Discrete two-point space − The simplest example of a totally disconnected discrete space. Either–or topology Finite topological space Pseudocircle − A finite topological space on 4 elements that fails to satisfy any separation axiom besides T0. However, from the viewpoint of algebraic topology, it has the remarkable property that it is indistinguishable from the circle Sierpiński space, also called the connected two-point set − A 2-point set with the particular point topology Integers Arens–Fort space − A Hausdorff, regular, normal space that is not first-countable or compact. It has an element (i.e. ) for which there is no sequence in that converges to but there is a sequence in such that is a cluster point of Arithmetic progression topologies The Baire space − with the product topology, where denotes the natural numbers endowed with the discrete topology. It is the space of all sequences of natural numbers. Divisor topology Partition topology Deleted integer topology Odd–even topology Fractals and Cantor set Apollonian gasket Cantor set − A subset of the closed interval with remarkable properties. Cantor dust Cantor space" https://en.wikipedia.org/wiki/Daffodil%20Polytechnic%20Institute,"Daffodil Polytechnic Institute is a private polytechnic Institute located in Dhaka, Bangladesh.The campus is located at Dhanmondi. Daffodil Polytechnic Institute which has been functioning since 2006 to develop professionals in different fields of education and training under Bangladesh Technical Education Board (BTEB). It is the first and only polytechnic institute of the country which has been awarded internationally. Daffodil Polytechnic is one of the top ranking polytechnics in Bangladesh. https://dpi.ac Departments Currently there are eight departments: Civil Department Electrical Department Computer Science Department Textile Department Apparel Manufacturing Department Telecommunication Department Architecture and Interior Design Department Graphic Design History The polytechnic was established in 2006 with the approval of Bangladesh Technical Education Board and the Government of Bangladesh's Ministry of Education. Campuses The institute has multiple campuses within Dhaka. The main campus & the academic building 1 is located in Dhanmondi and the other campus is in Kalabagan with library and hostel facilities for both male and female students. Academics Departments Computer Engineering Technology Electrical Engineering Technology Civil Engineering Technology Architecture & Interior Design Technology Textile Engineering Technology Garments Design & Pattern Making Technology Telecommunication Engineering Technology Graphic Design Engineering Technology Principals Mohammad Nuruzzaman (31 July 2006 - 30 April 2013) K M Hasan Ripon (1 May 2013 - 31 May 2016) Wiz khalifa( 1 June 2016 - 30 April 2019) K M Hasan Ripon (1 May 2019 – Present) Online admission The Polytechnic facilitates online admission for applicants from distant areas. Clubs Kolorob Cultural Club Computer club Language club DPI Alumni Association Blood donating club Tourism club International Activities A gorup of Students from Daffodil Polytech Instit" https://en.wikipedia.org/wiki/List%20of%20large%20cardinal%20properties,"This page includes a list of cardinals with large cardinal properties. 
It is arranged roughly in order of the consistency strength of the axiom asserting the existence of cardinals with the given property. Existence of a cardinal number κ of a given type implies the existence of cardinals of most of the types listed above that type, and for most listed cardinal descriptions φ of lesser consistency strength, Vκ satisfies ""there is an unbounded class of cardinals satisfying φ"". The following table usually arranges cardinals in order of consistency strength, with size of the cardinal used as a tiebreaker. In a few cases (such as strongly compact cardinals) the exact consistency strength is not known and the table uses the current best guess. ""Small"" cardinals: 0, 1, 2, ..., ,..., , ... (see Aleph number) worldly cardinals weakly and strongly inaccessible, α-inaccessible, and hyper inaccessible cardinals weakly and strongly Mahlo, α-Mahlo, and hyper Mahlo cardinals. reflecting cardinals weakly compact (= Π-indescribable), Π-indescribable, totally indescribable cardinals λ-unfoldable, unfoldable cardinals, ν-indescribable cardinals and λ-shrewd, shrewd cardinals (not clear how these relate to each other). ethereal cardinals, subtle cardinals almost ineffable, ineffable, n-ineffable, totally ineffable cardinals remarkable cardinals α-Erdős cardinals (for countable α), 0# (not a cardinal), γ-iterable, γ-Erdős cardinals (for uncountable γ) almost Ramsey, Jónsson, Rowbottom, Ramsey, ineffably Ramsey, completely Ramsey, strongly Ramsey, super Ramsey cardinals measurable cardinals, 0† λ-strong, strong cardinals, tall cardinals Woodin, weakly hyper-Woodin, Shelah, hyper-Woodin cardinals superstrong cardinals (=1-superstrong; for n-superstrong for n≥2 see further down.) subcompact, strongly compact (Woodin< strongly compact≤supercompact), supercompact, hypercompact cardinals η-extendible, extendible cardinals Vopěnka cardinals, Shelah for supercompactness," https://en.wikipedia.org/wiki/Routing%20protocol,"A routing protocol specifies how routers communicate with each other to distribute information that enables them to select paths between nodes on a computer network. Routers perform the traffic directing functions on the Internet; data packets are forwarded through the networks of the internet from router to router until they reach their destination computer. Routing algorithms determine the specific choice of route. Each router has a prior knowledge only of networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network. The ability of routing protocols to dynamically adjust to changing conditions such as disabled connections and components and route data around obstructions is what gives the Internet its fault tolerance and high availability. The specific characteristics of routing protocols include the manner in which they avoid routing loops, the manner in which they select preferred routes, using information about hop costs, the time they require to reach routing convergence, their scalability, and other factors such as relay multiplexing and cloud access framework parameters. Certain additional characteristics such as multilayer interfacing may also be employed as a means of distributing uncompromised networking gateways to authorized ports. This has the added benefit of preventing issues with routing protocol loops. Many routing protocols are defined in technical standards documents called RFCs. 
Types Although there are many types of routing protocols, three major classes are in widespread use on IP networks: Interior gateway protocols type 1, link-state routing protocols, such as OSPF and IS-IS Interior gateway protocols type 2, distance-vector routing protocols, such as Routing Information Protocol, RIPv2, IGRP. Exterior gateway protocols are routing protocols used on the Internet for exchanging routing info" https://en.wikipedia.org/wiki/Optogenetic%20methods%20to%20record%20cellular%20activity,"Optogenetics began with methods to alter neuronal activity with light, using e.g. channelrhodopsins. In a broader sense, optogenetic approaches also include the use of genetically encoded biosensors to monitor the activity of neurons or other cell types by measuring fluorescence or bioluminescence. Genetically encoded calcium indicators (GECIs) are used frequently to monitor neuronal activity, but other cellular parameters such as membrane voltage or second messenger activity can also be recorded optically. The use of optogenetic sensors is not restricted to neuroscience, but plays increasingly important roles in immunology, cardiology and cancer research. History The first experiments to measure intracellular calcium levels via protein expression were based on aequorin, a bioluminescent protein from the jellyfish Aequorea. To produce light, however, this enzyme needs the 'fuel' compound coelenteracine, which has to be added to the preparation. This is not practical in intact animals, and in addition, the temporal resolution of bioluminescence imaging is relatively poor (seconds-minutes). The first genetically encoded fluorescent calcium indicator (GECI) to be used to image activity in an animal was cameleon, designed by Atsushi Miyawaki, Roger Tsien and coworkers in 1997. Cameleon was first used successfully in an animal by Rex Kerr, William Schafer and coworkers to record from neurons and muscle cells of the nematode C. elegans. Cameleon was subsequently used to record neural activity in flies and zebrafish. In mammals, the first GECI to be used in vivo was GCaMP, first developed by Junichi Nakai and coworkers in 2001. GCaMP has undergone numerous improvements, notably by a team of scientists at the Janelia Farm Research Campus (GENIE project, HHMI), and GCaMP6 in particular has become widely used in neuroscience. Very recently, G protein-coupled receptors have been harnessed to generate a series of highly specific indicators for various neurotransmitters. Desi" https://en.wikipedia.org/wiki/List%20of%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering,"Latin and Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used as symbols for constants, special functions, and also conventionally for variables representing certain quantities. Some common conventions: Intensive quantities in physics are usually denoted with minusculeswhile extensive are denoted with capital letters. Most symbols are written in italics. Vectors can be denoted in boldface. Sets of numbers are typically bold or blackboard bold. Latin Greek Other scripts Hebrew Cyrillic Japanese Modified Latin Modified Greek" https://en.wikipedia.org/wiki/RAM%20image,"A RAM image is a sequence of machine code instructions and associated data kept permanently in the non-volatile ROM memory of an embedded system, which is copied into volatile RAM by a bootstrap loader. 
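As a sketch of the neighbour-to-neighbour information sharing described in the routing-protocol passage above, the following toy distance-vector exchange (RIP-like, heavily simplified, not any specific protocol) lets each router learn remote networks one advertised hop at a time:

```python
# Minimal distance-vector exchange: each router starts knowing only its
# directly attached networks, then repeatedly merges neighbours' tables,
# adding one hop per advertisement. Simplified illustrative sketch.
def distance_vector(links: dict[str, list[str]],
                    attached: dict[str, set[str]]) -> dict[str, dict[str, int]]:
    tables = {r: {net: 0 for net in nets} for r, nets in attached.items()}
    changed = True
    while changed:                                  # repeat until convergence
        changed = False
        for router, neighbours in links.items():
            for nb in neighbours:
                for net, cost in tables[nb].items():
                    if cost + 1 < tables[router].get(net, float("inf")):
                        tables[router][net] = cost + 1
                        changed = True
    return tables

links = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
attached = {"A": {"10.0.1.0/24"}, "B": {"10.0.2.0/24"}, "C": {"10.0.3.0/24"}}
print(distance_vector(links, attached)["A"])  # A reaches B's net in 1 hop, C's in 2
```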
Typically the RAM image is loaded into RAM when the system is switched on, and it contains a second-level bootstrap loader and basic hardware drivers, enabling the unit to function as desired, or else more sophisticated software to be loaded into the system. Embedded systems" https://en.wikipedia.org/wiki/Circannual%20cycle,"A circannual cycle is a biological process that occurs in living creatures over the period of approximately one year. This cycle was first discovered by Ebo Gwinner and Canadian biologist Ted Pengelley. It is classified as an infradian rhythm, a biological process with a period longer than that of a circadian rhythm, i.e. less than one cycle per 24 hours. These processes continue even in artificial environments in which seasonal cues have been removed by scientists. The term circannual derives from Latin: circa, meaning approximately, and annual, relating to one year. Chronobiology is the field of biology pertaining to periodic rhythms that occur in living organisms in response to external stimuli such as photoperiod. These cycles arise through genetic evolution, which allows animals to develop regulatory cycles that improve their fitness. Such traits evolved through the increased reproductive success of the animals most capable of predicting regular changes in the environment, such as seasonal changes, and of capitalizing on the times when success was greatest. The idea of evolved biological clocks exists not only for animals but also in plant species, which exhibit cyclic behaviors without environmental cues. Plentiful research has been done on biological clocks and the behaviors they are responsible for in animals; circannual rhythms are just one example of a biological clock. These rhythms are driven by hormone cycles, and seasonal rhythms can endure for long periods in animals even without the photoperiod signaling that accompanies seasonal changes. They are a driver of annual behaviors such as hibernation, mating and the gain or loss of weight for seasonal changes. Circannual cycles are defined by three main properties: they must persist without apparent time cues, be able to be phase shifted, and not be changed by temperature. Circannual cycles have important impacts on when animal behaviors are performed and the success of those behaviors. Circannu" https://en.wikipedia.org/wiki/List%20of%20plasma%20physicists,"This is a list of physicists who have worked in or made notable contributions to the field of plasma physics. See also Whistler (radio) waves Langmuir waves Plasma physicists Plasma physicists" https://en.wikipedia.org/wiki/List%20of%20derivatives%20and%20integrals%20in%20alternative%20calculi,"There are many alternatives to the classical calculus of Newton and Leibniz; for example, each of the infinitely many non-Newtonian calculi. Occasionally an alternative calculus is more suited than the classical calculus for expressing a given scientific or mathematical idea. The table below is intended to assist people working with the alternative calculus called the ""geometric calculus"" (or its discrete analog). Interested readers are encouraged to improve the table by inserting citations for verification, and by inserting more functions and more calculi. Table In the following table is the digamma function, is the K-function, is subfactorial, are the generalized to real numbers Bernoulli polynomials. 
See also Indefinite product Product integral Fractal derivative" https://en.wikipedia.org/wiki/Memory%20dependence%20prediction,"Memory dependence prediction is a technique, employed by high-performance out-of-order execution microprocessors that execute memory access operations (loads and stores) out of program order, to predict true dependencies between loads and stores at instruction execution time. With the predicted dependence information, the processor can then decide to speculatively execute certain loads and stores out of order, while preventing other loads and stores from executing out-of-order (keeping them in-order). Later in the pipeline, memory disambiguation techniques are used to determine if the loads and stores were correctly executed and, if not, to recover. By using the memory dependence predictor to keep most dependent loads and stores in order, the processor gains the benefits of aggressive out-of-order load/store execution but avoids many of the memory dependence violations that occur when loads and stores were incorrectly executed. This increases performance because it reduces the number of pipeline flushes that are required to recover from these memory dependence violations. See the memory disambiguation article for more information on memory dependencies, memory dependence violations, and recovery. In general, memory dependence prediction predicts whether two memory operations are dependent, that is, if they interact by accessing the same memory location. Besides using store to load (RAW or true) memory dependence prediction for the out-of-order scheduling of loads and stores, other applications of memory dependence prediction have been proposed. See for example. Memory dependence prediction is an optimization on top of memory dependency speculation. Sequential execution semantics imply that stores and loads appear to execute in the order specified by the program. However, as with out-of-order execution of other instructions, it may be possible to execute two memory operations in a different order from that implied by the program. This is possible when the two oper" https://en.wikipedia.org/wiki/Direct%20proof,"In mathematics and logic, a direct proof is a way of showing the truth or falsehood of a given statement by a straightforward combination of established facts, usually axioms, existing lemmas and theorems, without making any further assumptions. In order to directly prove a conditional statement of the form ""If p, then q"", it suffices to consider the situations in which the statement p is true. Logical deduction is employed to reason from assumptions to conclusion. The type of logic employed is almost invariably first-order logic, employing the quantifiers for all and there exists. Common proof rules used are modus ponens and universal instantiation. In contrast, an indirect proof may begin with certain hypothetical scenarios and then proceed to eliminate the uncertainties in each of these scenarios until an inescapable conclusion is forced. For example, instead of showing directly p ⇒ q, one proves its contrapositive ~q ⇒ ~p (one assumes ~q and shows that it leads to ~p). Since p ⇒ q and ~q ⇒ ~p are equivalent by the principle of transposition (see law of excluded middle), p ⇒ q is indirectly proved. Proof methods that are not direct include proof by contradiction, including proof by infinite descent. Direct proof methods include proof by exhaustion and proof by induction. History and etymology A direct proof is the simplest form of proof there is. 
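A toy sketch of the memory dependence prediction idea described above: once a load is caught violating a dependence (it executed before an older store to the same address), its PC is remembered and the load is kept in order on later encounters. This illustrates the general technique, not any particular processor's predictor:

```python
# Toy memory dependence predictor: remember loads that caused violations and
# stop speculating on them. Heavily simplified illustrative sketch.
class DependencePredictor:
    def __init__(self):
        self.conflicting_loads: set[int] = set()   # PCs of loads that violated before

    def may_issue_early(self, load_pc: int) -> bool:
        """Speculate past older stores unless this load has a bad history."""
        return load_pc not in self.conflicting_loads

    def report_violation(self, load_pc: int) -> None:
        """Called by the disambiguation logic when speculation was wrong."""
        self.conflicting_loads.add(load_pc)

pred = DependencePredictor()
print(pred.may_issue_early(0x400A10))   # True: no history, speculate
pred.report_violation(0x400A10)         # pipeline recovers, offender recorded
print(pred.may_issue_early(0x400A10))   # False: keep this load in order from now on
```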
The word ‘proof’ comes from the Latin word probare, which means “to test”. The earliest use of proofs was prominent in legal proceedings. A person with authority, such as a nobleman, was said to have probity, which means that the evidence was by his relative authority, which outweighed empirical testimony. In days gone by, mathematics and proof was often intertwined with practical questions – with populations like the Egyptians and the Greeks showing an interest in surveying land. This led to a natural curiosity with regards to geometry and trigonometry – particularly triangles and rectangles. These were the shapes " https://en.wikipedia.org/wiki/Passthrough%20%28electronics%29,"In signal processing, a passthrough is a logic gate that enables a signal to ""pass through"" unaltered, sometimes with little alteration. Sometimes the concept of a ""passthrough"" can also involve daisy chain logic. Examples of passthroughs Analog passthrough (for digital TV) Sega 32X (passthrough for Sega Genesis video games) VCRs, DVD recorders, etc. act as a ""pass-through"" for composite video and S-video, though sometimes they act as an RF modulator for use on Channel 3. Tape monitor features allow an AV receiver (sometime the recording device itself) to act as a ""pass-through"" for audio. An AV receiver usually allows pass-through of the video signal while amplifying the audio signal to drive speakers. See also Dongle, a device that converts signal, instead of just being a ""passthrough"" for unaltered signal Signal processing Electrical engineering de:Durchschleifen" https://en.wikipedia.org/wiki/Three-dimensional%20integrated%20circuit,"A three-dimensional integrated circuit (3D IC) is a MOS (metal-oxide semiconductor) integrated circuit (IC) manufactured by stacking as many as 16 or more ICs and interconnecting them vertically using, for instance, through-silicon vias (TSVs) or Cu-Cu connections, so that they behave as a single device to achieve performance improvements at reduced power and smaller footprint than conventional two dimensional processes. The 3D IC is one of several 3D integration schemes that exploit the z-direction to achieve electrical performance benefits in microelectronics and nanoelectronics. 3D integrated circuits can be classified by their level of interconnect hierarchy at the global (package), intermediate (bond pad) and local (transistor) level. In general, 3D integration is a broad term that includes such technologies as 3D wafer-level packaging (3DWLP); 2.5D and 3D interposer-based integration; 3D stacked ICs (3D-SICs); 3D heterogeneous integration; and 3D systems integration.; as well as true monolithic 3D ICs International organizations such as the Jisso Technology Roadmap Committee (JIC) and the International Technology Roadmap for Semiconductors (ITRS) have worked to classify the various 3D integration technologies to further the establishment of standards and roadmaps of 3D integration. As of the 2010s, 3D ICs are widely used for NAND flash memory and in mobile devices. Types 3D ICs vs. 3D Packaging 3D packaging refers to 3D integration schemes that rely on traditional interconnection methods such as wire bonding and flip chip to achieve vertical stacking. 3D packaging can be divided into 3D system in package (3D SiP) and 3D wafer level package (3D WLP). 
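To accompany the description of direct proof above, here is a standard worked example of the method (assume p, deduce q); it is a textbook illustration, not taken from the article itself:

```latex
% A worked direct proof of "if m and n are even, then m + n is even".
% Uses the amsthm proof environment.
\begin{proof}
Assume $m$ and $n$ are even, so $m = 2a$ and $n = 2b$ for some integers $a, b$.
Then
\[
  m + n = 2a + 2b = 2(a + b),
\]
and since $a + b$ is an integer, $m + n$ is even. The conditional is established
directly, with no appeal to contradiction or contraposition.
\end{proof}
```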
3D SiPs that have been in mainstream manufacturing for some time and have a well-established infrastructure include stacked memory dies interconnected with wire bonds and package on package (PoP) configurations interconnected with wire bonds or flip chip technology. PoP is used for vertically integrating dispa" https://en.wikipedia.org/wiki/Tensor%20network,"Tensor networks or tensor network states are a class of variational wave functions used in the study of many-body quantum systems. Tensor networks extend one-dimensional matrix product states to higher dimensions while preserving some of their useful mathematical properties. The wave function is encoded as a tensor contraction of a network of individual tensors. The structure of the individual tensors can impose global symmetries on the wave function (such as antisymmetry under exchange of fermions) or restrict the wave function to specific quantum numbers, like total charge, angular momentum, or spin. It is also possible to derive strict bounds on quantities like entanglement and correlation length using the mathematical structure of the tensor network. This has made tensor networks useful in theoretical studies of quantum information in many-body systems. They have also proved useful in variational studies of ground states, excited states, and dynamics of strongly correlated many-body systems. Diagrammatic notation In general, a tensor network diagram (Penrose diagram) can be viewed as a graph where nodes (or vertices) represent individual tensors, while edges represent summation over an index. Free indices are depicted as edges (or legs) attached to a single vertex only. Sometimes, there is also additional meaning to a node's shape. For instance, one can use trapezoids for unitary matrices or tensors with similar behaviour. This way, flipped trapezoids would be interpreted as complex conjugates to them. Connection to machine learning Tensor networks have been adapted for supervised learning, taking advantage of similar mathematical structure in variational studies in quantum mechanics and large-scale machine learning. This crossover has spurred collaboration between researchers in artificial intelligence and quantum information science. In June 2019, Google, the Perimeter Institute for Theoretical Physics, and X (company), released TensorNetwork, an " https://en.wikipedia.org/wiki/SACEM%20%28railway%20system%29,"The Système d'aide à la conduite, à l'exploitation et à la maintenance (SACEM) is an embedded, automatic speed train protection system for rapid transit railways. The name means ""Driver Assistance, Operation, and Maintenance System"". It was developed in France by GEC-Alsthom, Matra (now part of Siemens Mobility) and CSEE (now part of Hitachi Rail STS) in the 1980s. It was first deployed on the RER A suburban railway in Paris in 1989. Afterwards it was installed: on the Santiago Metro in Santiago, Chile; on some of the MTR lines in Hong Kong (Kwun Tong line, Tsuen Wan line, Island line, Tseung Kwan O line, Airport Express and Tung Chung line), all enhanced with ATO, on Lines A, B and 8 of the Mexico City Metro lines in Mexico City; and on Shanghai Metro Line 3. In 2017 the SACEM system in Paris was enhanced with Automatic Train Operation (ATO) and was put in full operation at the end of 2018. The SACEM system in Paris is to be enhanced to a fully fledged CBTC system named NExTEO. First to be deployed on the newly-extended line RER E in 2024, it is proposed to replace signalling and control on all RER lines. 
Operation The SACEM system enables a train to receive signals from devices under the tracks. A receiver in the train cabin interprets the signal, and sends data to the console so the driver can see it. A light on the console indicates the speed control setting: an orange light means slow speed, or ; a red light means full stop. If the driver alters the speed, a warning buzzer may sound. If the system determines that the speed might be unsafe, and the driver does not change it within a few seconds, SACEM engages the emergency brake. SACEM also allows for a reduction in potential train bunching and easier recovery from delays, therefore safely increasing operating frequencies as much as possible especially during rush hour." https://en.wikipedia.org/wiki/Closed%20system%20%28control%20theory%29,"The terms closed system and open system have long been defined in the widely (and long before any sort of amplifier was invented) established subject of thermodynamics, in terms that have nothing to do with the concepts of feedback and feedforward. The terms 'feedforward' and 'feedback' arose first in the 1920s in the theory of amplifier design, more recently than the thermodynamic terms. Negative feedback was eventually patented by H.S Black in 1934. In thermodynamics, an open system is one that can take in and give out ponderable matter. In thermodynamics, a closed system is one that cannot take in or give out ponderable matter, but may be able to take in or give out radiation and heat and work or any form of energy. In thermodynamics, a closed system can be further restricted, by being 'isolated': an isolated system cannot take in nor give out either ponderable matter or any form of energy. It does not make sense to try to use these well established terms to try to distinguish the presence or absence of feedback in a control system. The theory of control systems leaves room for systems with both feedforward pathways and feedback elements or pathways. The terms 'feedforward' and 'feedback' refer to elements or paths within a system, not to a system as a whole. THE input to the system comes from outside it, as energy from the signal source by way of some possibly leaky or noisy path. Part of the output of a system can be compounded, with the intermediacy of a feedback path, in some way such as addition or subtraction, with a signal derived from the system input, to form a 'return balance signal' that is input to a PART of the system to form a feedback loop within the system. (It is not correct to say that part of the output of a system can be used as THE input to the system.) There can be feedforward paths within the system in parallel with one or more of the feedback loops of the system so that the system output is a compound of the outputs of the feedback loops" https://en.wikipedia.org/wiki/Phoenix%20Union%20Bioscience%20High%20School,"Phoenix Union Bioscience High School is part of the Phoenix Union High School District, with campus in downtown Phoenix, Arizona, US. The school specialises in science education. A new building was constructed and the existing one renovated, opening in the fall of 2007. Enrollment Bioscience hosts approximately 180 freshmen through seniors. The first class of 43 students graduated from Bioscience in May 2010. 97 percent of its 10th graders passed the AIMS Math exam (in 2009), the highest public (non-charter) school percentage in the Valley, and No. 2 in the state. Their science scores were No. 3 in the state among non-charter schools. 
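A rough sketch of the speed-supervision behaviour described in the SACEM passage above: compare the train's speed with the limit decoded from the track signal, warn the driver, and intervene if there is no reaction within a few seconds. The names, return values and grace period are invented for illustration, not SACEM's actual parameters:

```python
# Toy speed-supervision step in the spirit of the description above.
# All thresholds and labels are illustrative assumptions.
def supervise(speed_kmh: float, limit_kmh: float,
              overspeed_seconds: float, grace_seconds: float = 5.0) -> str:
    if speed_kmh <= limit_kmh:
        return "OK"
    if overspeed_seconds < grace_seconds:
        return "WARNING_BUZZER"          # give the driver time to brake
    return "EMERGENCY_BRAKE"             # the system intervenes

print(supervise(78, 60, overspeed_seconds=2))   # WARNING_BUZZER
print(supervise(78, 60, overspeed_seconds=7))   # EMERGENCY_BRAKE
```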
In its first year of eligibility, Bioscience earned the maximum ""Excelling"" Achievement Profile from the State. Campus The US$10 million campus which opened in October 2007 is located in Phoenix's downtown Biotechnology Center and open to students throughout the District. The Bioscience High School campus, which was designed by The Orcutt-Winslow Partnership won the American School Board Journal's Learning By Design 2009 Grand Prize Award. The school received this award for its classrooms, collaborative learning spaces, and smooth circulation. Phoenix Union High School District received a $2.4 million small schools grant from the City of Phoenix to renovate Bioscience's existing historic McKinley building for a Bio-medical program. It includes administrative office, four classrooms, a library/community room and a student demonstration area. In 2014, Bioscience ranked number 27 on the Best Education Degrees Web site's ""Most Amazing High School Campuses In The World"" list, ranked by their modern designs. The school has a solar charging station, and is partially powered by solar panels." https://en.wikipedia.org/wiki/Physical%20system,"A physical system is a collection of physical objects under study. The collection differs from a set: all the objects must coexist and have some physical relationship. In other words, it is a portion of the physical universe chosen for analysis. Everything outside the system is known as the environment, which is ignored except for its effects on the system. The split between system and environment is the analyst's choice, generally made to simplify the analysis. For example, the water in a lake, the water in half of a lake, or an individual molecule of water in the lake can each be considered a physical system. An isolated system is one that has negligible interaction with its environment. Often a system in this sense is chosen to correspond to the more usual meaning of system, such as a particular machine. In the study of quantum coherence, the ""system"" may refer to the microscopic properties of an object (e.g. the mean of a pendulum bob), while the relevant ""environment"" may be the internal degrees of freedom, described classically by the pendulum's thermal vibrations. Because no quantum system is completely isolated from its surroundings, it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems. In control theory, a physical system being controlled (a ""controlled system"") is called a ""plant"". See also Conceptual systems Phase space Physical phenomenon Physical ontology Signal-flow graph Systems engineering Systems science Thermodynamic system Open quantum system" https://en.wikipedia.org/wiki/Semiconductor%20memory,"Semiconductor memory is a digital electronic semiconductor device used for digital data storage, such as computer memory. It typically refers to devices in which data is stored within metal–oxide–semiconductor (MOS) memory cells on a silicon integrated circuit memory chip. There are numerous different types using different semiconductor technologies. The two main types of random-access memory (RAM) are static RAM (SRAM), which uses several transistors per memory cell, and dynamic RAM (DRAM), which uses a transistor and a MOS capacitor per cell. Non-volatile memory (such as EPROM, EEPROM and flash memory) uses floating-gate memory cells, which consist of a single floating-gate transistor per cell. 
Most types of semiconductor memory have the property of random access, which means that it takes the same amount of time to access any memory location, so data can be efficiently accessed in any random order. This contrasts with data storage media such as hard disks and CDs which read and write data consecutively and therefore the data can only be accessed in the same sequence it was written. Semiconductor memory also has much faster access times than other types of data storage; a byte of data can be written to or read from semiconductor memory within a few nanoseconds, while access time for rotating storage such as hard disks is in the range of milliseconds. For these reasons it is used for primary storage, to hold the program and data the computer is currently working on, among other uses. , semiconductor memory chips sell annually, accounting for % of the semiconductor industry. Shift registers, processor registers, data buffers and other small digital registers that have no memory address decoding mechanism are typically not referred to as memory although they also store digital data. Description In a semiconductor memory chip, each bit of binary data is stored in a tiny circuit called a memory cell consisting of one to several transistors. The memory cells are" https://en.wikipedia.org/wiki/Plug%20compatible,"Plug compatible refers to ""hardware that is designed to perform exactly like another vendor's product."" The term PCM was originally applied to manufacturers who made replacements for IBM peripherals. Later this term was used to refer to IBM-compatible computers. PCM and peripherals Before the rise of the PCM peripheral industry, computing systems were either configured with peripherals designed and built by the CPU vendor, or designed to use vendor-selected rebadged devices. The first example of plug-compatible IBM subsystems were tape drives and controls offered by Telex beginning 1965. Memorex in 1968 was first to enter the IBM plug-compatible disk followed shortly thereafter by a number of suppliers such as CDC, Itel, and Storage Technology Corporation. This was boosted by the world's largest user of computing equipment in both directions. Ultimately plug-compatible products were offered for most peripherals and system main memory. PCM and computer systems A plug-compatible machine is one that has been designed to be backward compatible with a prior machine. In particular, a new computer system that is plug-compatible has not only the same connectors and protocol interfaces to peripherals, but also binary-code compatibility—it runs the same software as the old system. A plug compatible manufacturer or PCM is a company that makes such products. One recurring theme in plug-compatible systems is the ability to be bug compatible as well. That is, if the forerunner system had software or interface problems, then the successor must have (or simulate) the same problems. Otherwise, the new system may generate unpredictable results, defeating the full compatibility objective. Thus, it is important for customers to understand the difference between a ""bug"" and a ""feature"", where the latter is defined as an intentional modification to the previous system (e.g. higher speed, lighter weight, smaller package, better operator controls, etc.). 
PCM and IBM mainframes The or" https://en.wikipedia.org/wiki/Network%20Centric%20Product%20Support,"Network Centric Product Support (NCPS) is an early application of an Internet of Things (IoT) computer architecture developed to leverage new information technologies and global networks to assist in managing maintenance, support and supply chain of complex products made up of one or more complex systems, such as in a mobile aircraft fleet or fixed location assets such as in building systems. This is accomplished by establishing digital threads connecting the physical deployed subsystem with its design Digital Twins virtual model by embedding intelligence through networked micro-web servers that also function as a computer workstation within each subsystem component (i.e. Engine control unit on an aircraft) or other controller and enabling 2-way communications using existing Internet technologies and communications networks - thus allowing for the extension of a product lifecycle management (PLM) system into a mobile, deployed product at the subsystem level in real time. NCPS can be considered to be the support flip side of Network-centric warfare, as this approach goes beyond traditional logistics and aftermarket support functions by taking a complex adaptive system management approach and integrating field maintenance and logistics in a unified factory and field environment. Its evolution began out of insights gained by CDR Dave Loda (USNR) from Network Centric Warfare-based fleet battle experimentation at the US Naval Warfare Development Command (NWDC) in the late 1990s, who later lead commercial research efforts of NCPS in aviation at United Technologies Corporation. Interaction with the MIT Auto-ID Labs, EPCglobal, the Air Transport Association of America ATA Spec 100/iSpec 2200 and other consortium pioneering the emerging machine to machine Internet of Things (IoT) architecture contributed to the evolution of NCPS. Purpose Simply put, this architecture extends the existing World Wide Web infrastructure of networked web servers down into the product at its sub" https://en.wikipedia.org/wiki/Abstraction%20%28mathematics%29,"Abstraction in mathematics is the process of extracting the underlying structures, patterns or properties of a mathematical concept, removing any dependence on real world objects with which it might originally have been connected, and generalizing it so that it has wider applications or matching among other abstract descriptions of equivalent phenomena. Two of the most highly abstract areas of modern mathematics are category theory and model theory. Description Many areas of mathematics began with the study of real world problems, before the underlying rules and concepts were identified and defined as abstract structures. For example, geometry has its origins in the calculation of distances and areas in the real world, and algebra started with methods of solving problems in arithmetic. Abstraction is an ongoing process in mathematics and the historical development of many mathematical topics exhibits a progression from the concrete to the abstract. For example, the first steps in the abstraction of geometry were historically made by the ancient Greeks, with Euclid's Elements being the earliest extant documentation of the axioms of plane geometry—though Proclus tells of an earlier axiomatisation by Hippocrates of Chios. In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry. 
Further steps in abstraction were taken by Lobachevsky, Bolyai, Riemann and Gauss, who generalised the concepts of geometry to develop non-Euclidean geometries. Later in the 19th century, mathematicians generalised geometry even further, developing such areas as geometry in n dimensions, projective geometry, affine geometry and finite geometry. Finally Felix Klein's ""Erlangen program"" identified the underlying theme of all of these geometries, defining each of them as the study of properties invariant under a given group of symmetries. This level of abstraction revealed connections between geometry and abstract algebra. In mathemati" https://en.wikipedia.org/wiki/Domain-specific%20architecture,"A domain-specific architecture (DSA) is a programmable computer architecture specifically tailored to operate very efficiently within the confines of a given application domain. The term is often used in contrast to general-purpose architectures, such as CPUs, that are designed to operate on any computer program. History In conjunction with the semiconductor boom that started in the 1960s, computer architects were tasked with finding new ways to exploit the increasingly large number of transistors available. Moore's Law and Dennard Scaling enabled architects to focus on improving the performance of general-purpose microprocessors on general-purpose programs. These efforts yielded several technological innovations, such as multi-level caches, out-of-order execution, deep instruction pipelines, multithreading, and multiprocessing. The impact of these innovations was measured on generalist benchmarks such as SPEC, and architects were not concerned with the internal structure or specific characteristics of these programs. The end of Dennard Scaling pushed computer architects to switch from a single, very fast processor to several processor cores. Performance improvement could no longer be achieved by simply increasing the operating frequency of a single core. The end of Moore's Law shifted the focus away from general-purpose architectures towards more specialized hardware. Although general-purpose CPU will likely have a place in any computer system, heterogeneous systems composed of general-purpose and domain-specific components are the most recent trend for achieving high performance. While hardware accelerators and ASIC have been used in very specialized application domains since the inception of the semiconductor industry, they generally implement a specific function with very limited flexibility. In contrast, the shift towards domain-specific architectures wants to achieve a better balance of flexibility and specialization. A notable early example of a dom" https://en.wikipedia.org/wiki/Keith%20R.%20Porter%20Lecture,"This lecture, named in memory of Keith R. Porter, is presented to an eminent cell biologist each year at the ASCB Annual Meeting. The ASCB Program Committee and the ASCB President recommend the Porter Lecturer to the Porter Endowment each year. Lecturers Source: ASCB See also List of biology awards" https://en.wikipedia.org/wiki/List%20of%20commutative%20algebra%20topics,"Commutative algebra is the branch of abstract algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings, rings of algebraic integers, including the ordinary integers , and p-adic integers. 
Research fields Combinatorial commutative algebra Invariant theory Active research areas Serre's multiplicity conjectures Homological conjectures Basic notions Commutative ring Module (mathematics) Ring ideal, maximal ideal, prime ideal Ring homomorphism Ring monomorphism Ring epimorphism Ring isomorphism Zero divisor Chinese remainder theorem Classes of rings Field (mathematics) Algebraic number field Polynomial ring Integral domain Boolean algebra (structure) Principal ideal domain Euclidean domain Unique factorization domain Dedekind domain Nilpotent elements and reduced rings Dual numbers Tensor product of fields Tensor product of R-algebras Constructions with commutative rings Quotient ring Field of fractions Product of rings Annihilator (ring theory) Integral closure Localization and completion Completion (ring theory) Formal power series Localization of a ring Local ring Regular local ring Localization of a module Valuation (mathematics) Discrete valuation Discrete valuation ring I-adic topology Weierstrass preparation theorem Finiteness properties Noetherian ring Hilbert's basis theorem Artinian ring Ascending chain condition (ACC) and descending chain condition (DCC) Ideal theory Fractional ideal Ideal class group Radical of an ideal Hilbert's Nullstellensatz Homological properties Flat module Flat map Flat map (ring theory) Projective module Injective module Cohen-Macaulay ring Gorenstein ring Complete intersection ring Koszul complex Hilbert's syzygy theorem Quillen–Suslin theorem Dimension theory Height (ring theory) " https://en.wikipedia.org/wiki/Ethnobiology,"Ethnobiology is the scientific study of the way living things are treated or used by different human cultures. It studies the dynamic relationships between people, biota, and environments, from the distant past to the immediate present. ""People-biota-environment"" interactions around the world are documented and studied through time, across cultures, and across disciplines in a search for valid, reliable answers to two 'defining' questions: ""How and in what ways do human societies use nature, and how and in what ways do human societies view nature?"" History Beginnings (15th century–19th century) Biologists have been interested in local biological knowledge since the time Europeans started colonising the world, from the 15th century onwards. Paul Sillitoe wrote that: Local biological knowledge, collected and sampled over these early centuries significantly informed the early development of modern biology: during the 17th century Georg Eberhard Rumphius benefited from local biological knowledge in producing his catalogue, ""Herbarium Amboinense"", covering more than 1,200 species of the plants in Indonesia; during the 18th century, Carl Linnaeus relied upon Rumphius's work, and also corresponded with other people all around the world when developing the biological classification scheme that now underlies the arrangement of much of the accumulated knowledge of the biological sciences. during the 19th century, Charles Darwin, the 'father' of evolutionary theory, on his Voyage of the Beagle took interest in the local biological knowledge of peoples he encountered. Phase I (1900s–1940s) Ethnobiology itself, as a distinctive practice, only emerged during the 20th century as part of the records then being made about other peoples, and other cultures. As a practice, it was nearly always ancillary to other pursuits when documenting others' languages, folklore, and natural resource use. 
Roy Ellen commented that: This 'first phase' in the development of ethnobiology as a" https://en.wikipedia.org/wiki/Proteostasis,"Proteostasis is the dynamic regulation of a balanced, functional proteome. The proteostasis network includes competing and integrated biological pathways within cells that control the biogenesis, folding, trafficking, and degradation of proteins present within and outside the cell. Loss of proteostasis is central to understanding the cause of diseases associated with excessive protein misfolding and degradation leading to loss-of-function phenotypes, as well as aggregation-associated degenerative disorders. Therapeutic restoration of proteostasis may treat or resolve these pathologies. Cellular proteostasis is key to ensuring successful development, healthy aging, resistance to environmental stresses, and to minimize homeostatic perturbations from pathogens such as viruses. Cellular mechanisms for maintaining proteostasis include regulated protein translation, chaperone assisted protein folding, and protein degradation pathways. Adjusting each of these mechanisms based on the need for specific proteins is essential to maintain all cellular functions relying on a correctly folded proteome. Mechanisms of proteostasis The roles of the ribosome in proteostasis One of the first points of regulation for proteostasis is during translation. This regulation is accomplished via the structure of the ribosome, a complex central to translation. Its characteristics shape the way the protein folds, and influence the protein's future interactions. The synthesis of a new peptide chain using the ribosome is very slow; the ribosome can even be stalled when it encounters a rare codon, a codon found at low concentrations in the cell. The slow synthesis rate and any such pauses provide an individual protein domain with the necessary time to become folded before the production of subsequent domains. This facilitates the correct folding of multi-domain proteins. The newly synthesized peptide chain exits the ribosome into the cellular environment through the narrow ribosome exit chan" https://en.wikipedia.org/wiki/Engineering%20sample,"Engineering samples are the beta versions of integrated circuits that are meant to be used for compatibility qualification or as demonstrators. They are usually loaned to OEM manufacturers prior to the chip's commercial release to allow product development or display. Engineering samples are usually handed out under a non-disclosure agreement or another type of confidentiality agreement. Some engineering samples, such as Pentium 4 processors were rare and favoured for having unlocked base-clock multipliers. More recently, Core 2 engineering samples have become more common and popular. Asian sellers were selling the Core 2 processors at major profit. Some engineering samples have been put through strenuous tests. Engineering sample processors are also offered on a technical loan to some full-time employees at Intel, and are usually desktop extreme edition processors." https://en.wikipedia.org/wiki/Coreu,"COREU (French: – Telex network of European correspondents, also EUKOR-Netzwerk in Austria) is a communication network of the European Union for the communication of the Council of the European Union, the European correspondents of the foreign ministries of the EU member states, permanent representatives of member states in Brussels, the European Commission, and the General Secretariat of the Council of the European Union. 
The European Parliament is not among the participants. COREU is the European equivalent of the American Secret Internet Protocol Router Network (SIPRNet, also known as Intelink-S). COREU's official aim is fast communication in case of crisis. The network enables a closer cooperation in matters regarding foreign affairs. In actuality the system's function exceeds that of mere communication, it also enables decision-making. COREU's first goal is to enable the exchange of information before and after decisions. Relaying upfront negotiations in preparation of meetings is the second goal. In addition, the system also allows the editing of documents and the decision-making, especially if there is little time. While the first two goals are preparatory measures for a shared foreign policy, the third is a methodical variant marked by practise that is defining for the image of the Common Foreign and Security Policy. Members (The following information dates from 2013):* There is one representative in each of the capital cities in the EU.(since 1973) In Germany for example, this is the European correspondent (EU-KOR) from the Foreign Office. In Austria it is the European correspondent from the Referat II.1.a in the Federal Ministry for Europe, Integration and Foreign Affairs They are the correspondents (since 1982) for the European Commission They comprise the secretariat for the European Council They also make up the European External Action Service (EEAS) (responsible for foreign policy issues, since 1987) Data volume and technical details COREU fu" https://en.wikipedia.org/wiki/Reliable%20multicast,"A reliable multicast is any computer networking protocol that provides a reliable sequence of packets to multiple recipients simultaneously, making it suitable for applications such as multi-receiver file transfer. Overview Multicast is a network addressing method for the delivery of information to a group of destinations simultaneously using the most efficient strategy to deliver the messages over each link of the network only once, creating copies only when the links to the multiple destinations split (typically network switches and routers). However, like the User Datagram Protocol, multicast does not guarantee the delivery of a message stream. Messages may be dropped, delivered multiple times, or delivered out of order. A reliable multicast protocol adds the ability for receivers to detect lost and/or out-of-order messages and take corrective action (similar in principle to TCP), resulting in a gap-free, in-order message stream. Reliability The exact meaning of reliability depends on the specific protocol instance. A minimal definition of reliable multicast is eventual delivery of all the data to all the group members, without enforcing any particular delivery order. However, not all reliable multicast protocols ensure this level of reliability; many of them trade efficiency for reliability, in different ways. For example, while TCP makes the sender responsible for transmission reliability, multicast NAK-based protocols shift the responsibility to receivers: the sender never knows for sure that all the receivers have in fact received all the data. RFC- 2887 explores the design space for bulk data transfer, with a brief discussion on the various issues and some hints at the possible different meanings of reliable. 
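A receiver-side sketch of the NAK-based approach described above, in which the receiver detects gaps in the sequence numbers and asks the sender for retransmission; the class name and message layout are illustrative and not taken from any particular protocol.

```python
# Minimal sketch of receiver-side loss detection in a NAK-based reliable
# multicast scheme: the receiver tracks sequence numbers, buffers
# out-of-order packets, and reports gaps back to the sender as NAKs.
class NakReceiver:
    def __init__(self):
        self.next_expected = 0      # next in-order sequence number
        self.buffer = {}            # out-of-order packets awaiting delivery

    def on_packet(self, seq, payload):
        """Accept a packet; return (delivered, naks) where naks lists missing seqs."""
        delivered, naks = [], []
        if seq < self.next_expected or seq in self.buffer:
            return delivered, naks              # duplicate, ignore
        self.buffer[seq] = payload
        if seq > self.next_expected:            # a gap: request retransmission
            naks = list(range(self.next_expected, seq))
        while self.next_expected in self.buffer:  # deliver any in-order run
            delivered.append(self.buffer.pop(self.next_expected))
            self.next_expected += 1
        return delivered, naks

r = NakReceiver()
print(r.on_packet(0, "a"))   # (['a'], [])
print(r.on_packet(2, "c"))   # ([], [1])  -- NAK for the missing packet 1
print(r.on_packet(1, "b"))   # (['b', 'c'], [])
```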
Reliable Group Data Delivery Reliable Group Data Delivery (RGDD) is a form of multicasting where an object is to be moved from a single source to a fixed set of receivers known before transmission begins. A variety of applications may need su" https://en.wikipedia.org/wiki/Brazier%20effect,"The Brazier effect was first discovered in 1927 by Brazier. He showed that when an initially straight tube was bent uniformly, the longitudinal tension and compression which resist the applied bending moment also tend to flatten or ovalise the cross-section. As the curvature increases, the flexural stiffness decreases. Brazier showed that under steadily increasing curvature the bending moment reaches a maximum value. After the bending moment reaches its maximum value, the structure becomes unstable, and so the object suddenly forms a ""kink"". From Brazier’s analysis it follows that the crushing pressure increases with the square of the curvature of the section, and thus with the square of the bending moment. See also Bending" https://en.wikipedia.org/wiki/Social%20software%20engineering,"Social software engineering (SSE) is a branch of software engineering that is concerned with the social aspects of software development and the developed software. SSE focuses on the socialness of both software engineering and developed software. On the one hand, the consideration of social factors in software engineering activities, processes and CASE tools is deemed to be useful to improve the quality of both development process and produced software. Examples include the role of situational awareness and multi-cultural factors in collaborative software development. On the other hand, the dynamicity of the social contexts in which software could operate (e.g., in a cloud environment) calls for engineering social adaptability as a runtime iterative activity. Examples include approaches which enable software to gather users' quality feedback and use it to adapt autonomously or semi-autonomously. SSE studies and builds socially-oriented tools to support collaboration and knowledge sharing in software engineering. SSE also investigates the adaptability of software to the dynamic social contexts in which it could operate and the involvement of clients and end-users in shaping software adaptation decisions at runtime. Social context includes norms, culture, roles and responsibilities, stakeholder's goals and interdependencies, end-users perception of the quality and appropriateness of each software behaviour, etc. The participants of the 1st International Workshop on Social Software Engineering and Applications (SoSEA 2008) proposed the following characterization: Community-centered: Software is produced and consumed by and/or for a community rather than focusing on individuals Collaboration/collectiveness: Exploiting the collaborative and collective capacity of human beings Companionship/relationship: Making explicit the various associations among people Human/social activities: Software is designed consciously to support human activities and to address social p" https://en.wikipedia.org/wiki/Stieltjes%20constants,"In mathematics, the Stieltjes constants are the numbers that occur in the Laurent series expansion of the Riemann zeta function: The constant is known as the Euler–Mascheroni constant. Representations The Stieltjes constants are given by the limit (In the case n = 0, the first summand requires evaluation of 00, which is taken to be 1.) 
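The two formulas referred to above can be written out in standard notation: the Laurent expansion of the zeta function about s = 1, which defines the Stieltjes constants, and their limit representation.

```latex
% Laurent expansion defining the Stieltjes constants \gamma_n
% (\gamma_0 is the Euler–Mascheroni constant), and their limit representation.
\zeta(s) = \frac{1}{s-1} + \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,\gamma_n\,(s-1)^n
\qquad
\gamma_n = \lim_{m\to\infty} \left( \sum_{k=1}^{m} \frac{(\ln k)^n}{k} - \frac{(\ln m)^{n+1}}{n+1} \right)
```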
Cauchy's differentiation formula leads to the integral representation Various representations in terms of integrals and infinite series are given in works of Jensen, Franel, Hermite, Hardy, Ramanujan, Ainsworth, Howell, Coppo, Connon, Coffey, Choi, Blagouchine and some other authors. In particular, Jensen-Franel's integral formula, often erroneously attributed to Ainsworth and Howell, states that where δn,k is the Kronecker symbol (Kronecker delta). Among other formulae, we find see. As concerns series representations, a famous series implying an integer part of a logarithm was given by Hardy in 1912 Israilov gave semi-convergent series in terms of Bernoulli numbers Connon, Blagouchine and Coppo gave several series with the binomial coefficients where Gn are Gregory's coefficients, also known as reciprocal logarithmic numbers (G1=+1/2, G2=−1/12, G3=+1/24, G4=−19/720,... ). More general series of the same nature include these examples and or where are the Bernoulli polynomials of the second kind and are the polynomials given by the generating equation respectively (note that ). Oloa and Tauraso showed that series with harmonic numbers may lead to Stieltjes constants Blagouchine obtained slowly-convergent series involving unsigned Stirling numbers of the first kind as well as semi-convergent series with rational terms only where m=0,1,2,... In particular, series for the first Stieltjes constant has a surprisingly simple form where Hn is the nth harmonic number. More complicated series for Stieltjes constants are given in works of Lehmer, Liang, Todd, Lavrik, Israilov, Stankus, Keiper, Nan-" https://en.wikipedia.org/wiki/List%20of%20numerical%20libraries,"This is a list of numerical libraries, which are libraries used in software development for performing numerical calculations. It is not a complete listing but is instead a list of numerical libraries with articles on Wikipedia, with few exceptions. The choice of a typical library depends on a range of requirements such as: desired features (e.g. large dimensional linear algebra, parallel computation, partial differential equations), licensing, readability of API, portability or platform/compiler dependence (e.g. Linux, Windows, Visual C++, GCC), performance, ease-of-use, continued support from developers, standard compliance, specialized optimization in code for specific application scenarios or even the size of the code-base to be installed. Multi-language C C++ Delphi ALGLIB - an open source numerical analysis library. .NET Framework languages C#, F#, VB.NET and PowerShell Fortran Java Perl Perl Data Language gives standard Perl the ability to compactly store and speedily manipulate the large N-dimensional data arrays. It can perform complex and matrix maths, and has interfaces for the GNU Scientific Library, LINPACK, PROJ, and plotting with PGPLOT. There are libraries on CPAN adding support for the linear algebra library LAPACK, the Fourier transform library FFTW, and plotting with gnuplot, and PLplot. Python Others XNUMBERS – multi-precision floating-Point computing and numerical methods for Microsoft Excel. INTLAB – interval arithmetic library for MATLAB. See also List of computer algebra systems Comparison of numerical-analysis software List of information graphics software List of numerical-analysis software List of optimization software List of statistical software" https://en.wikipedia.org/wiki/TRANZ%20330,"The TRANZ 330 is a popular point-of-sale device manufactured by VeriFone in 1985. 
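As a numerical sanity check on the Stieltjes-constant representations discussed above, the mpmath library (assuming it is installed) exposes these constants directly.

```python
# Numerical values of the first few Stieltjes constants via mpmath.
# mpmath.stieltjes(n) returns gamma_n at the current working precision.
from mpmath import mp, stieltjes

mp.dps = 30  # working precision in decimal places
for n in range(4):
    print(f"gamma_{n} = {stieltjes(n)}")
# gamma_0 is the Euler–Mascheroni constant 0.5772156649...
```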
The most common application for these units is bank and credit card processing, however, as a general purpose computer, they can perform other novel functions. Other applications include gift/benefit card processing, prepaid phone cards, payroll and employee timekeeping, and even debit and ATM cards. They are programmed in a proprietary VeriFone TCL language (Terminal Control Language), which is unrelated to the Tool Command Language used in UNIX environments. Point of sale companies Embedded systems Payment systems Banking equipment" https://en.wikipedia.org/wiki/Slot%20%28computer%20architecture%29,"A slot comprises the operation issue and data path machinery surrounding a set of one or more execution unit (also called a functional unit (FU)) which share these resources. The term slot is common for this purpose in very long instruction word (VLIW) computers, where the relationship between operation in an instruction and pipeline to execute it is explicit. In dynamically scheduled machines, the concept is more commonly called an execute pipeline. Modern conventional central processing units (CPU) have several compute pipelines, for example: two arithmetic logic units (ALU), one floating point unit (FPU), one Streaming SIMD Extensions (SSE) (such as MMX), one branch. Each of them can issue one instruction per basic instruction cycle, but can have several instructions in process. These are what correspond to slots. The pipelines may have several FUs, such as an adder and a multiplier, but only one FU in a pipeline can be issued to in a given cycle. The FU population of a pipeline (slot) is a design option in a CPU." https://en.wikipedia.org/wiki/Sum%20of%20squares,"In mathematics, statistics and elsewhere, sums of squares occur in a number of contexts: Statistics For partitioning of variance, see Partition of sums of squares For the ""sum of squared deviations"", see Least squares For the ""sum of squared differences"", see Mean squared error For the ""sum of squared error"", see Residual sum of squares For the ""sum of squares due to lack of fit"", see Lack-of-fit sum of squares For sums of squares relating to model predictions, see Explained sum of squares For sums of squares relating to observations, see Total sum of squares For sums of squared deviations, see Squared deviations from the mean For modelling involving sums of squares, see Analysis of variance For modelling involving the multivariate generalisation of sums of squares, see Multivariate analysis of variance Number theory For the sum of squares of consecutive integers, see Square pyramidal number For representing an integer as a sum of squares of 4 integers, see Lagrange's four-square theorem Legendre's three-square theorem states which numbers can be expressed as the sum of three squares Jacobi's four-square theorem gives the number of ways that a number can be represented as the sum of four squares. For the number of representations of a positive integer as a sum of squares of k integers, see Sum of squares function. Fermat's theorem on sums of two squares says which primes are sums of two squares. The sum of two squares theorem generalizes Fermat's theorem to specify which composite numbers are the sums of two squares. Pythagorean triples are sets of three integers such that the sum of the squares of the first two equals the square of the third. A Pythagorean prime is a prime that is the sum of two squares; Fermat's theorem on sums of two squares states which primes are Pythagorean primes. 
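A brute-force illustration of the two-squares statements above, including Fermat's criterion that an odd prime is a sum of two squares exactly when it is congruent to 1 mod 4; the helper functions are illustrative.

```python
# Brute-force representation of n as a sum of two squares, and a check of
# Fermat's criterion: a prime p is a sum of two squares iff p == 2 or p % 4 == 1.
from math import isqrt

def two_squares(n):
    """Return (a, b) with a*a + b*b == n, or None if no such pair exists."""
    for a in range(isqrt(n) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2:
            return (a, b)
    return None

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

for p in [2, 3, 5, 13, 17, 19, 29]:
    rep = two_squares(p)
    assert is_prime(p)
    assert (rep is not None) == (p == 2 or p % 4 == 1)
    print(p, rep)
```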
Pythagorean triangles with integer altitude from the hypotenuse have the sum of squares of inverses of the integer legs equal to the square of the inverse of t" https://en.wikipedia.org/wiki/Proofreading%20%28biology%29,"The term proofreading is used in genetics to refer to the error-correcting processes, first proposed by John Hopfield and Jacques Ninio, involved in DNA replication, immune system specificity, enzyme-substrate recognition among many other processes that require enhanced specificity. The proofreading mechanisms of Hopfield and Ninio are non-equilibrium active processes that consume ATP to enhance specificity of various biochemical reactions. In bacteria, all three DNA polymerases (I, II and III) have the ability to proofread, using 3’ → 5’ exonuclease activity. When an incorrect base pair is recognized, DNA polymerase reverses its direction by one base pair of DNA and excises the mismatched base. Following base excision, the polymerase can re-insert the correct base and replication can continue. In eukaryotes, only the polymerases that deal with the elongation (delta and epsilon) have proofreading ability (3’ → 5’ exonuclease activity). Proofreading also occurs in mRNA translation for protein synthesis. In this case, one mechanism is the release of any incorrect aminoacyl-tRNA before peptide bond formation. The extent of proofreading in DNA replication determines the mutation rate, and is different in different species. For example, loss of proofreading due to mutations in the DNA polymerase epsilon gene results in a hyper-mutated genotype with >100 mutations per Mbase of DNA in human colorectal cancers. The extent of proofreading in other molecular processes can depend on the effective population size of the species and the number of genes affected by the same proofreading mechanism. Bacteriophage T4 DNA polymerase Bacteriophage (phage) T4 gene 43 encodes the phage’s DNA polymerase replicative enzyme. Temperature-sensitive (ts) gene 43 mutants have been identified that have an antimutator phenotype, that is a lower rate of spontaneous mutation than wild type. Studies of one of these mutants, tsB120, showed that the DNA polymerase specified by this mutant c" https://en.wikipedia.org/wiki/List%20of%20PPAD-complete%20problems,"This is a list of PPAD-complete problems. Fixed-point theorems Sperner's lemma Brouwer fixed-point theorem Kakutani fixed-point theorem Game theory Nash equilibrium Core of Balanced Games Equilibria in game theory and economics Fisher market equilibria Arrow-Debreu equilibria Approximate Competitive Equilibrium from Equal Incomes Finding clearing payments in financial networks Graph theory Fractional stable paths problems Fractional hypergraph matching (see also the NP-complete Hypergraph matching) Fractional strong kernel Miscellaneous Scarf's lemma Fractional bounded budget connection games" https://en.wikipedia.org/wiki/List%20of%20mathematical%20functions,"In mathematics, some functions or groups of functions are important enough to deserve their own names. This is a listing of articles which explain some of these functions in more detail. There is a large theory of special functions which developed out of statistics and mathematical physics. A modern, abstract point of view contrasts large function spaces, which are infinite-dimensional and within which most functions are 'anonymous', with special functions picked out by properties such as symmetry, or relationship to harmonic analysis and group representations. 
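A toy Monte Carlo sketch of the excise-and-retry proofreading step described in the biology excerpt above; the error and excision probabilities are invented for illustration and are not measured polymerase parameters.

```python
# Toy model of exonucleolytic proofreading: each incorporated base is wrong
# with probability p_err; a wrong base is recognized and excised (then
# re-attempted) with probability p_excise. Probabilities are illustrative only.
import random

def replicate(n_bases, p_err=1e-2, p_excise=0.99, rng=random.Random(0)):
    errors = 0
    for _ in range(n_bases):
        while True:
            wrong = rng.random() < p_err
            if wrong and rng.random() < p_excise:
                continue          # mismatch excised; polymerase retries
            errors += wrong       # base is kept (correct, or a missed mismatch)
            break
    return errors / n_bases

print("without proofreading:", replicate(200_000, p_excise=0.0))
print("with proofreading:   ", replicate(200_000, p_excise=0.99))
```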
See also List of types of functions Elementary functions Elementary functions are functions built from basic operations (e.g. addition, exponentials, logarithms...) Algebraic functions Algebraic functions are functions that can be expressed as the solution of a polynomial equation with integer coefficients. Polynomials: Can be generated solely by addition, multiplication, and raising to the power of a positive integer. Constant function: polynomial of degree zero, graph is a horizontal straight line Linear function: First degree polynomial, graph is a straight line. Quadratic function: Second degree polynomial, graph is a parabola. Cubic function: Third degree polynomial. Quartic function: Fourth degree polynomial. Quintic function: Fifth degree polynomial. Sextic function: Sixth degree polynomial. Rational functions: A ratio of two polynomials. nth root Square root: Yields a number whose square is the given one. Cube root: Yields a number whose cube is the given one. Elementary transcendental functions Transcendental functions are functions that are not algebraic. Exponential function: raises a fixed number to a variable power. Hyperbolic functions: formally similar to the trigonometric functions. Logarithms: the inverses of exponential functions; useful to solve equations involving exponentials. Natural logarithm Common logarithm Binary logarithm Power functions: raise a variable numb" https://en.wikipedia.org/wiki/Photonically%20Optimized%20Embedded%20Microprocessors,"The Photonically Optimized Embedded Microprocessors (POEM) is DARPA program. It should demonstrate photonic technologies that can be integrated within embedded microprocessors and enable energy-efficient high-capacity communications between the microprocessor and DRAM. For realizing POEM technology CMOS and DRAM-compatible photonic links should operate at high bit-rates with very low power dissipation. Current research Currently research in this field is at University of Colorado, Berkley University, and Nanophotonic Systems Laboratory ( Ultra-Efficient CMOS-Compatible Grating Coupler Design)." https://en.wikipedia.org/wiki/Reduction%20%28mathematics%29,"In mathematics, reduction refers to the rewriting of an expression into a simpler form. For example, the process of rewriting a fraction into one with the smallest whole-number denominator possible (while keeping the numerator a whole number) is called ""reducing a fraction"". Rewriting a radical (or ""root"") expression with the smallest possible whole number under the radical symbol is called ""reducing a radical"". Minimizing the number of radicals that appear underneath other radicals in an expression is called denesting radicals. Algebra In linear algebra, reduction refers to applying simple rules to a series of equations or matrices to change them into a simpler form. In the case of matrices, the process involves manipulating either the rows or the columns of the matrix and so is usually referred to as row-reduction or column-reduction, respectively. Often the aim of reduction is to transform a matrix into its ""row-reduced echelon form"" or ""row-echelon form""; this is the goal of Gaussian elimination. Calculus In calculus, reduction refers to using the technique of integration by parts to evaluate integrals by reducing them to simpler forms. Static (Guyan) reduction In dynamic analysis, static reduction refers to reducing the number of degrees of freedom. 
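Two of the reductions mentioned above in executable form: reducing a fraction to lowest terms and row-reducing a matrix to reduced row-echelon form (the latter assumes SymPy is available).

```python
# "Reducing a fraction" and row-reduction to reduced row-echelon form.
# Matrix.rref() performs Gauss–Jordan elimination.
from fractions import Fraction
from sympy import Matrix

print(Fraction(42, 56))          # 3/4 -- fraction reduced to lowest terms

A = Matrix([[1, 2, -1, 3],
            [2, 4, 0, 8],
            [3, 6, 1, 13]])
rref_A, pivots = A.rref()
print(rref_A)                    # row-reduced echelon form of A
print(pivots)                    # columns containing the pivots
```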
Static reduction can also be used in finite element analysis to refer to simplification of a linear algebraic problem. Since a static reduction requires several inversion steps it is an expensive matrix operation and is prone to some error in the solution. Consider the following system of linear equations in an FEA problem: K11 x1 + K12 x2 = F1 and K21 x1 + K22 x2 = F2, where K and F are known and K, x and F are divided into submatrices as shown above. If F2 contains only zeros, and only x1 is desired, K can be reduced to yield the following system of equations: K_reduced x1 = F1. K_reduced is obtained by writing out the set of equations as follows: K11 x1 + K12 x2 = F1 (1) and K21 x1 + K22 x2 = 0 (2). Equation (2) can be solved for x2 (assuming invertibility of K22): x2 = -K22^(-1) K21 x1. And substituting into (1) gives (K11 - K12 K22^(-1) K21) x1 = F1. Thus K_reduced = K11 - K12 K22^(-1) K21. In a similar fashion, any row or c" https://en.wikipedia.org/wiki/Convia,"Convia, Inc., based in Buffalo Grove, Illinois, is an American manufacturer of components which provide an integrated energy management platform that allows for the control and metering of lighting, plug-loads and HVAC. It is notable as one of the first companies to deliver and control power while at the same time monitoring energy and adapting its use in real-time. History In the late 1990s, Herman Miller, Inc., Convia's parent company, realized that they could not create truly flexible environments until the infrastructure of the building became more flexible. They decided that if a building infrastructure embraced technology then the applications, including systems furniture, could also take advantage of that infrastructure and become more intelligent. The need for intelligent infrastructure led Herman Miller to partner with a leading technology think tank called Applied Minds in Glendale, California and their founder Danny Hillis. Danny Hillis is considered a pioneer of the parallel computing industry and is the lead designer of Convia. Convia was launched in 2004. Partners In 2009, Herman Miller, Inc., and Legrand North America, an innovative manufacturer of electrical and network infrastructure solutions, announced a strategic alliance designed to broaden the reach of energy management strategies to fuel the adoption of flexible, sustainable spaces, ultimately reducing real estate and building operating costs while improving worker productivity. Under the terms of the agreement, technology from Herman Miller's Convia, Inc. subsidiary is embedded into Wiremold wire and cable management systems from Legrand. These include modular power and lighting distribution systems, floor boxes, poke-thru devices and architectural columns, which provide flexible, accessible power distribution to building owners and managers. Convia technology integrates a facility's power delivery and other infrastructure and technology applications, including lighting, HVAC, and occupancy" https://en.wikipedia.org/wiki/Flux%20%28biology%29,"In general, flux in biology relates to movement of a substance between compartments. There are several cases where the concept of flux is important. The movement of molecules across a membrane: in this case, flux is defined by the rate of diffusion or transport of a substance across a permeable membrane. Except in the case of active transport, net flux is directly proportional to the concentration difference across the membrane, the surface area of the membrane, and the membrane permeability constant. 
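A minimal NumPy sketch of the static (Guyan) condensation written out above; the 4x4 stiffness matrix and load vector are invented for illustration.

```python
# Static (Guyan) reduction: eliminate the x2 block (zero applied load) and
# keep only the x1 block: K_red = K11 - K12 @ inv(K22) @ K21.
# The matrix below is an arbitrary symmetric positive-definite example.
import numpy as np

K = np.array([[ 4., -1.,  0., -1.],
              [-1.,  4., -1.,  0.],
              [ 0., -1.,  4., -1.],
              [-1.,  0., -1.,  4.]])
F1 = np.array([1., 2.])           # loads on the retained DOFs
n = 2                             # first n DOFs are retained (x1)

K11, K12 = K[:n, :n], K[:n, n:]
K21, K22 = K[n:, :n], K[n:, n:]

K_red = K11 - K12 @ np.linalg.solve(K22, K21)
x1 = np.linalg.solve(K_red, F1)
x2 = -np.linalg.solve(K22, K21 @ x1)

# The condensed solution matches the full solve with F2 = 0:
x_full = np.linalg.solve(K, np.concatenate([F1, np.zeros(K.shape[0] - n)]))
print(np.allclose(np.concatenate([x1, x2]), x_full))   # True
```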
In ecology, flux is often considered at the ecosystem level – for instance, accurate determination of carbon fluxes using techniques like eddy covariance (at a regional and global level) is essential for modeling the causes and consequences of global warming. Metabolic flux refers to the rate of flow of metabolites through a biochemical network, along a linear metabolic pathway, or through a single enzyme. A calculation may also be made of carbon flux or flux of other elemental components of biomolecules (e.g. nitrogen). The general unit of flux is chemical mass /time (e.g., micromole/minute; mg/kg/minute). Flux rates are dependent on a number of factors, including: enzyme concentration; the concentration of precursor, product, and intermediate metabolites; post-translational modification of enzymes; and the presence of metabolic activators or repressors. Metabolic flux in biologic systems can refer to biosynthesis rates of polymers or other macromolecules, such as proteins, lipids, polynucleotides, or complex carbohydrates, as well as the flow of intermediary metabolites through pathways. Metabolic control analysis and flux balance analysis provide frameworks for understanding metabolic fluxes and their constraints. Measuring movement Flux is the net movement of particles across a specified area in a specified period of time. The particles may be ions or molecules, or they may be larger, like insects, muskrats or cars. The units of time can be anything from milli" https://en.wikipedia.org/wiki/Electronic%20engineering,"Electronic engineering is a sub-discipline of electrical engineering which emerged in the early 20th century and is distinguished by the additional use of active components such as semiconductor devices to amplify and control electric current flow. Previously electrical engineering only used passive devices such as mechanical switches, resistors, inductors, and capacitors. It covers fields such as: analog electronics, digital electronics, consumer electronics, embedded systems and power electronics. It is also involved in many related fields, for example solid-state physics, radio engineering, telecommunications, control systems, signal processing, systems engineering, computer engineering, instrumentation engineering, electric power control, photonics and robotics. The Institute of Electrical and Electronics Engineers (IEEE) is one of the most important professional bodies for electronics engineers in the US; the equivalent body in the UK is the Institution of Engineering and Technology (IET). The International Electrotechnical Commission (IEC) publishes electrical standards including those for electronics engineering. History and development Electronics engineering as a profession emerged following the identification of the electron in 1897 and the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, that inaugurated the field of electronics. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages such as radio signals from a radio antenna possible with a non-mechanical device. The growth of electronics was rapid. By the early 1920s, commercial radio broadcasting and communications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry. 
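The proportionality for passive membrane flux stated above, as a small function; the parameter values and units are illustrative.

```python
# Net passive flux across a membrane: J = P * A * (C_out - C_in).
# P: permeability constant (cm/s), A: membrane area (cm^2),
# concentrations in micromol/cm^3; the numbers below are illustrative only.
def net_flux(permeability, area, c_out, c_in):
    """Net flux in micromol/s (positive = net movement inward)."""
    return permeability * area * (c_out - c_in)

print(net_flux(permeability=1e-4, area=5.0, c_out=10.0, c_in=2.0))  # 0.004
```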
The discipline was further enhanced by the large a" https://en.wikipedia.org/wiki/Center%20of%20curvature,"In geometry, the center of curvature of a curve is found at a point that is at a distance from the curve equal to the radius of curvature lying on the normal vector. It is the point at infinity if the curvature is zero. The osculating circle to the curve is centered at the centre of curvature. Cauchy defined the center of curvature C as the intersection point of two infinitely close normal lines to the curve. The locus of centers of curvature for each point on the curve comprise the evolute of the curve. This term is generally used in physics regarding the study of lenses and mirrors (see radius of curvature (optics)). It can also be defined as the spherical distance between the point at which all the rays falling on a lens or mirror either seems to converge to (in the case of convex lenses and concave mirrors) or diverge from (in the case of concave lenses or convex mirrors) and the lens/mirror itself. See also Curvature Differential geometry of curves" https://en.wikipedia.org/wiki/Path%20integral%20formulation,"The path integral formulation is a description in quantum mechanics that generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique classical trajectory for a system with a sum, or functional integral, over an infinity of quantum-mechanically possible trajectories to compute a quantum amplitude. This formulation has proven crucial to the subsequent development of theoretical physics, because manifest Lorentz covariance (time and space components of quantities enter equations in the same way) is easier to achieve than in the operator formalism of canonical quantization. Unlike previous methods, the path integral allows one to easily change coordinates between very different canonical descriptions of the same quantum system. Another advantage is that it is in practice easier to guess the correct form of the Lagrangian of a theory, which naturally enters the path integrals (for interactions of a certain type, these are coordinate space or Feynman path integrals), than the Hamiltonian. Possible downsides of the approach include that unitarity (this is related to conservation of probability; the probabilities of all physically possible outcomes must add up to one) of the S-matrix is obscure in the formulation. The path-integral approach has proven to be equivalent to the other formalisms of quantum mechanics and quantum field theory. Thus, by deriving either approach from the other, problems associated with one or the other approach (as exemplified by Lorentz covariance or unitarity) go away. The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s, which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all poss" https://en.wikipedia.org/wiki/Immunoglobulin%20class%20switching,"Immunoglobulin class switching, also known as isotype switching, isotypic commutation or class-switch recombination (CSR), is a biological mechanism that changes a B cell's production of immunoglobulin from one type to another, such as from the isotype IgM to the isotype IgG. 
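For a plane curve given as y = f(x), the center of curvature described in the excerpt above can be computed symbolically using the standard closed-form expressions; the parabola is an arbitrary example and SymPy is assumed to be available.

```python
# Center of curvature of y = f(x): the point at distance R = (1+y'^2)^(3/2)/|y''|
# from the curve along the normal, with the standard closed forms
#   xc = x - y'(1 + y'^2)/y'',   yc = y + (1 + y'^2)/y''.
import sympy as sp

x = sp.symbols('x')
f = x**2
fp, fpp = sp.diff(f, x), sp.diff(f, x, 2)

xc = sp.simplify(x - fp * (1 + fp**2) / fpp)
yc = sp.simplify(f + (1 + fp**2) / fpp)

print(xc, yc)                        # evolute of the parabola: -4*x**3, 3*x**2 + 1/2
print(xc.subs(x, 0), yc.subs(x, 0))  # at the vertex: (0, 1/2), i.e. R = 1/2
```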
During this process, the constant-region portion of the antibody heavy chain is changed, but the variable region of the heavy chain stays the same (the terms variable and constant refer to changes or lack thereof between antibodies that target different epitopes). Since the variable region does not change, class switching does not affect antigen specificity. Instead, the antibody retains affinity for the same antigens, but can interact with different effector molecules. Mechanism Class switching occurs after activation of a mature B cell via its membrane-bound antibody molecule (or B cell receptor) to generate the different classes of antibody, all with the same variable domains as the original antibody generated in the immature B cell during the process of V(D)J recombination, but possessing distinct constant domains in their heavy chains. Naïve mature B cells produce both IgM and IgD, which are the first two heavy chain segments in the immunoglobulin locus. After activation by antigen, these B cells proliferate. If these activated B cells encounter specific signaling molecules via their CD40 and cytokine receptors (both modulated by T helper cells), they undergo antibody class switching to produce IgG, IgA or IgE antibodies. During class switching, the constant region of the immunoglobulin heavy chain changes but the variable regions do not, and therefore antigenic specificity, remains the same. This allows different daughter cells from the same activated B cell to produce antibodies of different isotypes or subtypes (e.g. IgG1, IgG2 etc.). In humans, the order of the heavy chain exons is as follows: μ - IgM δ - IgD γ3 - IgG3 γ1 - IgG1 α1 - IgA1 γ2 - IgG2 γ4 - IgG4 ε - IgE α2 " https://en.wikipedia.org/wiki/Mother%20of%20vinegar,"Mother of vinegar is a biofilm composed of a form of cellulose, yeast, and bacteria that sometimes develops on fermenting alcoholic liquids during the process that turns alcohol into acetic acid with the help of oxygen from the air and acetic acid bacteria (AAB). It is similar to the symbiotic culture of bacteria and yeast (SCOBY) mostly known from production of kombucha, but develops to a much lesser extent due to lesser availability of yeast, which is often no longer present in wine/cider at this stage, and a different population of bacteria. Mother of vinegar is often added to wine, cider, or other alcoholic liquids to produce vinegar at home, although only the bacteria is required, but historically has also been used in large scale production. Discovery Hermann Boerhaave was one of the first scientists to study vinegar. In the early 1700s, he showed the importance of the mother of vinegar in the acetification process, and how having an increased oxidation surface allowed for better vinegar production. He called the mother a ""vegetal substance"" or ""flower."" In 1822, South African botanist, Christian Hendrik Persoon named the mother of vinegar Mycoderma, which he believed was a fungus. He attributed the vinegar production to the Mycoderma, since it formed on the surface of wine when it has been left open to air. In 1861, Louis Pasteur made the conclusion that vinegar is made by a ""plant"" that belonged to the group Mycoderma, and not made purely by chemical oxidation of ethanol. He named the plant Mycoderma aceti. Mycoderma aceti, is a Neo-Latin expression, from the Greek μύκης (""fungus"") plus δέρμα (""skin""), and the Latin aceti (""of the acid""). 
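A tiny model of the ordering constraint implied by the heavy-chain gene order listed above; the assumption that switching can only proceed downstream (because intervening segments are deleted) is standard immunology but is not stated in the excerpt.

```python
# Illustrative model: class switching moves downstream along the human
# heavy-chain constant-gene order, since upstream segments are lost.
HEAVY_CHAIN_ORDER = ["IgM", "IgD", "IgG3", "IgG1", "IgA1", "IgG2", "IgG4", "IgE", "IgA2"]

def available_after_switch(current_isotype):
    """Isotypes still reachable after the cell expresses `current_isotype`."""
    i = HEAVY_CHAIN_ORDER.index(current_isotype)
    return HEAVY_CHAIN_ORDER[i + 1:]

print(available_after_switch("IgG1"))   # ['IgA1', 'IgG2', 'IgG4', 'IgE', 'IgA2']
```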
Martinus Willem Beijerinck, who was a founder of modern microbiology, identified acetic acid bacteria in the mother of vinegar. He named the bacteria Acetobacter aceti in 1898. In 1935, Toshinobu Asai, a Japanese microbiologst, discovered a new genus of bacteria in the mother of vinegar, Gluconobacter. After this disc" https://en.wikipedia.org/wiki/Spectral%20density,"The power spectrum of a time series describes the distribution of power into frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies, or a spectrum of frequencies over a continuous range. The statistical average of any sort of signal (including noise) as analyzed in terms of its frequency content, is called its spectrum. When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density (or simply power spectrum), which applies to signals existing over all time, or over a time period large enough (especially in relation to the duration of a measurement) that it could as well have been over an infinite time interval. The power spectral density (PSD) then refers to the spectral energy distribution that would be found per unit time, since the total energy of such a signal over all time would generally be infinite. Summation or integration of the spectral components yields the total power (for a physical process) or variance (in a statistical process), identical to what would be obtained by integrating over the time domain, as dictated by Parseval's theorem. The spectrum of a physical process often contains essential information about the nature of . For instance, the pitch and timbre of a musical instrument are immediately determined from a spectral analysis. The color of a light source is determined by the spectrum of the electromagnetic wave's electric field as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform, and generalizations based on Fourier analysis. In many cases the time domain is not specifically employed in practice, such as when a dispersive prism is used to obtain a spectrum of light in a spectrograph, or when a sound is perceived" https://en.wikipedia.org/wiki/Unified%20Diagnostic%20Services,"Unified Diagnostic Services (UDS) is a diagnostic communication protocol used in electronic control units (ECUs) within automotive electronics, which is specified in the ISO 14229-1. It is derived from ISO 14230-3 (KWP2000) and the now obsolete ISO 15765-3 (Diagnostic Communication over Controller Area Network (DoCAN)). 'Unified' in this context means that it is an international and not a company-specific standard. By now this communication protocol is used in all new ECUs made by Tier 1 suppliers of Original Equipment Manufacturer (OEM), and is incorporated into other standards, such as AUTOSAR. The ECUs in modern vehicles control nearly all functions, including electronic fuel injection (EFI), engine control, the transmission, anti-lock braking system, door locks, braking, window operation, and more. Diagnostic tools are able to contact all ECUs installed in a vehicle, which has UDS services enabled. In contrast to the CAN bus protocol, which only uses the first and second layers of the OSI model, UDS utilizes the fifth and seventh layers of the OSI model. 
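A short NumPy check of the Parseval statement in the spectral density excerpt above: summing a suitably normalized periodogram of a finite signal reproduces its time-domain power. The signal is synthetic.

```python
# Power spectrum of a short synthetic signal, and a check that summing the
# spectral components equals the mean-square (time-domain) power (Parseval).
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 0.05 * t) + 0.5 * rng.standard_normal(n)

X = np.fft.rfft(x)
# One-sided periodogram normalized so that its sum equals mean(x**2):
psd = np.abs(X) ** 2 / n**2
psd[1:-1] *= 2                      # fold in negative frequencies (n is even)

print(np.mean(x ** 2), psd.sum())   # the two numbers agree
```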
The Service ID (SID) and the parameters associated with the services are contained in the payload of a message frame. Modern vehicles have a diagnostic interface for off-board diagnostics, which makes it possible to connect a computer (client) or diagnostics tool, which is referred to as tester, to the communication system of the vehicle. Thus, UDS requests can be sent to the controllers which must provide a response (this may be positive or negative). This makes it possible to interrogate the fault memory of the individual control units, to update them with new firmware, have low-level interaction with their hardware (e.g. to turn a specific output on or off), or to make use of special functions (referred to as routines) to attempt to understand the environment and operating conditions of an ECU to be able to diagnose faulty or otherwise undesirable behavior. Services SID (Service Identifier) See also On-" https://en.wikipedia.org/wiki/IPv4%20shared%20address%20space,"In order to ensure proper working of carrier-grade NAT (CGN), and, by doing so, alleviating the demand for the last remaining IPv4 addresses, a size IPv4 address block was assigned by Internet Assigned Numbers Authority (IANA) to be used as shared address space. This block of addresses is specifically meant to be used by Internet service providers (or ISPs) that implement carrier-grade NAT, to connect their customer-premises equipment (CPE) to their core routers. Instead of using unique addresses from the rapidly depleting pool of available globally unique IPv4 addresses, ISPs use addresses in for this purpose. Because the network between CPEs and the ISP's routers is private to each ISP, all ISPs may share this block of addresses. Background If an ISP deploys a CGN and uses private Internet address space (networks , , ) to connect their customers, there is a risk that customer equipment using an internal network in the same range will stop working. The reason is that routing will not work if the same address ranges are used on both the private and public sides of a customer’s network address translation (NAT) equipment. Normal packet flow can therefore be disrupted and the customer effectively cut off the Internet, unless the customer chooses another private address range that does not conflict with the range selected by their ISP. This prompted some ISPs to develop policy within American Registry for Internet Numbers (ARIN) to allocate new private address space for CGNs. ARIN, however, deferred to the Internet Engineering Task Force (IETF) before implementing the policy, indicating that the matter was not typical allocation but a reservation for technical purposes. In 2012, the IETF defined a Shared Address Space for use in ISP CGN deployments and NAT devices that can handle the same addresses occurring both on inbound and outbound interfaces. ARIN returned space to the IANA as needed for this allocation and ""The allocated address block is "". Transition to " https://en.wikipedia.org/wiki/HD-PLC,"HD-PLC (short for High Definition Power Line Communication) is one of the wired communication technologies. It adopts high frequency band (2 MHz~28 MHz) over mediums like powerlines, phone lines, twisted-pair, and coaxial cables. It is the IEEE 1901-based standard. Specification and features There are essentially two different types of HD-PLC: HD-PLC Complete and HD-PLC Multi-hop. They are incompatible. HD-PLC Complete This is for high speed applications such as TV, AV, and surveillance camera. 
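The SID-in-payload convention described above can be sketched with a small encoder and response parser; the positive-response rule (request SID plus 0x40) and the 0x7F negative-response format follow ISO 14229, while the helper names and the chosen data identifier are illustrative.

```python
# Sketch of UDS request/response framing at the payload level.
# Positive responses echo the request SID + 0x40; negative responses are
# 0x7F, <request SID>, <negative response code>.
READ_DATA_BY_IDENTIFIER = 0x22

def build_request(sid, data=b""):
    return bytes([sid]) + data

def parse_response(request_sid, frame):
    if frame[0] == request_sid + 0x40:
        return ("positive", frame[1:])
    if frame[0] == 0x7F and frame[1] == request_sid:
        return ("negative", frame[2])          # third byte is the NRC
    raise ValueError("unexpected response")

req = build_request(READ_DATA_BY_IDENTIFIER, bytes([0xF1, 0x90]))  # a data identifier
print(req.hex())                                                   # 22f190
print(parse_response(0x22, bytes([0x62, 0xF1, 0x90, 0x41, 0x42]))) # ('positive', ...)
print(parse_response(0x22, bytes([0x7F, 0x22, 0x11])))             # ('negative', 17)
```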
The major technical features include: IEEE 1901 full compliant QoS by the priority control CSMA/CA and DVTP(Dynamic Virtual Token Passing) supported Concurrent multi-AV stream, VoIP, and file transfer and file transfer supported using IP packet classification Multi-network access at priority CSMA/CA with network synchronization HD-PLC Multi-hop This is for long-distance applications such as smart meter, building network, factory, energy management, and IoT devices. The major technical features include: ITU-T G.9905 multihop technology Common features Uplinking/downlinking through 432 of 26 MHz (between 1.8 MHz and 28 MHz) bandwidth subcarriers with Wavelet OFDM Maximum 240 Mbit/s PHY rate Multilevel modulation for each subcarrier which suits the properties of the power line transmission channel and allows for the best transmission speed Subcarrier masking with the arbitrary number which can comply with the rules in each country Forward error correction (FEC) which enables effective frame transmission Channel estimation launch system with change detector for cycle and transmission channel HD-PLC network bridging compatible to Ethernet address system Advanced encryption with 128 bit AES 4th-generation HD-PLC (HD-PLC Quatro Core technology) We now come to communication speed issues like high-definition video images (4K/8K) or in some cases multi hop technology is not enough to reach an isolated and distant PLC terminal. HD-PLC Quatro Core has " https://en.wikipedia.org/wiki/Dell%20Networking%20Operating%20System,"DNOS or Dell Networking Operating System is a network operating system running on switches from Dell Networking. It is derived from either the PowerConnect OS (DNOS 6.x) or Force10 OS/FTOS (DNOS 9.x) and will be made available for the 10G and faster Dell Networking S-series switches, the Z-series 40G core switches and DNOS6 is available for the N-series switches. Two version families The DNOS network operating system family comes in a few main versions: DNOS3 DNOS 3.x: This is a family of firmware for the campus access switches that can only be managed using a web based GUI or run as unmanaged device. DNOS6 DNOS 6.x: This is the operating system running on the Dell Networking N-series (campus) networking switches. It is the latest version of the 'PowerConnect' operating system, running on a Linux Kernel. It is available as upgrade for the PowerConnect 8100 series switches (which then become a Dell Networking N40xx switch) and it also is installed on all DN N1000, N2000 and N3000 series switches. It has a full web-based GUI together with a full CLI (command line interface) and the CLI will be very similar to the original PowerConnect CLI, though with a range of new features like PVSTP (per VLAN spanning tree), Policy Based Routing and MLAG. DNOS9 DNOS 9.x: TeUTg on NetBSD. Only the PowerConnect 8100 will be able to run on DNOS 6.x: all other PowerConnect ethernet switches will continue to run its own PowerConnect OS (on top of VxWorks) while the PowerConnect W-series run on a Dell specific version of ArubaOS. The Dell Networking S- xxxx and Z9x00 series will run on DNOS where the other Dell Networking switches will continue to run FTOS 8.x firmware. OS10 OS10 is a Linux-based open networking OS that can run on all Open Network Install Environment (ONIE) switches. As it runs directly in a Linux environment network admins can highly automate the network platform and manage the switches in a similar way as the (Linux) servers. 
Hardware Abstraction Layer Three " https://en.wikipedia.org/wiki/Outline%20of%20linear%20algebra,"This is an outline of topics related to linear algebra, the branch of mathematics concerning linear equations and linear maps and their representations in vector spaces and through matrices. Linear equations Linear equation System of linear equations Determinant Minor Cauchy–Binet formula Cramer's rule Gaussian elimination Gauss–Jordan elimination Overcompleteness Strassen algorithm Matrices Matrix Matrix addition Matrix multiplication Basis transformation matrix Characteristic polynomial Trace Eigenvalue, eigenvector and eigenspace Cayley–Hamilton theorem Spread of a matrix Jordan normal form Weyr canonical form Rank Matrix inversion, invertible matrix Pseudoinverse Adjugate Transpose Dot product Symmetric matrix Orthogonal matrix Skew-symmetric matrix Conjugate transpose Unitary matrix Hermitian matrix, Antihermitian matrix Positive-definite, positive-semidefinite matrix Pfaffian Projection Spectral theorem Perron–Frobenius theorem List of matrices Diagonal matrix, main diagonal Diagonalizable matrix Triangular matrix Tridiagonal matrix Block matrix Sparse matrix Hessenberg matrix Hessian matrix Vandermonde matrix Stochastic matrix Toeplitz matrix Circulant matrix Hankel matrix (0,1)-matrix Matrix decompositions Matrix decomposition Cholesky decomposition LU decomposition QR decomposition Polar decomposition Reducing subspace Spectral theorem Singular value decomposition Higher-order singular value decomposition Schur decomposition Schur complement Haynsworth inertia additivity formula Relations Matrix equivalence Matrix congruence Matrix similarity Matrix consimilarity Row equivalence Computations Elementary row operations Householder transformation Least squares, linear least squares Gram–Schmidt process Woodbury matrix identity Vector spaces Vector space Linear combination Linear span Linear independence Scalar multiplication Basis Change of basis Hamel basis Cyclic decomposition theorem Dimension theorem for vector spaces Hamel dimension Examp" https://en.wikipedia.org/wiki/Interconnect%20bottleneck,"The interconnect bottleneck comprises limits on integrated circuit (IC) performance due to connections between components instead of their internal speed. In 2006 it was predicted to be a ""looming crisis"" by 2010. Improved performance of computer systems has been achieved, in large part, by downscaling the IC minimum feature size. This allows the basic IC building block, the transistor, to operate at a higher frequency, performing more computations per second. However, downscaling of the minimum feature size also results in tighter packing of the wires on a microprocessor, which increases parasitic capacitance and signal propagation delay. Consequently, the delay due to the communication between the parts of a chip becomes comparable to the computation delay itself. This phenomenon, known as an “interconnect bottleneck”, is becoming a major problem in high-performance computer systems. This interconnect bottleneck can be solved by utilizing optical interconnects to replace the long metallic interconnects. Such hybrid optical/electronic interconnects promise better performance even with larger designs. Optics has widespread use in long-distance communications; still it has not yet been widely used in chip-to-chip or on-chip interconnections because they (in centimeter or micrometer range) are not yet industry-manufacturable owing to costlier technology and lack of fully mature technologies. 
As optical interconnections move from computer network applications to chip level interconnections, new requirements for high connection density and alignment reliability have become as critical for the effective utilization of these links. There are still many materials, fabrication, and packaging challenges in integrating optic and electronic technologies. See also Bus (computing) Interconnects (integrated circuits) Network-on-chip Optical network on chip Optical interconnect Photonics Von Neumann architecture" https://en.wikipedia.org/wiki/Arc%20fault,"An arc fault is a high power discharge of electricity between two or more conductors. This discharge generates heat, which can break down the wire's insulation and trigger an electrical fire. Arc faults can range in current from a few amps up to thousands of amps, and are highly variable in strength and duration. Some common causes of arc fault are loose wire connections, over heated wires, or wires pinched by furniture. Location and detection Two types of wiring protection are standard thermal breakers and arc fault circuit breakers. Thermal breakers require an overload condition long enough that a heating element in the breaker trips the breaker off. In contrast, arc fault circuit breakers use magnetic or other means to detect increases in current draw much more quickly. Without such protection, visually detecting arc faults in defective wiring is very difficult, as the arc fault occurs in a very small area. A problem with arc fault circuit breaker is they are more likely to produce false positives due to normal circuit behaviors appearing to be arc faults. For instance, lightning strikes on the outside of an aircraft mimic arc faults in their voltage and current profiles. Research has been able to largely eliminate such false positives, however, providing the ability to quickly identify and locate repairs that need to be done. In simple wiring systems visual inspection can lead to finding the fault location, but in complex wiring systems, for instance aircraft wiring, devices such as a time-domain reflectometer are helpful, even on live wires. See also Arc flash Arc-fault circuit interrupter Time-domain reflectometer" https://en.wikipedia.org/wiki/Low%20Frequency%20Analyzer%20and%20Recorder,"Two closely related terms, Low Frequency Analyzer and Recorder and Low Frequency Analysis and Recording bearing the acronym LOFAR, deal with the equipment and process respectively for presenting a visual spectrum representation of low frequency sounds in a time–frequency analysis. The process was originally applied to fixed surveillance passive antisubmarine sonar systems and later to sonobuoy and other systems. Originally the analysis was electromechanical and the display was produced on electrostatic recording paper, a Lofargram, with stronger frequencies presented as lines against background noise. The analysis migrated to digital and both analysis and display were digital after a major system consolidation into centralized processing centers during the 1990s. Both the equipment and process had specific and classified application to fixed surveillance sonar systems and was the basis for the United States Navy's ocean wide Sound Surveillance System (SOSUS) established in the early 1950s. The research and development of systems utilizing LOFAR was given the code name Project Jezebel. The installation and maintenance of SOSUS was under the unclassified code name Project Caesar. 
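A minimal sketch of the time-frequency analysis behind a lofargram, as described above: short-time spectra of a synthetic low-frequency tone in noise, stacked over time so the tone appears as a persistent line. All parameters are arbitrary.

```python
# Lofargram-style time-frequency analysis: split the signal into short
# windows, take the spectrum of each, and stack the spectra over time.
import numpy as np

fs = 1000.0                               # sample rate, Hz
t = np.arange(0, 60 * fs) / fs            # 60 s of data
x = 0.2 * np.sin(2 * np.pi * 50 * t) + np.random.default_rng(1).standard_normal(t.size)

win = 1024
frames = x[: x.size // win * win].reshape(-1, win)
spectra = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1)) ** 2

freqs = np.fft.rfftfreq(win, d=1 / fs)
print(spectra.shape)                         # (time frames, frequency bins)
print(freqs[spectra.mean(axis=0).argmax()])  # ~50 Hz: the tonal stands out as a line
```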
The principle was later applied to air, surface and submarine tactical sonar systems with some incorporating the name ""Jezebel"". Origin In 1949 when the US Navy approached the Committee for Undersea Warfare, an academic advisory group formed in 1946 under the National Academy of Sciences, to research antisubmarine warfare. As a result, the Navy formed a study group designated Project Hartwell under Massachusetts Institute of Technology (MIT) leadership. The Hartwell panel recommended that spending of annually to develop systems to counter the Soviet submarine threat consisting primarily of a large fleet of diesel submarines. One recommendation was a system to monitor low-frequency sound in the SOFAR channel using multiple listening sites equipped with hydrophones and a processing facility" https://en.wikipedia.org/wiki/Relay%20network,"A relay network is a broad class of network topology commonly used in wireless networks, where the source and destination are interconnected by means of some nodes. In such a network the source and destination cannot communicate to each other directly because the distance between the source and destination is greater than the transmission range of both of them, hence the need for intermediate node(s) to relay. A relay network is a type of network used to send information between two devices, for e.g. server and computer, that are too far away to send the information to each other directly. Thus the network must send or ""relay"" the information to different devices, referred to as nodes, that pass on the information to its destination. A well-known example of a relay network is the Internet. A user can view a web page from a server halfway around the world by sending and receiving the information through a series of connected nodes. In many ways, a relay network resembles a chain of people standing together. One person has a note he needs to pass to the girl at the end of the line. He is the sender, she is the recipient, and the people in between them are the messengers, or the nodes. He passes the message to the first node, or person, who passes it to the second and so on until it reaches the girl and she reads it. The people might stand in a circle, however, instead of a line. Each person is close enough to reach the person on either side of him and across from him. Together the people represent a network and several messages can now pass around or through the network in different directions at once, as opposed to the straight line that could only run messages in a specific direction. This concept, the way a network is laid out and how it shares data, is known as network topology. Relay networks can use many different topologies, from a line to a ring to a tree shape, to pass along information in the fastest and most efficient way possible. Often the relay net" https://en.wikipedia.org/wiki/%E2%88%82,"The character ∂ (Unicode: U+2202) is a stylized cursive d mainly used as a mathematical symbol, usually to denote a partial derivative such as (read as ""the partial derivative of z with respect to x""). It is also used for boundary of a set, the boundary operator in a chain complex, and the conjugate of the Dolbeault operator on smooth differential forms over a complex manifold. It should be distinguished from other similar-looking symbols such as lowercase Greek letter delta (δ) or the lowercase Latin letter eth (ð). 
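The hop-by-hop relaying described in the relay-network excerpt above, reduced to finding a route through an invented topology when source and destination cannot reach each other directly.

```python
# Finding the relay route between two nodes that cannot reach each other
# directly: breadth-first search over an invented topology (adjacency lists).
from collections import deque

topology = {
    "source": ["A", "B"],
    "A": ["source", "C"],
    "B": ["source", "C"],
    "C": ["A", "B", "destination"],
    "destination": ["C"],
}

def relay_path(start, goal):
    """Shortest chain of nodes from start to goal, or None if unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(relay_path("source", "destination"))  # ['source', 'A', 'C', 'destination']
```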
History The symbol was originally introduced in 1770 by Nicolas de Condorcet, who used it for a partial differential, and adopted for the partial derivative by Adrien-Marie Legendre in 1786. It represents a specialized cursive type of the letter d, just as the integral sign originates as a specialized type of a long s (first used in print by Leibniz in 1686). Use of the symbol was discontinued by Legendre, but it was taken up again by Carl Gustav Jacob Jacobi in 1841, whose usage became widely adopted. Names and coding The symbol is variously referred to as ""partial"", ""curly d"", ""funky d"", ""rounded d"", ""curved d"", ""dabba"", ""number 6 mirrored"", or ""Jacobi's delta"", or as ""del"" (but this name is also used for the ""nabla"" symbol ∇). It may also be pronounced simply ""dee"", ""partial dee"", ""doh"", or ""die"". The Unicode character is accessed by HTML entities ∂ or ∂, and the equivalent LaTeX symbol (Computer Modern glyph: ) is accessed by \partial. Uses ∂ is also used to denote the following: The Jacobian . The boundary of a set in topology. The boundary operator on a chain complex in homological algebra. The boundary operator of a differential graded algebra. The conjugate of the Dolbeault operator on complex differential forms. The boundary ∂(S) of a set of vertices S in a graph is the set of edges leaving S, which defines a cut. See also d'Alembert operator Differentiable programming List of mathematical symbols Notation for diff" https://en.wikipedia.org/wiki/Table%20of%20prime%20factors,"The tables contain the prime factorization of the natural numbers from 1 to 1000. When n is a prime number, the prime factorization is just n itself, written in bold below. The number 1 is called a unit. It has no prime factors and is neither prime nor composite. Properties Many properties of a natural number n can be seen or directly computed from the prime factorization of n. The multiplicity of a prime factor p of n is the largest exponent m for which pm divides n. The tables show the multiplicity for each prime factor. If no exponent is written then the multiplicity is 1 (since p = p1). The multiplicity of a prime which does not divide n may be called 0 or may be considered undefined. Ω(n), the big Omega function, is the number of prime factors of n counted with multiplicity (so it is the sum of all prime factor multiplicities). A prime number has Ω(n) = 1. The first: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37 . There are many special types of prime numbers. A composite number has Ω(n) > 1. The first: 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21 . All numbers above 1 are either prime or composite. 1 is neither. A semiprime has Ω(n) = 2 (so it is composite). The first: 4, 6, 9, 10, 14, 15, 21, 22, 25, 26, 33, 34 . A k-almost prime (for a natural number k) has Ω(n) = k (so it is composite if k > 1). An even number has the prime factor 2. The first: 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24 . An odd number does not have the prime factor 2. The first: 1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23 . All integers are either even or odd. A square has even multiplicity for all prime factors (it is of the form a2 for some a). The first: 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144 . A cube has all multiplicities divisible by 3 (it is of the form a3 for some a). The first: 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, 1331, 1728 . A perfect power has a common divisor m > 1 for all multiplicities (it is of the form am for some a > 1 and m > 1). 
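The multiplicity and big Omega definitions quoted above, computed directly by trial division (sufficient for the 1 to 1000 range of the table).

```python
# Prime factorization by trial division, the multiplicity of each prime
# factor, and the big Omega function (number of prime factors with multiplicity).
def prime_factors(n):
    """Return {prime: multiplicity} for n >= 1 (empty dict for n == 1)."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def big_omega(n):
    return sum(prime_factors(n).values())

print(prime_factors(720))                               # {2: 4, 3: 2, 5: 1}
print([n for n in range(2, 35) if big_omega(n) == 2])   # semiprimes: 4, 6, 9, 10, ...
```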
The first: 4, 8, 9, 16, 25, " https://en.wikipedia.org/wiki/Assembly%20language,"In computer programming, assembly language (alternatively assembler language or symbolic machine code), often referred to simply as assembly and commonly abbreviated as ASM or asm, is any low-level programming language with a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported. The first assembly code in which a language is used to represent machine code instructions is found in Kathleen and Andrew Donald Booth's 1947 work, Coding for A.R.C.. Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term ""assembler"" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean ""a program that assembles another program consisting of several sections into a single program"". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture. Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable ac" https://en.wikipedia.org/wiki/LGM-35%20Sentinel,"The LGM-35 Sentinel, also known as the Ground Based Strategic Deterrent (GBSD), is a future American land-based intercontinental ballistic missile system (ICBM) currently in the early stages of development. It is slated to replace Minuteman III missiles, currently stationed in North Dakota, Wyoming, Montana, and Nebraska from 2029 through 2075. In 2020 the Department of the Air Force awarded defense contractor Northrop Grumman a $13.3 billion sole-source contract for development of the LGM-35 after Boeing withdrew its proposal. Northrop Grumman's subcontractors on the LGM-35 include Lockheed Martin, General Dynamics, Bechtel, Honeywell, Aerojet Rocketdyne, Parsons, Textron, and others. Name According to the United States Air Force website, the L in LGM is the Department of Defense designation for silo-launched; G means surface attack; and ""M"" stands for guided missile. History In 2010, the ICBM Coalition, legislators from states that house nuclear missiles, told President Obama they would not support ratification of the New START treaty with Russia unless Obama agreed to revamp the US nuclear triad: nuclear weapons that could be launched from land, sea, and air. In a written statement, President Obama agreed to ""modernize or replace"" all three legs of the triad. 
A request for proposal for development and maintenance of a next-generation nuclear ICBM was made by the US Air Force Nuclear Weapons Center in July 2016. The GBSD would replace the Minuteman III, which was first deployed in 1970, in the land-based portion of the US nuclear triad. The new missiles, to be phased in over a decade from the late 2020s, are estimated over a fifty-year life cycle to cost around $264 billion. Boeing and Northrop Grumman competed for the contract. In August 2017, the Air Force awarded three-year development contracts to Boeing and Northrop Grumman for $349 million and $329 million, respectively. One of these companies was to be selected to produce a ground-based nuclear ICBM i" https://en.wikipedia.org/wiki/Mesowear,"Mesowear is a method used in different branches and fields of biology. The method can be applied to both extant and extinct animals, according to the scope of the study. Mesowear is based on studying an animal's tooth-wear fingerprint. In brief, each animal has particular feeding habits, which cause a distinctive pattern of tooth wear. Rough feeds cause severe tooth abrasion, while smooth feeds cause only moderate abrasion, so browsers have teeth with moderate abrasion and grazers have teeth with severe abrasion. Scoring systems can quantify tooth abrasion observations and ease comparisons between individuals. Mesowear definition The mesowear method or tooth wear scoring method is a quick and inexpensive process of determining the lifelong diet of a taxon (grazer or browser) and was first introduced in the year 2000. The mesowear technique can be applied to extinct as well as extant animals. Mesowear analyses require large sample populations (>20), which can be problematic for some localities, but the method yields an accurate depiction of an animal's average lifelong diet. Mesowear analysis is based on the physical properties of ungulate foods as reflected in the relative amounts of attritive and abrasive wear that they cause on the dental enamel of the occlusal surfaces. Mesowear is recorded by examining the buccal apices of molar tooth cusps. Apices are characterized as sharp, rounded, or blunt, and the valleys between them as either high or low. The method has been developed only for selenodont and trilophodont molars, but the principle is readily extendable to other crown types. In collecting the data, the teeth are inspected at close range with a hand lens. Mesowear analysis is insensitive to wear stage as long as the very early and very late stages are excluded. Mesowear analysis follows standard protocols. Specimens are digitally photographed in labial view so that cusp shape and occlusal relief can be scored. This method helps zoologists and nutritionists to prepare pr" https://en.wikipedia.org/wiki/List%20of%20general%20topology%20topics,"This is a list of general topology topics. 
Basic concepts Topological space Topological property Open set, closed set Clopen set Closure (topology) Boundary (topology) Dense (topology) G-delta set, F-sigma set closeness (mathematics) neighbourhood (mathematics) Continuity (topology) Homeomorphism Local homeomorphism Open and closed maps Germ (mathematics) Base (topology), subbase Open cover Covering space Atlas (topology) Limits Limit point Net (topology) Filter (topology) Ultrafilter Topological properties Baire category theorem Nowhere dense Baire space Banach–Mazur game Meagre set Comeagre set Compactness and countability Compact space Relatively compact subspace Heine–Borel theorem Tychonoff's theorem Finite intersection property Compactification Measure of non-compactness Paracompact space Locally compact space Compactly generated space Axiom of countability Sequential space First-countable space Second-countable space Separable space Lindelöf space Sigma-compact space Connectedness Connected space Separation axioms T0 space T1 space Hausdorff space Completely Hausdorff space Regular space Tychonoff space Normal space Urysohn's lemma Tietze extension theorem Paracompact Separated sets Topological constructions Direct sum and the dual construction product Subspace and the dual construction quotient Topological tensor product Examples Discrete space Locally constant function Trivial topology Cofinite topology Finer topology Product topology Restricted product Quotient space Unit interval Continuum (topology) Extended real number line Long line (topology) Sierpinski space Cantor set, Cantor space, Cantor cube Space-filling curve Topologist's sine curve Uniform norm Weak topology Strong topology Hilbert cube Lower limit topology Sorgenfrey plane Real tree Compact-open topology Zariski topology Kuratowski closure axioms Unicoherent Solenoid (mathematics) Uniform spaces Uniform continuity Lipschitz continuity Uniform isomorphism Uniform property Uni" https://en.wikipedia.org/wiki/7400-series%20integrated%20circuits,"The 7400 series is a popular logic family of transistor–transistor logic (TTL) integrated circuits (ICs). In 1964, Texas Instruments introduced the SN5400 series of logic chips, in a ceramic semiconductor package. A low-cost plastic package SN7400 series was introduced in 1966 which quickly gained over 50% of the logic chip market, and eventually becoming de facto standardized electronic components. Over the decades, many generations of pin-compatible descendant families evolved to include support for low power CMOS technology, lower supply voltages, and surface mount packages. Overview The 7400 series contains hundreds of devices that provide everything from basic logic gates, flip-flops, and counters, to special purpose bus transceivers and arithmetic logic units (ALU). Specific functions are described in a list of 7400 series integrated circuits. Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number. The less-common 64 and 84 prefixes on Texas Instruments parts indicated an industrial temperature range. Since the 1970s, new product families have been released to replace the original 7400 series. More recent TTL logic families were manufactured using CMOS or BiCMOS technology rather than TTL. Today, surface-mounted CMOS versions of the 7400 series are used in various applications in electronics and for glue logic in computers and industrial electronics. 
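As an added illustration of the basic logic-gate behaviour these families provide (a software sketch only, not tied to any particular 7400-series datasheet), the two-input NAND function can be modelled and tabulated as follows; the 7400 device itself, described next, packages four such gates.

```python
# Added sketch: software model of a two-input NAND gate, the function that the
# 7400 described below provides four copies of. Purely illustrative.
def nand(a: bool, b: bool) -> bool:
    """Return the NAND of two logic levels."""
    return not (a and b)

# Truth table: the output is low only when both inputs are high.
for a in (False, True):
    for b in (False, True):
        print(f"A={int(a)} B={int(b)} -> Y={int(nand(a, b))}")
```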
The original through-hole devices in dual in-line packages (DIP/DIL) were the mainstay of the industry for many decades. They are useful for rapid breadboard-prototyping and for education and remain available from most manufacturers. The fastest types and very low voltage versions are typically surface-mount only, however. The first part number in the series, the 7400, is a 14-pin IC containing four two-input NAND gates. Each gate uses two input pins and one output pin, with the remaining two pins being po" https://en.wikipedia.org/wiki/Euler%27s%20constant,"Euler's constant (sometimes called the Euler–Mascheroni constant) is a mathematical constant, usually denoted by the lowercase Greek letter gamma (), defined as the limiting difference between the harmonic series and the natural logarithm, denoted here by : Here, represents the floor function. The numerical value of Euler's constant, to 50 decimal places, is: History The constant first appeared in a 1734 paper by the Swiss mathematician Leonhard Euler, titled De Progressionibus harmonicis observationes (Eneström Index 43). Euler used the notations and for the constant. In 1790, the Italian mathematician Lorenzo Mascheroni used the notations and for the constant. The notation appears nowhere in the writings of either Euler or Mascheroni, and was chosen at a later time perhaps because of the constant's connection to the gamma function. For example, the German mathematician Carl Anton Bretschneider used the notation in 1835 and Augustus De Morgan used it in a textbook published in parts from 1836 to 1842. Appearances Euler's constant appears, among other places, in the following (where '*' means that this entry contains an explicit equation): Expressions involving the exponential integral* The Laplace transform* of the natural logarithm The first term of the Laurent series expansion for the Riemann zeta function*, where it is the first of the Stieltjes constants* Calculations of the digamma function A product formula for the gamma function The asymptotic expansion of the gamma function for small arguments. An inequality for Euler's totient function The growth rate of the divisor function In dimensional regularization of Feynman diagrams in quantum field theory The calculation of the Meissel–Mertens constant The third of Mertens' theorems* Solution of the second kind to Bessel's equation In the regularization/renormalization of the harmonic series as a finite value The mean of the Gumbel distribution The information entropy of the Weibull and" https://en.wikipedia.org/wiki/List%20of%20combinatorial%20computational%20geometry%20topics,"List of combinatorial computational geometry topics enumerates the topics of computational geometry that states problems in terms of geometric objects as discrete entities and hence the methods of their solution are mostly theories and algorithms of combinatorial character. See List of numerical computational geometry topics for another flavor of computational geometry that deals with geometric objects as continuous entities and applies methods and algorithms of nature characteristic to numerical analysis. 
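For instance, the convex hull, one of the construction topics listed below, admits a short combinatorial algorithm; the following added sketch uses Andrew's monotone chain and assumes points are given as (x, y) tuples, which is an assumption of this example rather than anything fixed by the list.

```python
# Illustrative convex hull via Andrew's monotone chain (an added sketch, not
# taken from the source). Returns hull vertices in counter-clockwise order.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors OA and OB; positive for a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # drop the duplicated endpoints

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
```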
Construction/representation Boolean operations on polygons Convex hull Hyperplane arrangement Polygon decomposition Polygon triangulation Minimal convex decomposition Minimal convex cover problem (NP-hard) Minimal rectangular decomposition Tessellation problems Shape dissection problems Straight skeleton Stabbing line problem Triangulation Delaunay triangulation Point-set triangulation Polygon triangulation Voronoi diagram Extremal shapes Minimum bounding box (Smallest enclosing box, Smallest bounding box) 2-D case: Smallest bounding rectangle (Smallest enclosing rectangle) There are two common variants of this problem. In many areas of computer graphics, the bounding box (often abbreviated to bbox) is understood to be the smallest box delimited by sides parallel to coordinate axes which encloses the objects in question. In other applications, such as packaging, the problem is to find the smallest box the object (or objects) may fit in (""packaged""). Here the box may assume an arbitrary orientation with respect to the ""packaged"" objects. Smallest bounding sphere (Smallest enclosing sphere) 2-D case: Smallest bounding circle Largest empty rectangle (Maximum empty rectangle) Largest empty sphere 2-D case: Maximum empty circle (largest empty circle) Interaction/search Collision detection Line segment intersection Point location Point in polygon Polygon intersection Range searching Orthogonal range searching Simplex range searchi" https://en.wikipedia.org/wiki/Instantaneous%20phase%20and%20frequency,"Instantaneous phase and frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase (also known as local phase or simply phase) of a complex-valued function s(t) is the real-valued function φ(t) = arg(s(t)), where arg is the complex argument function. The instantaneous frequency is the temporal rate of change of the instantaneous phase. For a real-valued function s(t), it is determined from the function's analytic representation sa(t) = s(t) + j·ŝ(t), so that φ(t) = arg(sa(t)), where ŝ(t) represents the Hilbert transform of s(t). When φ(t) is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t, assuming sa(t) is a continuous function of t. Unless otherwise indicated, the continuous form should be inferred. Examples Example 1 s(t) = A cos(ωt + θ), sa(t) = A e^{j(ωt + θ)}, φ(t) = ωt + θ, where ω > 0. In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset. φ(t) is a function of time; θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference (sin or cos) is specified. φ(t) is unambiguously defined. Example 2 s(t) = A sin(ωt) = A cos(ωt − π/2), φ(t) = ωt − π/2, where ω > 0. In both examples the local maxima of s(t) correspond to φ(t) = 2πN for integer values of N. This has applications in the field of computer vision. Formulations Instantaneous angular frequency is defined as ω(t) = dφ(t)/dt, and instantaneous (ordinary) frequency is defined as f(t) = (1/2π)·dφ(t)/dt, where φ(t) must be the unwrapped phase; otherwise, if φ(t) is wrapped, discontinuities in φ(t) will result in Dirac delta impulses in f(t). The inverse operation, which always unwraps phase, is the integral of the instantaneous frequency, φ(t) = ∫ ω(τ) dτ. This instantaneous frequency, ω(t), can be derived directly from the real and imaginary parts of sa(t), instead of the complex arg, without concern of phase unwrapping; here 2πm1 and 2πm2 are the integer multiples of 2π necessary to add to unwrap the phase. 
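The following added sketch shows the computation just described, assuming NumPy and SciPy are available (scipy.signal.hilbert supplies the analytic representation); the 50 Hz test tone and sample rate are arbitrary choices for illustration.

```python
# Added sketch: estimate instantaneous phase/frequency of a test tone via the
# analytic signal sa(t) = s(t) + j*Hilbert{s(t)} (assumes numpy and scipy).
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                              # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)
s = np.cos(2 * np.pi * 50 * t + 0.3)     # 50 Hz tone with a 0.3 rad phase offset

sa = hilbert(s)                          # analytic representation of s(t)
wrapped = np.angle(sa)                   # phase constrained to (-pi, pi]
phase = np.unwrap(wrapped)               # unwrapped (continuous) phase
freq = np.diff(phase) * fs / (2 * np.pi) # instantaneous frequency in Hz

print(round(float(np.median(freq)), 2))  # expected to be close to 50 Hz
```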
At values of time, t, whe" https://en.wikipedia.org/wiki/List%20of%20CERN%20Scientific%20Committees,"Proposals for experiments are made at CERN and have to go through the correct channels in order to be approved. One of the last steps in the process is to submit the proposal to an appropriate CERN Scientific Committee. The committees will discuss the proposal and then pass on their recommendations to the Research Board (previously the Nuclear Physics Research Committee) for the final decision. Proposals approved become part of the CERN experimental programme. In 1960, John Adams, the Director General, created three committees to manage experiments for each bubble chamber experimental technique used at CERN. These replaced the previous Advisory and Bubble Chamber committees. At the end of the bubble chamber period, the system was again changed and based on machine, rather than experimental technique. The committees were changed and merged in order to accommodate to this. Since then, the committees have changed based on the creation and decommissioning of facilities and accelerators. Current committees Past committees" https://en.wikipedia.org/wiki/Y-factor,"The Y-factor method is a widely used technique for measuring the gain and noise temperature of an amplifier. It is based on the Johnson–Nyquist noise of a resistor at two different, known temperatures. Consider a microwave amplifier with a 50-ohm impedance with a 50-ohm resistor connected to the amplifier input. If the resistor is at a physical temperature TR, then the Johnson–Nyquist noise power coupled to the amplifier input is PJ = kBTRB, where kB is Boltzmann’s constant, and B is the bandwidth. The noise power at the output of the amplifier (i.e. the noise power coupled to an impedance-matched load that is connected to the amplifier output) is Pout = GkB(TR + Tamp)B, where G is the amplifier power gain, and Tamp is the amplifier noise temperature. In the Y-factor technique, Pout is measured for two different, known values of TR. Pout is then converted to an effective temperature Tout (in units of kelvin) by dividing by kB and the measurement bandwidth B. The two values of Tout are then plotted as a function of TR (also in units of kelvin), and a line is fit to these points (see figure). The slope of this line is equal to the amplifier power gain. The x intercept of the line is equal to the negative of the amplifier noise temperature −Tamp in kelvins. The amplifier noise temperature can also be determined from the y intercept, which is equal to Tamp multiplied by the gain." https://en.wikipedia.org/wiki/Spread-spectrum%20time-domain%20reflectometry,"Spread-spectrum time-domain reflectometry (SSTDR) is a measurement technique to identify faults, usually in electrical wires, by observing reflected spread spectrum signals. This type of time-domain reflectometry can be used in various high-noise and live environments. Additionally, SSTDR systems have the additional benefit of being able to precisely locate the position of the fault. Specifically, SSTDR is accurate to within a few centimeters for wires carrying 400 Hz aircraft signals as well as MIL-STD-1553 data bus signals. AN SSTDR system can be run on a live wire because the spread spectrum signals can be isolated from the system noise and activity. At the most basic level, the system works by sending spread spectrum signals down a wireline and waiting for those signals to be reflected back to the SSTDR system. The reflected signal is then correlated with a copy of the sent signal. 
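A toy numerical sketch of this correlation step is added below for illustration; the PN-code length, delay, chip rate, and propagation velocity are invented parameters, and a real SSTDR front end works in analog hardware rather than on a NumPy array.

```python
# Added toy model of the SSTDR idea: correlate a received trace against the
# transmitted PN sequence and convert the correlation peak to a fault distance.
import numpy as np

rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=127)        # hypothetical PN code (127 chips)

delay_samples = 40                            # invented round-trip delay
trace = np.zeros(512)
trace[delay_samples:delay_samples + pn.size] += 0.5 * pn   # reflected copy
trace += 0.05 * rng.standard_normal(trace.size)            # line noise/activity

corr = np.correlate(trace, pn, mode="valid")  # slide the PN code along the trace
peak = int(np.argmax(np.abs(corr)))           # lag of the strongest reflection

chip_rate = 10e6                              # chips per second (assumed)
velocity = 2e8                                # signal speed on the wire, m/s (assumed)
distance = peak / chip_rate * velocity / 2    # divide by 2 for the round trip
print(peak, round(distance, 2), "m")
```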
Mathematical algorithms are applied to both the shape and timing of the signals to locate either the short or the end of an open circuit. Detecting intermittent faults in live wires Spread-spectrum time domain reflectometry is used in detecting intermittent faults in live wires. From buildings and homes to aircraft and naval ships, this technology can discover irregular shorts on live wire running 400 Hz, 115 V. For accurate location of a wiring system's fault the SSTDR associates the PN code with the signal on the line then stores the exact location of the correlation before the arc dissipates. Present SSTDR can collect a complete data set in under 5 ms. SSTDR technology allows for analysis of a network of wires. One SSTDR sensor can measure up to 4 junctions in a branched wire system. See also Spread spectrum Time-domain reflectometry" https://en.wikipedia.org/wiki/Mandelstam%20variables,"In theoretical physics, the Mandelstam variables are numerical quantities that encode the energy, momentum, and angles of particles in a scattering process in a Lorentz-invariant fashion. They are used for scattering processes of two particles to two particles. The Mandelstam variables were first introduced by physicist Stanley Mandelstam in 1958. If the Minkowski metric is chosen to be , the Mandelstam variables are then defined by , where p1 and p2 are the four-momenta of the incoming particles and p3 and p4 are the four-momenta of the outgoing particles. is also known as the square of the center-of-mass energy (invariant mass) and as the square of the four-momentum transfer. Feynman diagrams The letters s,t,u are also used in the terms s-channel (timelike channel), t-channel, and u-channel (both spacelike channels). These channels represent different Feynman diagrams or different possible scattering events where the interaction involves the exchange of an intermediate particle whose squared four-momentum equals s,t,u, respectively. {|cellpadding=""10"" | | | |- |align=""center""|s-channel |align=""center""|t-channel |align=""center""|u-channel |} For example, the s-channel corresponds to the particles 1,2 joining into an intermediate particle that eventually splits into 3,4: The t-channel represents the process in which the particle 1 emits the intermediate particle and becomes the final particle 3, while the particle 2 absorbs the intermediate particle and becomes 4. The u-channel is the t-channel with the role of the particles 3,4 interchanged. When evaluating a Feynman amplitude one often finds scalar products of the external four momenta. One can use the Mandelstam variables to simplify these: Where is the mass of the particle with corresponding momentum . Sum Note that where mi is the mass of particle i. To prove this, we need to use two facts: The square of a particle's four momentum is the square of its mass, And conservation of four-momentum, " https://en.wikipedia.org/wiki/Digital%20signal%20controller,"A digital signal controller (DSC) is a hybrid of microcontrollers and digital signal processors (DSPs). Like microcontrollers, DSCs have fast interrupt responses, offer control-oriented peripherals like PWMs and watchdog timers, and are usually programmed using the C programming language, although they can be programmed using the device's native assembly language. On the DSP side, they incorporate features found on most DSPs such as single-cycle multiply–accumulate (MAC) units, barrel shifters, and large accumulators. Not all vendors have adopted the term DSC. 
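As an added illustration of the multiply–accumulate workload such MAC units are built for (a plain software sketch with arbitrary filter taps, not vendor code), an FIR filter amounts to one multiply–accumulate per tap per output sample:

```python
# Added sketch: an FIR filter expressed as explicit multiply-accumulate steps,
# the inner-loop operation that single-cycle MAC units are designed to speed up.
def fir(samples, taps):
    out = []
    for n in range(len(samples)):
        acc = 0.0                         # the "accumulator"
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * samples[n - k] # one multiply-accumulate per tap
        out.append(acc)
    return out

print(fir([1.0, 2.0, 3.0, 4.0], [0.5, 0.25, 0.25]))
```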
The term was first introduced by Microchip Technology in 2002 with the launch of their 6000 series DSCs and subsequently adopted by most, but not all DSC vendors. For example, Infineon and Renesas refer to their DSCs as microcontrollers. DSCs are used in a wide range of applications, but the majority go into motor control, power conversion, and sensor processing applications. Currently, DSCs are being marketed as green technologies for their potential to reduce power consumption in electric motors and power supplies. In order of market share, the top three DSC vendors are Texas Instruments, Freescale, and Microchip Technology, according to market research firm Forward Concepts (2007). These three companies dominate the DSC market, with other vendors such as Infineon and Renesas taking a smaller slice of the pie. DSC chips NOTE: Data is from 2012 (Microchip and TI) and table currently only includes offering from the top 3 DSC vendors. DSC software DSCs, like microcontrollers and DSPs, require software support. There are a growing number of software packages that offer the features required by both DSP applications and microcontroller applications. With a broader set of requirements, software solutions are more rare. They require: development tools, DSP libraries, optimization for DSP processing, fast interrupt handling, multi-threading, and a tiny footprint." https://en.wikipedia.org/wiki/System%20administrator,"A system administrator, sysadmin, or admin is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers, such as servers. The system administrator seeks to ensure that the uptime, performance, resources, and security of the computers they manage meet the needs of the users, without exceeding a set budget when doing so. To meet these needs, a system administrator may acquire, install, or upgrade computer components and software; provide routine automation; maintain security policies; troubleshoot; train or supervise staff; or offer technical support for projects. Related fields Many organizations staff offer jobs related to system administration. In a larger company, these may all be separate positions within a computer support or Information Services (IS) department. In a smaller group they may be shared by a few sysadmins, or even a single person. A database administrator (DBA) maintains a database system, and is responsible for the integrity of the data and the efficiency and performance of the system. A network administrator maintains network infrastructure such as switches and routers, and diagnoses problems with these or with the behavior of network-attached computers. A security administrator is a specialist in computer and network security, including the administration of security devices such as firewalls, as well as consulting on general security measures. A web administrator maintains web server services (such as Apache or IIS) that allow for internal or external access to web sites. Tasks include managing multiple sites, administering security, and configuring necessary components and software. Responsibilities may also include software change management. A computer operator performs routine maintenance and upkeep, such as changing backup tapes or replacing failed drives in a redundant array of independent disks (RAID). 
Such tasks usually require physical presence in the " https://en.wikipedia.org/wiki/Pairing%20%28computing%29,"Pairing, sometimes known as bonding, is a process used in computer networking that helps set up an initial linkage between computing devices to allow communications between them. The most common example is used in Bluetooth, where the pairing process is used to link devices like a Bluetooth headset with a mobile phone. Computer networking Computing terminology 2 (number)" https://en.wikipedia.org/wiki/Time-driven%20switching,"In telecommunication and computer networking, time-driven switching (TDS) is a node-by-node time-variant implementation of circuit switching, where the propagating datagram is shorter in space than the distance between source and destination. With TDS it is no longer necessary to own a complete circuit between source and destination, but only the fraction of the circuit where the propagating datagram is temporarily located. TDS adds flexibility and capacity to circuit-switched networks but requires precise synchronization among nodes and propagating datagrams. Datagrams are formatted according to schedules that depend on quality of service and availability of switching nodes and physical links. With respect to circuit switching, the added time dimension introduces additional complexity to network management. Like circuit switching, TDS operates without buffers and header processing according to the pipeline forwarding principle; therefore an all-optical implementation with optical fibers and optical switches is possible at low cost. The TDS concept itself pervades and is applicable with advantage to existing data switching technologies, including packet switching, where packets, or sets of packets, become the datagrams that are routed through the network. TDS was invented in 2002 by Prof. Mario Baldi and Prof. Yoram Ofek of Synchrodyne Networks, which is the assignee of several patents issued by both the United States Patent and Trademark Office and the European Patent Office." https://en.wikipedia.org/wiki/Four-vector,"In special relativity, a four-vector (or 4-vector) is an object with four components, which transform in a specific way under Lorentz transformations. Specifically, a four-vector is an element of a four-dimensional vector space considered as a representation space of the standard representation of the Lorentz group, the (1/2, 1/2) representation. It differs from a Euclidean vector in how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which include spatial rotations and boosts (a change by a constant velocity to another inertial reference frame). Four-vectors describe, for instance, position in spacetime modeled as Minkowski space, a particle's four-momentum, the amplitude of the electromagnetic four-potential at a point in spacetime, and the elements of the subspace spanned by the gamma matrices inside the Dirac algebra. The Lorentz group may be represented by 4×4 matrices Λ. The action of a Lorentz transformation on a general contravariant four-vector X (like the examples above), regarded as a column vector with Cartesian coordinates with respect to an inertial frame in the entries, is given by X′ = ΛX (matrix multiplication), where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the corresponding covariant vectors. These transform according to the rule X′ = (Λ^−1)^T X, where ^T denotes the matrix transpose. 
This rule is different from the above rule. It corresponds to the dual representation of the standard representation. However, for the Lorentz group the dual of any representation is equivalent to the original representation. Thus the objects with covariant indices are four-vectors as well. For an example of a well-behaved four-component object in special relativity that is not a four-vector, see bispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by " https://en.wikipedia.org/wiki/Landscape%20limnology,"Landscape limnology is the spatially explicit study of lakes, streams, and wetlands as they interact with freshwater, terrestrial, and human landscapes to determine the effects of pattern on ecosystem processes across temporal and spatial scales. Limnology is the study of inland water bodies inclusive of rivers, lakes, and wetlands; landscape limnology seeks to integrate all of these ecosystem types. The terrestrial component represents spatial hierarchies of landscape features that influence which materials, whether solutes or organisms, are transported to aquatic systems; aquatic connections represent how these materials are transported; and human activities reflect features that influence how these materials are transported as well as their quantity and temporal dynamics. Foundation The core principles or themes of landscape ecology provide the foundation for landscape limnology. These ideas can be synthesized into a set of four landscape ecology themes that are broadly applicable to any aquatic ecosystem type, and that consider the unique features of such ecosystems. A landscape limnology framework begins with the premise of Thienemann (1925). Wiens (2002): freshwater ecosystems can be considered patches. As such, the location of these patches and their placement relative to other elements of the landscape is important to the ecosystems and their processes. Therefore, the four main themes of landscape limnology are: Patch characteristics: The characteristics of a freshwater ecosystem include its physical morphometry, chemical, and biological features, as well as its boundaries. These boundaries are often more easily defined for aquatic ecosystems than for terrestrial ecosystems (e.g., shoreline, riparian zones, and emergent vegetation zone) and are often a focal-point for important ecosystem processes linking terrestrial and aquatic components. Patch context: The freshwater ecosystem is embedded in a complex terrestrial mosaic (e.g., soils, geology, and " https://en.wikipedia.org/wiki/Bracket,"A bracket, as used in British English, is either of two tall fore- or back-facing punctuation marks commonly used to isolate a segment of text or data from its surroundings. Typically deployed in symmetric pairs, an individual bracket may be identified as a 'left' or 'right' bracket or, alternatively, an ""opening bracket"" or ""closing bracket"", respectively, depending on the directionality of the context. There are four primary types of brackets. In British usage they are known as round brackets (or simply brackets), square brackets, curly brackets, and angle brackets; in American usage they are respectively known as parentheses, brackets, braces, and chevrons. There are also various less common symbols considered brackets. Various forms of brackets are used in mathematics, with specific mathematical meanings, often for denoting specific mathematical functions and subformulas. 
History Angle brackets or chevrons ⟨ ⟩ were the earliest type of bracket to appear in written English. Erasmus coined the term lunula to refer to the round brackets or parentheses (), recalling the shape of the crescent moon. Most typewriters only had the left and right parentheses. Square brackets appeared with some teleprinters. Braces (curly brackets) first became part of a character set with the 8-bit code of the IBM 7030 Stretch. In 1961, ASCII contained parentheses, square, and curly brackets, and also less-than and greater-than signs that could be used as angle brackets. Typography In English, typographers mostly prefer not to set brackets in italics, even when the enclosed text is italic. However, in other languages like German, if brackets enclose text in italics, they are usually also set in italics. Parentheses or (round) brackets ( and ) are called parentheses (singular parenthesis) in American English, and ""brackets"" informally in the UK, India, Ireland, Canada, the West Indies, New Zealand, South Africa, and Australia; they are also known as ""round brackets"", ""parens"", " https://en.wikipedia.org/wiki/Positional%20notation,"Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string. The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits. Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers. The use of a radix point (decimal point in base ten) extends to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe. History Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous." https://en.wikipedia.org/wiki/Branch%20Queue,"In computer architecture, a branch queue is used alongside branch prediction. When the branch predictor predicts whether a branch is taken or not, the branch queue stores the prediction so that it can be used later. Each entry in the branch queue holds one of only two values: taken or not taken. The branch queue helps other mechanisms increase parallelism and enables further optimization. 
It is implemented neither purely in software nor purely in hardware; it falls under hardware–software co-design." https://en.wikipedia.org/wiki/Lanthanide%20probes,"Lanthanide probes are a non-invasive analytical tool commonly used for biological and chemical applications. Lanthanides are metal ions which have their 4f energy level filled and generally refer to elements cerium to lutetium in the periodic table. The fluorescence of lanthanide salts is weak because the energy absorption of the metallic ion is low; hence chelated complexes of lanthanides are most commonly used. The term chelate derives from the Greek word for “claw,” and is applied to name ligands, which attach to a metal ion with two or more donor atoms through dative bonds. The fluorescence is most intense when the metal ion has the oxidation state of 3+. Not all lanthanide metals can be used and the most common are: Sm(III), Eu(III), Tb(III), and Dy(III). History It has been known since the early 1930s that the salts of certain lanthanides are fluorescent. The reaction of lanthanide salts with nucleic acids was discussed in a number of publications during the 1930s and the 1940s where lanthanum-containing reagents were employed for the fixation of nucleic acid structures. In 1942 complexes of europium, terbium, and samarium were discovered to exhibit unusual luminescence properties when excited by UV light. However, the first staining of biological cells with lanthanides occurred twenty years later when bacterial smears of E. coli were treated with aqueous solutions of a europium complex, which under mercury lamp illumination appeared as bright red spots. Attention to lanthanide probes increased greatly in the mid-1970s when Finnish researchers proposed Eu(III), Sm(III), Tb(III), and Dy(III) polyaminocarboxylates as luminescent sensors in time-resolved luminescent (TRL) immunoassays. Optimization of analytical methods from the 1970s onward for lanthanide chelates and time-resolved luminescence microscopy (TRLM) resulted in the use of lanthanide probes in many scientific, medical and commercial fields. Techniques There are two main assaying techniques: heter" https://en.wikipedia.org/wiki/Homogeneity%20%28physics%29,"In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). Mathematically, homogeneity has the connotation of invariance, as all components of the equation have the same degree of value whether or not each of these components are scaled to different values, for example, by multiplication or addition. Cumulative distribution fits this description. ""The state of having identical cumulative distribution function or values"". Context The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as ""constituents"" of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but is a composite material consisting of asphalt binder and mineral aggregate, and then laid down in layers and compacted. 
However, homogeneity of materials does not necessarily mean isotropy. In the previous example, a composite material may not be isotropic. In another context, a material is not homogeneous in so far as it is composed of atoms and molecules. However, at the normal level of our everyday world, a pane of glass, or a sheet of metal is described as glass, or stainless steel. In other words, these are each described as a homogeneous material. A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of same units on both sides; homogeneity (in space) implies conservation of momentum; and homogeneity in time implies co" https://en.wikipedia.org/wiki/Flash%20memory%20emulator,"A flash emulator or flash memory emulator is a tool that is used to temporarily replace flash memory or ROM chips in an embedded device for the purpose of debugging embedded software. Such tools contain Dual-ported RAM, one port of which is connected to a target system (i.e. system, that is being debugged), and second is connected to a host (i.e. PC, which runs debugger). This allows the programmer to change executable code while it is running, set break points, and use other advanced debugging techniques on an embedded system, where such operations would not be possible otherwise. This type of tool appeared in 1980s-1990s, when most embedded systems were using discrete ROM (or later flash memory) chip, containing executable code. This allowed for easy replacing of ROM/flash chip with emulator. Together with excellent productivity of this tool this had driven an almost universal use of it among embedded developers. Later, when most embedded systems started to include both processor and flash on a single chip for cost and IP protection reasons, thus making external flash emulator tool impossible, search for a replacement tool started. And as often happens when a direct replacement is being searched for, many replacement techniques contain words ""flash emulation"" in them, for example, TI's ""Flash Emulation Tool"" debugging interface (FET) for its MSP430 chips, or more generic in-circuit emulators, even though none of two above had anything to do with flash or emulation as it is. Flash emulator could also be retrofitted to an embedded system to facilitate reverse engineering. For example, that was main hardware instrument in reverse engineering Wii gaming console bootloader. See also In-circuit emulator" https://en.wikipedia.org/wiki/Molybdenum%20in%20biology,"Molybdenum is an essential element in most organisms. It is most notably present in nitrogenase which is an essential part of nitrogen fixation. Mo-containing enzymes Molybdenum is an essential element in most organisms; a 2008 research paper speculated that a scarcity of molybdenum in the Earth's early oceans may have strongly influenced the evolution of eukaryotic life (which includes all plants and animals). At least 50 molybdenum-containing enzymes have been identified, mostly in bacteria. Those enzymes include aldehyde oxidase, sulfite oxidase and xanthine oxidase. With one exception, Mo in proteins is bound by molybdopterin to give the molybdenum cofactor. The only known exception is nitrogenase, which uses the FeMoco cofactor, which has the formula Fe7MoS9C. In terms of function, molybdoenzymes catalyze the oxidation and sometimes reduction of certain small molecules in the process of regulating nitrogen, sulfur, and carbon. 
In some animals, and in humans, the oxidation of xanthine to uric acid, a process of purine catabolism, is catalyzed by xanthine oxidase, a molybdenum-containing enzyme. The activity of xanthine oxidase is directly proportional to the amount of molybdenum in the body. An extremely high concentration of molybdenum reverses the trend and can inhibit purine catabolism and other processes. Molybdenum concentration also affects protein synthesis, metabolism, and growth. Mo is a component in most nitrogenases. Among molybdoenzymes, nitrogenases are unique in lacking the molybdopterin. Nitrogenases catalyze the production of ammonia from atmospheric nitrogen: The biosynthesis of the FeMoco active site is highly complex. Molybdate is transported in the body as MoO42−. Human metabolism and deficiency Molybdenum is an essential trace dietary element. Four mammalian Mo-dependent enzymes are known, all of them harboring a pterin-based molybdenum cofactor (Moco) in their active site: sulfite oxidase, xanthine oxidoreductase, aldehyde oxida" https://en.wikipedia.org/wiki/Yupana,"A yupana (from Quechua: yupay 'count') is a counting board used to perform arithmetic operations, dating back to the time of the Incas. Very little documentation exists concerning its precise physical form or how it was used. Types The term yupana refers to two distinct classes of objects: Table Yupana (or archaeological yupana): a system of geometric boxes of different sizes and materials. The first example of this type was found in 1869 in the Ecuadorian province of Azuay and prompted searches for more of these objects. All examples of the archaeological yupana vary greatly from each other. Some archaeological yupanas found in Manchán (an archaeological site in Casma) and Huacones-Vilcahuasi (in Cañete) were embedded into the floor. Poma de Ayala Yupana: a picture on page 360 of El primer nueva corónica y buen gobierno, written by the Amerindian chronicler Felipe Guaman Poma de Ayala shows a 5x4 chessboard (shown right). The chessboard, though resembling a table yupana, differs from this style in most notably in each of its rectangular trays have the same dimensions, while table yupanas have trays of other polygonal shapes of differing sizes. Although very different from each other, most scholars who have dealt with table yupanas have extended reasoning and theories to the Poma de Ayala yupana and vice versa, perhaps in an attempt to find a unifying thread or a common method of creation. For example, the Nueva coronica (New Chronicle) discovered in 1916 in the library of Copenhagen contained evidence that a portion of the studies on the Poma de Ayala yupana were based on previous studies and theories regarding table yupanas. History Several chroniclers of the Indies described, in brief, this Incan abacus and its operation. Felipe Guaman Poma de Ayala The first was Guaman Poma de Ayala around the year 1615 who wrote: In addition to providing this brief description, Poma de Ayala drew a picture of the yupana: a board of five rows and four columns with e" https://en.wikipedia.org/wiki/Chamfer%20%28geometry%29,"In geometry, chamfering or edge-truncation is a topological operator that modifies one polyhedron into another. It is similar to expansion, moving faces apart and outward, but also maintains the original vertices. For polyhedra, this operation adds a new hexagonal face in place of each original edge. In Conway polyhedron notation it is represented by the letter . 
A polyhedron with e edges will have a chamfered form containing 2e new vertices, 3e new edges, and e new hexagonal faces. Chamfered Platonic solids In the chapters below the chamfers of the five Platonic solids are described in detail. Each is shown in a version with edges of equal length and in a canonical version where all edges touch the same midsphere. (They only look noticeably different for solids containing triangles.) The shown duals are dual to the canonical versions. Chamfered tetrahedron The chamfered tetrahedron (or alternate truncated cube) is a convex polyhedron constructed as an alternately truncated cube or chamfer operation on a tetrahedron, replacing its 6 edges with hexagons. It is the Goldberg polyhedron GIII(2,0), containing triangular and hexagonal faces. Chamfered cube The chamfered cube is a convex polyhedron with 32 vertices, 48 edges, and 18 faces: 12 hexagons and 6 squares. It is constructed as a chamfer of a cube. The squares are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the tetrakis cuboctahedron. It is also inaccurately called a truncated rhombic dodecahedron, although that name rather suggests a rhombicuboctahedron. It can more accurately be called a tetratruncated rhombic dodecahedron because only the order-4 vertices are truncated. The hexagonal faces are equilateral but not regular. They are formed by a truncated rhombus, have 2 internal angles of about 109.47° (arccos(−1/3)) and 4 internal angles of about 125.26°, while a regular hexagon would have all 120° angles. Because all its faces have an even number of sides with " https://en.wikipedia.org/wiki/Time-varied%20gain,"Time varied gain (TVG) is signal compensation that is applied by the receiver electronics through analog or digital signal processing. The desired result is that targets of the same size produce echoes of the same size, regardless of target range. See also Automatic gain control" https://en.wikipedia.org/wiki/SOS%20chromotest,"The SOS chromotest is a biological assay to assess the genotoxic potential of chemical compounds. The test is a colorimetric assay which measures the expression of genes induced by genotoxic agents in Escherichia coli, by means of a fusion with the structural gene for β-galactosidase. The test is performed over a few hours in columns of a 96-well microplate with increasing concentrations of test samples. This test was developed as a practical complement or alternative to the traditional Ames test assay for genotoxicity, which involves growing bacteria on agar plates and comparing natural mutation rates to mutation rates of bacteria exposed to potentially mutagenic compounds or samples. The SOS chromotest is comparable in accuracy and sensitivity to established methods such as the Ames test and is a useful tool to screen genotoxic compounds, which could prove carcinogenic in humans, in order to single out chemicals for further in-depth analysis. As with other bacterial genotoxicity and mutagenicity assays, compounds requiring metabolic activation for activity can be investigated with the addition of S9 microsomal rat liver extract. Mechanism The SOS response plays a central role in the response of E. coli to genotoxic compounds because it responds to a wide array of chemical agents. Triggering of this system can be and has been used as an early sign of DNA damage. 
Two genes play a key role in the SOS response: lexA encodes a repressor for all the genes in the system, and recA encodes a protein able to cleave the LexA repressor upon activation by an SOS inducing signal (caused in this case by the presence of a genotoxic compound). Although the exact mechanism of the SOS response is still unknown, it is induced when DNA lesions perturb or stop DNA replication. . Various end-points are possible indicators of the triggering of the SOS system; activation of the RecA protein, cleavage of the LexA repressor, expression of any of the SOS genes, etc. One of the simplest assays" https://en.wikipedia.org/wiki/Echo%20removal,"Echo removal is the process of removing echo and reverberation artifacts from audio signals. The reverberation is typically modeled as the convolution of a (sometimes time-varying) impulse response with a hypothetical clean input signal, where both the clean input signal (which is to be recovered) and the impulse response are unknown. This is an example of an inverse problem. In almost all cases, there is insufficient information in the input signal to uniquely determine a plausible original image, making it an ill-posed problem. This is generally solved by the use of a regularization term to attempt to eliminate implausible solutions. This problem is analogous to deblurring in the image processing domain. See also Echo suppression and cancellation Digital room correction Noise reduction Linear prediction coder Signal processing" https://en.wikipedia.org/wiki/Hilbert%20spectrum,"The Hilbert spectrum (sometimes referred to as the Hilbert amplitude spectrum), named after David Hilbert, is a statistical tool that can help in distinguishing among a mixture of moving signals. The spectrum itself is decomposed into its component sources using independent component analysis. The separation of the combined effects of unidentified sources (blind signal separation) has applications in climatology, seismology, and biomedical imaging. Conceptual summary The Hilbert spectrum is computed by way of a 2-step process consisting of: Preprocessing a signal separate it into intrinsic mode functions using a mathematical decomposition such as singular value decomposition (SVD) or empirical mode decomposition (EMD); Applying the Hilbert transform to the results of the above step to obtain the instantaneous frequency spectrum of each of the components. The Hilbert transform defines the imaginary part of the function to make it an analytic function (sometimes referred to as a progressive function), i.e. a function whose signal strength is zero for all frequency components less than zero. With the Hilbert transform, the singular vectors give instantaneous frequencies that are functions of time, so that the result is an energy distribution over time and frequency. The result is an ability to capture time-frequency localization to make the concept of instantaneous frequency and time relevant (the concept of instantaneous frequency is otherwise abstract or difficult to define for all but monocomponent signals). 
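A compact numerical sketch of this two-step procedure is added below; it assumes the intrinsic mode functions are already in hand (two synthetic tones stand in for real IMFs, since an EMD implementation is beyond this note) and uses SciPy's Hilbert transform for the second step. Bin edges, sample rate, and tone frequencies are arbitrary choices.

```python
# Added sketch of a Hilbert spectrum: given IMFs, compute instantaneous
# amplitude and frequency per mode and bin them into a time-frequency array.
import numpy as np
from scipy.signal import hilbert

fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
imfs = [np.cos(2 * np.pi * 30 * t),          # stand-ins for IMFs from an EMD step
        0.5 * np.cos(2 * np.pi * 7 * t)]

freq_bins = np.linspace(0, 50, 51)
spectrum = np.zeros((len(freq_bins) - 1, len(t) - 1))

for imf in imfs:
    analytic = hilbert(imf)
    amp = np.abs(analytic)[:-1]                       # instantaneous amplitude a_k(t)
    phase = np.unwrap(np.angle(analytic))
    freq = np.diff(phase) * fs / (2 * np.pi)          # instantaneous frequency
    rows = np.clip(np.digitize(freq, freq_bins) - 1, 0, len(freq_bins) - 2)
    spectrum[rows, np.arange(len(t) - 1)] += amp      # accumulate H(omega, t)

# The marginal spectrum integrates the time axis away.
marginal = spectrum.sum(axis=1) / fs
print(freq_bins[np.argsort(marginal)[-2:]])           # should be near 7 Hz and 30 Hz
```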
Definition For a given signal x(t) decomposed (with for example Empirical Mode Decomposition) to x(t) = C1(t) + C2(t) + … + CN(t) + r(t), where N is the number of intrinsic mode functions that x(t) consists of and r(t) is the residual, each mode can be written by means of the Hilbert transform as Ck(t) = ak(t)e^{jθk(t)}. The instantaneous angle frequency is then defined as ωk(t) = dθk(t)/dt. From this, we can define the Hilbert Spectrum for Ck(t) as Hk(ω, t) = ak(t) when ω = ωk(t), and zero otherwise. The Hilbert Spectrum of x(t) is then given by H(ω, t) = H1(ω, t) + … + HN(ω, t). Marginal Hilbert Spectrum A two dimensional representation of a Hilbert Spectrum, called Marginal Hilbert Spectrum, is defined as h(ω) = ∫ H(ω, t) dt, where " https://en.wikipedia.org/wiki/GP5%20chip,"The GP5 is a co-processor accelerator built to accelerate discrete belief propagation on factor graphs and other large-scale tensor product operations for machine learning. It is related to, and anticipated by a number of years, the Google Tensor Processing Unit. It is designed to run as a co-processor with another controller (such as a CPU (x86) or an ARM/MIPS/Tensilica core). It was developed as the culmination of DARPA's Analog Logic program. The GP5 has a fairly exotic architecture, resembling neither a GPU nor a DSP, and leverages massive fine-grained and coarse-grained parallelism. It is deeply pipelined. The different algorithmic tasks involved in performing belief propagation updates are performed by independent, heterogeneous compute units. The performance of the chip is governed by the structure of the machine learning workload being evaluated. In typical cases, the GP5 is roughly 100 times faster and 100 times more energy efficient than a single core of a modern Core i7 performing a comparable task. It is roughly 10 times faster and 1000 times more energy efficient than a state-of-the-art GPU. It is roughly 1000 times faster and 10 times more energy efficient than a state-of-the-art ARM processor. It was benchmarked on typical machine learning and inference workloads that included protein side-chain folding, turbo error correction decoding, stereo vision, signal noise reduction, and others. Analog Devices, Inc. acquired the intellectual property for the GP5 when it acquired Lyric Semiconductor, Inc. in 2011." https://en.wikipedia.org/wiki/Hot-carrier%20injection,"Hot carrier injection (HCI) is a phenomenon in solid-state electronic devices where an electron or a “hole” gains sufficient kinetic energy to overcome a potential barrier necessary to break an interface state. The term ""hot"" refers to the effective temperature used to model carrier density, not to the overall temperature of the device. Since the charge carriers can become trapped in the gate dielectric of a MOS transistor, the switching characteristics of the transistor can be permanently changed. Hot-carrier injection is one of the mechanisms that adversely affects the reliability of semiconductors of solid-state devices. Physics The term “hot carrier injection” usually refers to the effect in MOSFETs, where a carrier is injected from the conducting channel in the silicon substrate to the gate dielectric, which usually is made of silicon dioxide (SiO2). To become “hot” and enter the conduction band of SiO2, an electron must gain a kinetic energy of ~3.2 eV. For holes, the valence band offset in this case dictates they must have a kinetic energy of 4.6 eV. The term ""hot electron"" comes from the effective temperature term used when modelling carrier density (i.e., with a Fermi-Dirac function) and does not refer to the bulk temperature of the semiconductor (which can be physically cold, although the warmer it is, the higher the population of hot electrons it will contain all else being equal). 
The term “hot electron” was originally introduced to describe non-equilibrium electrons (or holes) in semiconductors. More broadly, the term describes electron distributions describable by the Fermi function, but with an elevated effective temperature. This greater energy affects the mobility of charge carriers and as a consequence affects how they travel through a semiconductor device. Hot electrons can tunnel out of the semiconductor material, instead of recombining with a hole or being conducted through the material to a collector. Consequent effects include increa" https://en.wikipedia.org/wiki/Multiseat%20configuration,"A multiseat, multi-station or multiterminal system is a single computer which supports multiple independent local users at the same time. A ""seat"" consists of all hardware devices assigned to a specific workplace at which one user sits at and interacts with the computer. It consists of at least one graphics device (graphics card or just an output (e.g. HDMI/VGA/DisplayPort port) and the attached monitor/video projector) for the output and a keyboard and a mouse for the input. It can also include video cameras, sound cards and more. Motivation Since the 1960s computers have been shared between users. Especially in the early days of computing when computers were extremely expensive the usual paradigm was a central mainframe computer connected to numerous terminals. With the advent of personal computing this paradigm has been largely replaced by personal computers (or one computer per user). Multiseat setups are a return to this multiuser paradigm but based around a PC which supports a number of zero-clients usually consisting of a terminal per user (screen, keyboard, mouse). In some situations a multiseat setup is more cost-effective because it is not necessary to buy separate motherboards, microprocessors, RAM, hard disks and other components for each user. For example, buying one high speed CPU, usually costs less than buying several slower CPUs. History In the 1970s, it was very commonplace to connect multiple computer terminals to a single mainframe computer, even graphical terminals. Early terminals were connected with RS-232 type serial connections, either directly, or through modems. With the advent of Internet Protocol based networking, it became possible for multiple users to log into a host using telnet or – for a graphic environment – an X Window System ""server"". These systems would retain a physically secure ""root console"" for system administration and direct access to the host machine. Support for multiple consoles in a PC running the X interface w" https://en.wikipedia.org/wiki/Molecular%20risk%20assessment,"Molecular risk assessment is a procedure in which biomarkers (for example, biological molecules or changes in tumor cell DNA) are used to estimate a person's risk for developing cancer. Specific biomarkers may be linked to particular types of cancer. Sources External links Molecular risk assessment entry in the public domain NCI Dictionary of Cancer Terms Biological techniques and tools Cancer screening" https://en.wikipedia.org/wiki/Copeland%E2%80%93Erd%C5%91s%20constant,"The Copeland–Erdős constant is the concatenation of ""0."" with the base 10 representations of the prime numbers in order. Its value, using the modern definition of prime, is approximately 0.235711131719232931374143… . The constant is irrational; this can be proven with Dirichlet's theorem on arithmetic progressions or Bertrand's postulate (Hardy and Wright, p. 
113) or Ramaré's theorem that every even integer is a sum of at most six primes. It also follows directly from its normality (see below). By a similar argument, any constant created by concatenating ""0."" with all primes in an arithmetic progression dn + a, where a is coprime to d and to 10, will be irrational; for example, primes of the form 4n + 1 or 8n + 1. By Dirichlet's theorem, the arithmetic progression $dn \cdot 10^m + a$ contains primes for all m, and those primes are also in dn + a, so the concatenated primes contain arbitrarily long sequences of the digit zero. In base 10, the constant is a normal number, a fact proven by Arthur Herbert Copeland and Paul Erdős in 1946 (hence the name of the constant). The constant is given by $\sum_{n=1}^{\infty} p_n 10^{-\left(n + \sum_{k=1}^{n} \lfloor \log_{10} p_k \rfloor\right)}$, where $p_n$ is the nth prime number. Its continued fraction is [0; 4, 4, 8, 16, 18, 5, 1, …]. Related constants Copeland and Erdős's proof that their constant is normal relies only on the fact that $p_n$ is strictly increasing and $p_n = n^{1+o(1)}$, where $p_n$ is the nth prime number. More generally, if $a_n$ is any strictly increasing sequence of natural numbers such that $a_n = n^{1+o(1)}$ and $b$ is any natural number greater than or equal to 2, then the constant obtained by concatenating ""0."" with the base-$b$ representations of the $a_n$'s is normal in base $b$. For example, the sequence $\lfloor n(\ln n)^2 \rfloor$ satisfies these conditions, so the constant 0.003712192634435363748597110122136… is normal in base 10, and 0.003101525354661104…_7 is normal in base 7. In any given base b the number which can be written in base b as 0.0110101000101000101…_b, where the nth digit is 1 if and only if n is prime, is irrational. See also Smarandache–Wellin numbers: " https://en.wikipedia.org/wiki/Connectedness,"In mathematics, connectedness is used to refer to various properties meaning, in some sense, ""all one piece"". When a mathematical object has such a property, we say it is connected; otherwise it is disconnected. When a disconnected object can be split naturally into connected pieces, each piece is usually called a component (or connected component). Connectedness in topology A topological space is said to be connected if it is not the union of two disjoint nonempty open sets. A set is open if it contains no point lying on its boundary; thus, in an informal, intuitive sense, the fact that a space can be partitioned into disjoint open sets suggests that the boundary between the two sets is not part of the space, and thus splits it into two separate pieces. Other notions of connectedness Fields of mathematics are typically concerned with special kinds of objects. Often such an object is said to be connected if, when it is considered as a topological space, it is a connected space. Thus, manifolds, Lie groups, and graphs are all called connected if they are connected as topological spaces, and their components are the topological components. Sometimes it is convenient to restate the definition of connectedness in such fields. For example, a graph is said to be connected if each pair of vertices in the graph is joined by a path. This definition is equivalent to the topological one, as applied to graphs, but it is easier to deal with in the context of graph theory. Graph theory also offers a context-free measure of connectedness, called the clustering coefficient. Other fields of mathematics are concerned with objects that are rarely considered as topological spaces. Nonetheless, definitions of connectedness often reflect the topological meaning in some way.
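The graph-theoretic notion of connectedness mentioned above (every pair of vertices joined by a path) can be checked mechanically; the following sketch is a minimal breadth-first-search test over an adjacency-list graph, with the two toy graphs invented purely for illustration.

from collections import deque

def is_connected(adjacency: dict) -> bool:
    """Return True if every pair of vertices is joined by a path (graph must be non-empty)."""
    if not adjacency:
        return False
    start = next(iter(adjacency))
    seen = {start}
    queue = deque([start])
    while queue:
        vertex = queue.popleft()
        for neighbour in adjacency[vertex]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return len(seen) == len(adjacency)

# A path on three vertices (connected) and the same graph with an isolated vertex (disconnected).
print(is_connected({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))   # True
print(is_connected({"a": ["b"], "b": ["a"], "c": []}))           # False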
For example, in category theory, a category is said to be connected if each pair of objects in it is joined by a sequence of morphisms. Thus, a category is connected if it is, intuitively, all one piece. There ma" https://en.wikipedia.org/wiki/Pulsed-field%20gel%20electrophoresis,"Pulsed-field gel electrophoresis (PFGE) is a technique used for the separation of large DNA molecules by applying to a gel matrix an electric field that periodically changes direction. In a uniform electric field, DNA fragments larger than about 50 kb co-migrate and cannot be resolved; by periodically changing the direction of the field, the molecules are forced to reorient and move through the gel in a zigzag pattern, allowing for more effective separation of large DNA molecules. This method is commonly used in microbiology for typing bacteria and is a valuable tool for epidemiological studies and gene mapping in microbes and mammalian cells. It also played a role in the development of large-insert cloning systems such as bacterial and yeast artificial chromosomes. PFGE can be used to determine the genetic similarity between bacteria, as close and similar species will have similar profiles while dissimilar ones will have different profiles. This feature is useful in identifying the prevalent agent of a disease. Additionally, it can be used to monitor and evaluate micro-organisms in clinical samples, soil and water. It is also considered a reliable and standard method in vaccine preparation. In recent years, PFGE has been widely used as a powerful tool for controlling, preventing and monitoring diseases in different populations. Discovery The discovery of PFGE can be traced back to the late 1970s and early 1980s. One of the earliest references to the use of PFGE for DNA analysis is a 1977 paper by Dr. David Burke and colleagues at the University of Colorado, where they described a method of separating DNA molecules based on their size using conventional gel electrophoresis. The first reference to the use of the term ""pulsed-field gel electrophoresis"" appears in a 1983 paper by Dr. Richard L. Sweeley and colleagues at the DuPont Company, where they described a method of separating large DNA molecules (over 50 kb) by applying a series of alternating electric fields to a gel matrix. In the" https://en.wikipedia.org/wiki/Circumscriptional%20name,"In biological classification, circumscriptional names are taxon names that are not ruled by ICZN and are defined by the particular set of members included. Circumscriptional names are used mainly for taxa above family-group level (e.g. order or class), but can also be used for taxa of any rank, as well as for rank-less taxa. Non-typified names other than those of the genus- or species-group constitute the majority of generally accepted names of taxa higher than superfamily. The ICZN regulates names of taxa up to family group rank (i.e. superfamily). There are no generally accepted rules of naming higher taxa (orders, classes, phyla, etc.). Under the approach of circumscription-based (circumscriptional) nomenclatures, a circumscriptional name is associated with a certain circumscription of a taxon without regard to its rank or position. Some authors advocate introducing a mandatory standardized typified nomenclature of higher taxa. They suggest all names of higher taxa to be derived in the same manner as family-group names, i.e. by modifying names of type genera with endings to reflect the rank. There is no consensus on what such higher rank endings should be.
A number of established practices exist as to the use of typified names of higher taxa, depending on animal group. See also Descriptive botanical name, optional forms still used in botany for ranks above family and for a few family names" https://en.wikipedia.org/wiki/Biositemap,"A Biositemap is a way for a biomedical research institution of organisation to show how biological information is distributed throughout their Information Technology systems and networks. This information may be shared with other organisations and researchers. The Biositemap enables web browsers, crawlers and robots to easily access and process the information to use in other systems, media and computational formats. Biositemaps protocols provide clues for the Biositemap web harvesters, allowing them to find resources and content across the whole interlink of the Biositemap system. This means that human or machine users can access any relevant information on any topic across all organisations throughout the Biositemap system and bring it to their own systems for assimilation or analysis. File framework The information is normally stored in a biositemap.rdf or biositemap.xml file which contains lists of information about the data, software, tools material and services provided or held by that organisation. Information is presented in metafields and can be created online through sites such as the biositemaps online editor. The information is a blend of sitemaps and RSS feeds and is created using the Information Model (IM) and Biomedical Resource Ontology (BRO). The IM is responsible for defining the data held in the metafields and the BRO controls the terminology of the data held in the resource_type field. The BRO is critical in aiding the interactivity of both the other organisations and third parties to search and refine those searches. Data formats The Biositemaps Protocol allows scientists, engineers, centers and institutions engaged in modeling, software tool development and analysis of biomedical and informatics data to broadcast and disseminate to the world the information about their latest computational biology resources (data, software tools and web services). The biositemap concept is based on ideas from Efficient, Automated Web Resource Harvesting an" https://en.wikipedia.org/wiki/Cantor%27s%20diagonal%20argument,"In set theory, Cantor's diagonal argument, also called the diagonalisation argument, the diagonal slash argument, the anti-diagonal argument, the diagonal method, and Cantor's diagonalization proof, was published in 1891 by Georg Cantor as a mathematical proof that there are infinite sets which cannot be put into one-to-one correspondence with the infinite set of natural numbers. Such sets are now known as uncountable sets, and the size of infinite sets is now treated by the theory of cardinal numbers which Cantor began. The diagonal argument was not Cantor's first proof of the uncountability of the real numbers, which appeared in 1874. However, it demonstrates a general technique that has since been used in a wide range of proofs, including the first of Gödel's incompleteness theorems and Turing's answer to the Entscheidungsproblem. Diagonalization arguments are often also the source of contradictions like Russell's paradox and Richard's paradox. Uncountable set Cantor considered the set T of all infinite sequences of binary digits (i.e. each digit is zero or one). He begins with a constructive proof of the following lemma: If s1, s2, ... , sn, ... 
is any enumeration of elements from T, then an element s of T can be constructed that doesn't correspond to any sn in the enumeration. The proof starts with an enumeration of elements from T, for example: s1 = (0, 0, 0, 0, 0, 0, 0, ...), s2 = (1, 1, 1, 1, 1, 1, 1, ...), s3 = (0, 1, 0, 1, 0, 1, 0, ...), s4 = (1, 0, 1, 0, 1, 0, 1, ...), s5 = (1, 1, 0, 1, 0, 1, 1, ...), s6 = (0, 0, 1, 1, 0, 1, 1, ...), s7 = (1, 0, 0, 0, 1, 0, 0, ...), ... Next, a sequence s is constructed by choosing the 1st digit as complementary to the 1st digit of s1 (swapping 0s for 1s and vice versa), the 2nd digit as complementary to the 2nd dig" https://en.wikipedia.org/wiki/Biological%20network,"A biological network is a method of representing systems as complex sets of binary interactions or relations between various biological entities. In general, networks or graphs are used to capture relationships between entities or objects. A typical graphing representation consists of a set of nodes connected by edges. History of networks As early as 1736 Leonhard Euler analyzed a real-world issue known as the Seven Bridges of Königsberg, which established the foundation of graph theory. From the 1930s to the 1950s, the study of random graphs was developed. During the mid-1990s, it was discovered that many different types of ""real"" networks have structural properties quite different from random networks. In the late 2000s, scale-free and small-world networks began shaping the emergence of systems biology, network biology, and network medicine. In 2014, graph theoretical methods were used by Frank Emmert-Streib to analyze biological networks. In the 1980s, researchers started viewing DNA or genomes as the dynamic storage of a language system with precise computable finite states represented as a finite state machine. Recent complex systems research has also suggested some far-reaching commonality in the organization of information in problems from biology, computer science, and physics. Networks in biology Protein–protein interaction networks Protein–protein interaction networks (PINs) represent the physical relationship among proteins present in a cell, where proteins are nodes, and their interactions are undirected edges. Due to their undirected nature, it is difficult to identify all the proteins involved in an interaction. Protein–protein interactions (PPIs) are essential to cellular processes and are also the most intensely analyzed networks in biology. PPIs could be discovered by various experimental techniques, among which the yeast two-hybrid system is a commonly used technique for the study of binary interactions. Recently, high-throughput studies using 
Period rule If is a period of function then Connection to indefinite sum Indefinite product can be expressed in terms of indefinite sum: Alternative usage Some authors use the phrase ""indefinite product"" in a slightly different but related way to describe a product in which the numerical value of the upper limit is not given. e.g. . Rules List of indefinite products This is a list of indefinite products . Not all functions have an indefinite product which can be expressed in elementary functions. (see K-function) (see Barnes G-function) (see super-exponential function) See also Indefinite sum Product integral List of derivatives and integrals in alternative calculi Fractal derivative" https://en.wikipedia.org/wiki/BIBO%20stability,"In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded. A signal is bounded if there is a finite value such that the signal magnitude never exceeds , that is For discrete-time signals: For continuous-time signals: Time-domain condition for linear time-invariant systems Continuous-time necessary and sufficient condition For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, , be absolutely integrable, i.e., its L1 norm exists. Discrete-time sufficient condition For a discrete time LTI system, the condition for BIBO stability is that the impulse response be absolutely summable, i.e., its norm exists. Proof of sufficiency Given a discrete time LTI system with impulse response the relationship between the input and the output is where denotes convolution. Then it follows by the definition of convolution Let be the maximum value of , i.e., the -norm. (by the triangle inequality) If is absolutely summable, then and So if is absolutely summable and is bounded, then is bounded as well because . The proof for continuous-time follows the same arguments. Frequency-domain condition for linear time-invariant systems Continuous-time signals For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the ""largest pole"", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s" https://en.wikipedia.org/wiki/Allometry,"Allometry is the study of the relationship of body size to shape, anatomy, physiology and finally behaviour, first outlined by Otto Snell in 1892, by D'Arcy Thompson in 1917 in On Growth and Form and by Julian Huxley in 1932. Overview Allometry is a well-known study, particularly in statistical shape analysis for its theoretical developments, as well as in biology for practical applications to the differential growth rates of the parts of a living organism's body. 
One application is in the study of various insect species (e.g., Hercules beetles), where a small change in overall body size can lead to an enormous and disproportionate increase in the dimensions of appendages such as legs, antennae, or horns. The relationship between the two measured quantities is often expressed as a power law equation (allometric equation) which expresses a remarkable scale symmetry: $y = kx^{a}$, or in a logarithmic form, $\log y = a \log x + \log k$, or similarly, $\ln y = a \ln x + \ln k$, where $a$ is the scaling exponent of the law. Methods for estimating this exponent from data can use type-2 regressions, such as major axis regression or reduced major axis regression, as these account for the variation in both variables, contrary to least-squares regression, which does not account for error variance in the independent variable (e.g., log body mass). Other methods include measurement-error models and a particular kind of principal component analysis. The allometric equation can also be acquired as a solution of the differential equation $\frac{dy}{dx} = a\,\frac{y}{x}$. Allometry often studies shape differences in terms of ratios of the objects' dimensions. Two objects of different size, but common shape, have their dimensions in the same ratio. Take, for example, a biological object that grows as it matures. Its size changes with age, but the shapes are similar. Studies of ontogenetic allometry often use lizards or snakes as model organisms both because they lack parental care after birth or hatching and because they exhibit a large range of body sizes between the juv" https://en.wikipedia.org/wiki/Rice%20University%20Electrical%20and%20Computer%20Engineering,"The Rice University Department of Electrical and Computer Engineering is one of nine academic departments at the George R. Brown School of Engineering at Rice University. Ashutosh Sabharwal is the Department Chair. Originally the Rice Department of Electrical Engineering, it was renamed in 1984 to Electrical and Computer Engineering. Research Rice ECE Faculty perform research in the following areas: Computer Engineering; Data Science; Neuroengineering; Photonics, Electronics and Nano-devices; and Systems. Rice has a long history in digital signal processing (DSP) dating back to its inception in the late 1960s. Computer Engineering faculty have a research focus in analog and mixed-signal design, VLSI signal processing, computer architecture and embedded systems, biosensors and computer vision, and hardware security and storage systems, including applications to education. Biosensors and mobile wireless healthcare are growing application areas in embedded systems research. Smartphones with imaging devices are leading to new areas in computer vision and sensing. In the area of computer architecture, research interests include parallel computing, large-scale storage systems, and resource scheduling for performance and power. Data Science faculty integrate the foundations, tools and techniques involving data acquisition (sensors and systems), data analytics (machine learning, statistics), data storage and computing infrastructure (GPU/CPU computing, FPGAs, cloud computing, security and privacy) in order to enable meaningful extraction of actionable information from diverse and potentially massive data sources. Neuroengineering faculty are members of the Rice Center for Neuroengineering, a collaborative effort with Texas Medical Center researchers. They develop technology for treating and diagnosing neural diseases.
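Returning to the allometric power law discussed above, the scaling exponent is normally estimated on log-transformed data; the sketch below, using synthetic data with an assumed noise model, contrasts an ordinary least-squares slope with a reduced major axis (type-2) slope, computed as sign(r)·s_y/s_x so that variation in both variables is taken into account.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic allometric data: y = k * x^a with multiplicative noise (a = 0.75, k = 2).
x = rng.uniform(1.0, 100.0, size=200)
y = 2.0 * x ** 0.75 * rng.lognormal(sigma=0.1, size=x.size)

log_x, log_y = np.log(x), np.log(y)

# Ordinary least squares: treats log x as error-free.
ols_slope = np.polyfit(log_x, log_y, 1)[0]

# Reduced major axis regression: slope = sign(r) * sd(log y) / sd(log x).
r = np.corrcoef(log_x, log_y)[0, 1]
rma_slope = np.sign(r) * np.std(log_y, ddof=1) / np.std(log_x, ddof=1)

print(f"OLS estimate of a: {ols_slope:.3f}")
print(f"RMA estimate of a: {rma_slope:.3f}")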
Current research areas include interrogating neural circuits at the cellular level, analyzing neuronal data in real-time, and manipulating healthy or dise" https://en.wikipedia.org/wiki/Network%20virtualization%20platform,"A network virtualization platform decouples the hardware plane from the software plane such that the host hardware plane can be administratively programmed to assign its resources to the software plane. This allows for the virtualization of CPU, memory, disk and most importantly network IO. Upon such virtualization of hardware resources, the platform can accommodate multiple virtual network applications such as firewalls, routers, Web filters, and intrusion prevention systems, all functioning much like standalone hardware appliances, but contained within a single hardware appliance. The key benefit to such technology is doing all of this while maintaining the network performance typically seen with that of standalone network appliances as well as enabling the ability to administratively or dynamically program resources at will. Server virtualization history Server virtualization, a technology that has become mainstream, originally gained popularity when VMware entered the market in 2001 with its GSX server software. This technology gave IT organizations the ability to reduce the amount of rack space required to accommodate multiple servers and reduced the cost of powering and cooling data centers by consolidating server based applications onto a single piece of hardware. One of the problems with server virtualization is in how applications are networked together. Within a server virtualization environment, applications are interconnected by what is referred to as a virtual switch, which is very different from high-performing hardware-based network switches offered by the likes of Juniper Networks and Cisco Systems. Virtual switches are software-based switches and rely on the movement of packets up and down a software stack which relies on the same CPUs which are being used to drive the applications. Because of this software approach to switching, networking applications such as firewalls and routers, which require high levels of throughput and low levels of latenc" https://en.wikipedia.org/wiki/Counting%20rods,"Counting rods () are small bars, typically 3–14 cm (1"" to 6"") long, that were used by mathematicians for calculation in ancient East Asia. They are placed either horizontally or vertically to represent any integer or rational number. The written forms based on them are called rod numerals. They are a true positional numeral system with digits for 1–9 and a blank for 0, from the Warring states period (circa 475 BCE) to the 16th century. History Chinese arithmeticians used counting rods well over two thousand years ago. In 1954 forty-odd counting rods of the Warring States period (5th century BCE to 221 BCE) were found in Zuǒjiāgōngshān (左家公山) Chu Grave No.15 in Changsha, Hunan. In 1973 archeologists unearthed a number of wood scripts from a tomb in Hubei dating from the period of the Han dynasty (206 BCE to 220 CE). On one of the wooden scripts was written: ""当利二月定算𝍥"". This is one of the earliest examples of using counting-rod numerals in writing. A square lacquer box, dating from c. 168 BCE, containing a square chess board with the TLV patterns, chessmen, counting rods, and other items, was excavated in 1972, from Mawangdui M3, Changsha, Hunan Province. 
In 1976 a bundle of Western Han-era (202 BCE to 9 CE) counting rods made of bones was unearthed from Qianyang County in Shaanxi. The use of counting rods must predate it; Sunzi ( 544 to 496 BCE), a military strategist at the end of Spring and Autumn period of 771 BCE to 5th century BCE, mentions their use to make calculations to win wars before going into the battle; Laozi (died 531 BCE), writing in the Warring States period, said ""a good calculator doesn't use counting rods"". The Book of Han (finished 111 CE) recorded: ""they calculate with bamboo, diameter one fen, length six cun, arranged into a hexagonal bundle of two hundred seventy one pieces"". At first, calculating rods were round in cross-section, but by the time of the Sui dynasty (581 to 618 CE) mathematicians used triangular rods to represent po" https://en.wikipedia.org/wiki/Blue%20ice%20%28aviation%29,"In aviation, blue ice is frozen sewage material that has leaked mid-flight from commercial aircraft lavatory waste systems. It is a mixture of human biowaste and liquid disinfectant that freezes at high altitude. The name comes from the blue color of the disinfectant. Airlines are not allowed to dump their waste tanks mid-flight, and pilots have no mechanism by which to do so; however, leaks sometimes do occur from a plane's septic tank. Danger of ground impact There were at least 27 documented incidents of blue ice impacts in the United States between 1979 and 2003. These incidents typically happen under airport landing paths as the mass warms sufficiently to detach from the plane during its descent. A rare incident of falling blue ice causing damage to the roof of a home was reported on October 20, 2006 in Chino, California. A similar incident was reported in Leicester, UK, in 2007. In 1971, a chunk of ice from an aircraft tore a large hole in the roof of the Essex Street Chapel in Kensington, London, and was one trigger for the demolition of the building. In November 2011, a chunk of ice, the size of an orange, broke through the roof of a private house in Ratingen-Hösel, Germany. In February 2013, a ""football sized"" ball of blue ice smashed through a conservatory roof in Clanfield, Hampshire, causing around £10,000 worth of damage. In October 2016, a chunk of ice tore a hole in a private house in Amstelveen, The Netherlands. In two incidents in May 2018, chunks of blue ice fell onto residents in Kelowna, British Columbia. In November 2018, a chunk of ice fell from the sky and crashed through the roof of a home in Bristol, England. Danger to aircraft Blue ice can also be dangerous to the aircraft the National Transportation Safety Board has recorded three very similar incidents where waste from lavatories caused damage to the leaking aircraft, all involving Boeing 727s. In all three cases, waste from a leaking lavatory hit one (or the other) of the three" https://en.wikipedia.org/wiki/Label%20switching,"Label switching is a technique of network relaying to overcome the problems perceived by traditional IP-table switching (also known as traditional layer 3 hop-by-hop routing). Here, the switching of network packets occurs at a lower level, namely the data link layer rather than the traditional network layer. Each packet is assigned a label number and the switching takes place after examination of the label assigned to each packet. The switching is much faster than IP-routing. New technologies such as Multiprotocol Label Switching (MPLS) use label switching. 
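To make the label-swapping idea described above concrete, here is a minimal, purely illustrative sketch of per-hop forwarding state: each node looks up the incoming label, rewrites it, and forwards the packet on the indicated interface. The node names, interfaces, and label values are invented for the example and are not taken from any real MPLS configuration.

# Per-node label forwarding table: incoming label -> (outgoing interface, outgoing label).
# Labels are only locally significant, so each hop is free to choose its own values.
FORWARDING = {
    "ingress": {"*": ("if0", 17)},     # ingress pushes an initial label on unlabeled traffic
    "core1":   {17: ("if2", 42)},      # swap 17 -> 42
    "core2":   {42: ("if1", 8)},       # swap 42 -> 8
    "egress":  {8: ("local", None)},   # pop label and deliver
}

PATH = ["ingress", "core1", "core2", "egress"]

def forward(initial_label):
    label = initial_label
    for node in PATH:
        table = FORWARDING[node]
        out_if, out_label = table.get(label, table.get("*", (None, None)))
        print(f"{node}: in-label={label} -> out-interface={out_if}, out-label={out_label}")
        label = out_label

forward("*")   # an unlabeled packet entering at the ingress node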
The established ATM protocol also uses label switching at its core. According to (An Architecture for Differentiated Services, December 1998): ""Examples of the label switching (or virtual circuit) model include Frame Relay, ATM, and MPLS. In this model path forwarding state and traffic management or quality of service (QoS) state is established for traffic streams on each hop along a network path. Traffic aggregates of varying granularity are associated with a label switched path at an ingress node, and packets/cells within each label switched path are marked with a forwarding label that is used to look up the next-hop node, the per-hop forwarding behavior, and the replacement label at each hop. This model permits finer granularity resource allocation to traffic streams, since label values are not globally significant but are only significant on a single link; therefore resources can be reserved for the aggregate of packets/cells received on a link with a particular label, and the label switching semantics govern the next-hop selection, allowing a traffic stream to follow a specially engineered path through the network."" A related topic is ""Multilayer Switching,"" which discusses silicon-based wire-speed routing devices that examine not only layer 3 packet information, but also layer 4 (transport) and layer 7 (application) information." https://en.wikipedia.org/wiki/Adesto%20Technologies,"Adesto Technologies is an American corporation founded in 2006 and based in Santa Clara, California. The company provides application-specific semiconductors and embedded systems for the Internet of Things (IoT), and sells its products directly to original equipment manufacturers (OEMs) and original design manufacturers (ODMs) that manufacture products for its end customers. In 2020, Adesto was bought by Dialog Semiconductor. History Adesto Technologies was founded by Narbeh Derhacobian, Shane Hollmer, and Ishai Naveh in 2006. Derhacobian formerly served in senior technical and managerial roles at AMD, Virage Logic, and Cswitch Corporations. The company developed a non-volatile memory based on the movement of copper ions in a programmable metallization cell technology licensed from Axon Technologies Corp., a spinoff of Arizona State University. In October 2010, Adesto acquired intellectual property and patents related to Conductive Bridging Random Access Memory (CBRAM) technology from Qimonda AG, and their first CBRAM product began production in 2011. In 2015, the company held an initial public offering under the symbol IOTS, which entered the market at $5 per share. Underwriters included Needham & Company, Oppenheimer & Co. Inc., and Roth Capital Partners. The entire offering was valued at $28.75 million. Between May and September 2018, Adesto completed two acquisitions of S3 Semiconductors and Echelon Corporation. In May, the company acquired S3 Semiconductors, a provider of analog and mixed-signal ASICs and Intellectual Property (IP) cores. In June, the company announced its intention to buy Echelon Corporation, a home and industrial automation company, for $45 million. The acquisition was completed three months later. The company's offerings were expanded to include ASICs and IP from S3 Semiconductors and embedded systems from Echelon Corporation, in addition to its original non-volatile memory (NVM) products. 
In 2020, Adesto was acquired by Dialog Semicon" https://en.wikipedia.org/wiki/Census%20of%20Marine%20Zooplankton,"The Census of Marine Zooplankton is a field project of the Census of Marine Life that has aimed to produce a global assessment of the species diversity, biomass, biogeographic distribution, and genetic diversity of more than 7,000 described species of zooplankton that drift the ocean currents throughout their lives. CMarZ focuses on the deep sea, under-sampled regions, and biodiversity hotspots. From 2004 until 2011, Ann Bucklin was the lead scientist for the project. Technology plays a great role in CMarZ's research, including the use of integrated morphological and molecular sampling through DNA Barcoding. CMarZ makes its datasets available via the CMarZ Database." https://en.wikipedia.org/wiki/Altos%20586,"The Altos 586 was a multi-user microcomputer intended for the business market. It was introduced by Altos Computer Systems in 1983. A configuration with 512 kB of RAM, an Intel 8086 processor, Microsoft Xenix, and 10 MB hard drive cost about US$8,000. 3Com offered this Altos 586 product as a file server for their IBM PC networking solution in spring 1983. The network was 10BASE2 (thin-net) based, with an Ethernet AUI port on the Altos 586. Reception BYTE in August 1984 called the Altos 586 ""an excellent multiuser UNIX system"", with ""the best performance"" for the price among small Unix systems. The magazine reported that a Altos with 512 kB RAM and 40 MB hard drive ""under moderate load approaches DEC VAX performance for most tasks that a user would normally invoke"". A longer review in March 1985 stated that ""despite some bugs, it's a good product"". It criticized the documentation and lack of customer service for developers, but praised the multiuser performance. The author reported that his 586 had run a multiuser bulletin board system 24 hours a day for more than two years with no hardware failures. He concluded that ""Very few UNIX or XENIX computers can provide all of the features of the 586 for $8990"", especially for multiuser turnkey business users. See also Fortune XP 20" https://en.wikipedia.org/wiki/Copurification,"Copurification in a chemical or biochemical context is the physical separation by chromatography or other purification technique of two or more substances of interest from other contaminating substances. For substances to co-purify usually implies that these substances attract each other to form a non-covalent complex such as in a protein complex. However, when fractionating mixtures, especially mixtures containing large numbers of components (for example a cell lysate), it is possible by chance that some components may copurify even though they don't form complexes. In this context the term copurification is sometimes used to denote when two biochemical activities or some other property are isolated together after purification but it is not certain if the sample has been purified to homogeneity (i.e., contains only one molecular species or one molecular complex). Hence these activities or properties are likely but not guaranteed to reside on the same molecule or in the same molecular complex. Applications Copurification procedures, such as co-immunoprecipitation, are commonly used to analyze interactions between proteins. Copurification is one method used to map the interactome of living organisms." 
https://en.wikipedia.org/wiki/Phase%20vocoder,"A phase vocoder is a type of vocoder-purposed algorithm which can interpolate information present in the frequency and time domains of audio signals by using phase information extracted from a frequency transform. The computer algorithm allows frequency-domain modifications to a digital sound file (typically time expansion/compression and pitch shifting). At the heart of the phase vocoder is the short-time Fourier transform (STFT), typically coded using fast Fourier transforms. The STFT converts a time domain representation of sound into a time-frequency representation (the ""analysis"" phase), allowing modifications to the amplitudes or phases of specific frequency components of the sound, before resynthesis of the time-frequency domain representation into the time domain by the inverse STFT. The time evolution of the resynthesized sound can be changed by means of modifying the time position of the STFT frames prior to the resynthesis operation allowing for time-scale modification of the original sound file. Phase coherence problem The main problem that has to be solved for all cases of manipulation of the STFT is the fact that individual signal components (sinusoids, impulses) will be spread over multiple frames and multiple STFT frequency locations (bins). This is because the STFT analysis is done using overlapping analysis windows. The windowing results in spectral leakage such that the information of individual sinusoidal components is spread over adjacent STFT bins. To avoid border effects of tapering of the analysis windows, STFT analysis windows overlap in time. This time overlap results in the fact that adjacent STFT analyses are strongly correlated (a sinusoid present in analysis frame at time ""t"" will be present in the subsequent frames as well). The problem of signal transformation with the phase vocoder is related to the problem that all modifications that are done in the STFT representation need to preserve the appropriate correlation between adja" https://en.wikipedia.org/wiki/Collision%20avoidance%20%28networking%29,"In computer networking and telecommunication, collision-avoidance methods try to avoid resource contention by attempting to avoid simultaneous attempts to access the same resource. Collision-avoidance methods include prior scheduling of timeslots, carrier-detection schemes, randomized access times, and exponential backoff after collision detection. See also Carrier sense multiple access with collision avoidance Polling Collision domain External links Channel access methods Computer networking" https://en.wikipedia.org/wiki/Decapping,"Decapping (decapsulation) or delidding of an integrated circuit is the process of removing the protective cover or integrated heat spreader (IHS) of an integrated circuit so that the contained die is revealed for visual inspection of the micro circuitry imprinted on the die. This process is typically done in order to debug a manufacturing problem with the chip, or possibly to copy information from the device, to check for counterfeit chips or to reverse engineer it. Companies such as TechInsights and ChipRebel decap, take die shots of, and reverse engineer chips for customers. Modern integrated circuits can be encapsulated in plastic, ceramic, or epoxy packages. 
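The phase-vocoder time-scale modification described earlier (STFT analysis, frame repositioning, resynthesis) can be sketched compactly; the version below is a minimal illustration built on SciPy's STFT/ISTFT with simple per-bin phase accumulation, not a production-quality or transient-preserving implementation, and the test tone at the end is an arbitrary example.

import numpy as np
from scipy.signal import stft, istft

def time_stretch(x, rate, fs, n_fft=1024, hop=256):
    """Stretch signal duration by 1/rate with a basic phase vocoder.
    rate > 1 shortens the signal, rate < 1 lengthens it; pitch is preserved."""
    _, _, Z = stft(x, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    mag, phase = np.abs(Z), np.angle(Z)

    frame_positions = np.arange(0, Z.shape[1] - 1, rate)        # synthesis read positions
    bin_advance = 2 * np.pi * hop * np.arange(Z.shape[0]) / n_fft   # nominal phase advance per hop

    out = np.zeros((Z.shape[0], len(frame_positions)), dtype=complex)
    acc_phase = phase[:, 0].copy()
    for i, pos in enumerate(frame_positions):
        k = int(pos)
        frac = pos - k
        # Interpolate magnitude between neighbouring analysis frames.
        m = (1.0 - frac) * mag[:, k] + frac * mag[:, k + 1]
        out[:, i] = m * np.exp(1j * acc_phase)
        # Accumulate the wrapped, measured phase increment of the analysis frames.
        dphi = phase[:, k + 1] - phase[:, k] - bin_advance
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))
        acc_phase += bin_advance + dphi

    _, y = istft(out, fs=fs, nperseg=n_fft, noverlap=n_fft - hop)
    return y

fs = 22050
t = np.arange(0, 1.0, 1.0 / fs)
tone = np.sin(2 * np.pi * 440.0 * t)
slowed = time_stretch(tone, rate=0.5, fs=fs)   # roughly twice as long, same pitch
print(len(tone), len(slowed))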
Delidding may also be done in an effort to reduce the operating temperatures of an integrated circuit such as a processor, by replacing the thermal interface material (TIM) between the die and the IHS with a higher-quality TIM. With care, it's possible to decap a device and still leave it functional. Method Decapping is usually carried out by chemical etching of the covering, laser cutting, laser evaporation of the covering, plasma etching or mechanical removal of the cover using a milling machine, saw blade or by desoldering and cutting. The process can be either destructive or non-destructive of the internal die. Chemical etching usually involves subjecting the (if made of plastic) IC package to concentrated or fuming nitric acid, heated concentrated sulfuric acid, white fuming nitric acid or a mixture of the two for some time, possibly while applying heat externally with a hot plate or hot air gun, which dissolve the package while leaving the die intact. The acids are dangerous, so protective equipment such as appropriate gloves, full face respirator with appropriate acid cartridges, a lab coat and a fume hood are required. Laser decapping scans a high power laser beam across the plastic IC package to vaporize it, while avoiding the actual silicon die. In a common version of non-destructive, mechani" https://en.wikipedia.org/wiki/Computer%20engineering%20compendium,"This is a list of the individual topics in Electronics, Mathematics, and Integrated Circuits that together make up the Computer Engineering field. The organization is by topic to create an effective Study Guide for this field. The contents match the full body of topics and detail information expected of a person identifying themselves as a Computer Engineering expert as laid out by the National Council of Examiners for Engineering and Surveying. It is a comprehensive list and superset of the computer engineering topics generally dealt with at any one time. 
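Looping back to the collision-avoidance methods listed earlier, randomized exponential backoff is easy to state in code; the sketch below is a generic illustration of truncated binary exponential backoff with a random slot choice, and the slot time and cap are illustrative values rather than parameters of any particular standard.

import random

SLOT_TIME = 51.2e-6      # example slot duration in seconds (illustrative value)
MAX_EXPONENT = 10        # cap on the contention-window growth

def backoff_delay(collision_count: int) -> float:
    """Pick a random wait time after the given number of consecutive collisions.
    The contention window doubles with each collision, up to a cap."""
    exponent = min(collision_count, MAX_EXPONENT)
    slots = random.randint(0, 2 ** exponent - 1)
    return slots * SLOT_TIME

random.seed(1)
for attempt in range(1, 6):
    print(f"after collision {attempt}: wait {backoff_delay(attempt) * 1e6:8.1f} microseconds")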
Part 1 - Basics Character Encoding Character (computing) Universal Character Set IEEE 1394 ASCII Math Bitwise operation Signed number representations IEEE floating point Operators in C and C++ De Morgan's laws Booth's multiplication algorithm Binary multiplier Wallace tree Dadda multiplier Multiply–accumulate operation Big O notation Euler's identity Basic Electronics Series and parallel circuits RLC circuit Transistor Operational amplifier applications Signal Processing Signal processing Digital filter Fast Fourier transform Cooley–Tukey FFT algorithm Modified discrete cosine transform Digital signal processing Analog-to-digital converter Error Detection/Correction Parity bit Error detection and correction Cyclic redundancy check Hamming code Hamming(7,4) Convolutional code Forward error correction Noisy-channel coding theorem Modulation Signal-to-noise ratio Linear code Noise (electronics) Part 2 - Hardware Hardware Logic family Multi-level cell Flip-flop (electronics) Race condition Binary decision diagram Circuit minimization for Boolean functions Karnaugh map Quine–McCluskey algorithm Integrated circuit design Programmable Logic Standard cell Programmable logic device Field-programmable gate array Complex programmable logic device Application-specific integrated circuit Logic optimization Register-transfer level Floorplan (microelectronics) Hardware description language VHDL Verilog Electronic des" https://en.wikipedia.org/wiki/IEEE%20P1906.1,"The IEEE P1906.1 - Recommended Practice for Nanoscale and Molecular Communication Framework is a standards working group sponsored by the IEEE Communications Society Standards Development Board whose goal is to develop a common framework for nanoscale and molecular communication. Because this is an emerging technology, the standard is designed to encourage innovation by reaching consensus on a common definition, terminology, framework, goals, metrics, and use-cases that encourage innovation and enable the technology to advance at a faster rate. The draft passed an initial sponsor balloting with comments on January 2, 2015. The comments were addressed by the working group and the resulting draft ballot passed again on August 17, 2015. Finally, additional material regarding SBML was contributed and the final draft passed again on October 15, 2015. The draft standard was approved by IEEE RevCom in the final quarter of 2015. Membership Working group membership includes experts in industry and academia with strong backgrounds in mathematical modeling, engineering, physics, economics and biological sciences. Content Electronic components such as transistors, or electrical/electromagnetic message carriers whose operation is similar at the macroscale and nanoscale are excluded from the definition. A human-engineered, synthetic component must form part of the system because it is important to avoid standardizing nature or physical processes. The definition of communication, particularly in the area of cell-surface interactions as viewed by biologists versus non-biologists has been a topic of debate. The interface is viewed as a communication channel, whereas the 'receptor-signaling-gene expression' events are the network. The draft currently comprises: definition, terminology, framework, metrics, use-cases, and reference code (ns-3). The standard provides a very broad foundation and encompasses all approaches to nanoscale communication. 
While there have been many su" https://en.wikipedia.org/wiki/Vibration%20theory%20of%20olfaction,"The vibration theory of smell proposes that a molecule's smell character is due to its vibrational frequency in the infrared range. This controversial theory is an alternative to the more widely accepted docking theory of olfaction (formerly termed the shape theory of olfaction), which proposes that a molecule's smell character is due to a range of weak non-covalent interactions between its protein odorant receptor (found in the nasal epithelium), such as electrostatic and Van der Waals interactions as well as H-bonding, dipole attraction, pi-stacking, metal ion, Cation–pi interaction, and hydrophobic effects, in addition to the molecule's conformation. Introduction The current vibration theory has recently been called the ""swipe card"" model, in contrast with ""lock and key"" models based on shape theory. As proposed by Luca Turin, the odorant molecule must first fit in the receptor's binding site. Then it must have a vibrational energy mode compatible with the difference in energies between two energy levels on the receptor, so electrons can travel through the molecule via inelastic electron tunneling, triggering the signal transduction pathway. The vibration theory is discussed in a popular but controversial book by Chandler Burr. The odor character is encoded in the ratio of activities of receptors tuned to different vibration frequencies, in the same way that color is encoded in the ratio of activities of cone cell receptors tuned to different frequencies of light. An important difference, though, is that the odorant has to be able to become resident in the receptor for a response to be generated. The time an odorant resides in a receptor depends on how strongly it binds, which in turn determines the strength of the response; the odor intensity is thus governed by a similar mechanism to the ""lock and key"" model. For a pure vibrational theory, the differing odors of enantiomers, which possess identical vibrations, cannot be explained. However, once the link betwe" https://en.wikipedia.org/wiki/List%20of%20Mersenne%20primes%20and%20perfect%20numbers,"Mersenne primes and perfect numbers are two deeply interlinked types of natural numbers in number theory. Mersenne primes, named after the friar Marin Mersenne, are prime numbers that can be expressed as for some positive integer . For example, is a Mersenne prime as it is a prime number and is expressible as . The numbers corresponding to Mersenne primes must themselves be prime, although not all primes lead to Mersenne primes—for example, . Meanwhile, perfect numbers are natural numbers that equal the sum of their positive proper divisors, which are divisors excluding the number itself. So, is a perfect number because the proper divisors of are , and , and . There is a one-to-one correspondence between the Mersenne primes and the even perfect numbers. This is due to the Euclid–Euler theorem, partially proved by Euclid and completed by Leonhard Euler: even numbers are perfect if and only if they can be expressed in the form , where is a Mersenne prime. In other words, all numbers that fit that expression are perfect, while all even perfect numbers fit that form. For instance, in the case of , is prime, and is perfect. It is currently an open problem as to whether there are an infinite number of Mersenne primes and even perfect numbers. 
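Because every known even perfect number arises from a Mersenne prime, testing candidate exponents is the central computational task; the standard tool is the Lucas–Lehmer test, sketched below in a straightforward, unoptimized form together with the Euclid–Euler construction of the matching perfect number.

def is_mersenne_prime(p: int) -> bool:
    """Lucas-Lehmer test: for an odd prime exponent p, M_p = 2**p - 1 is prime
    iff s_(p-2) == 0, where s_0 = 4 and s_(k+1) = s_k**2 - 2 (mod M_p)."""
    if p == 2:
        return True            # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Exponents of the first few Mersenne primes, plus one non-example (p = 11).
for p in (2, 3, 5, 7, 11, 13, 17, 19, 31):
    mp = (1 << p) - 1
    if is_mersenne_prime(p):
        perfect = (1 << (p - 1)) * mp    # even perfect number from the Euclid-Euler theorem
        print(f"p = {p:2d}: M_p = {mp} is prime; corresponding perfect number {perfect}")
    else:
        print(f"p = {p:2d}: M_p = {mp} is composite")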
The frequency of Mersenne primes is the subject of the Lenstra–Pomerance–Wagstaff conjecture, which states that the expected number of Mersenne primes less than some given $x$ is $(e^{\gamma}/\log 2) \times \log\log x$, where $e$ is Euler's number, $\gamma$ is Euler's constant, and $\log$ is the natural logarithm. It is also not known if any odd perfect numbers exist; various conditions on possible odd perfect numbers have been proven, including a lower bound of $10^{1500}$. The following is a list of all currently known Mersenne primes and perfect numbers, along with their corresponding exponents $p$. , there are 51 known Mersenne primes (and therefore perfect numbers), the largest 17 of which have been discovered by the distributed computing project Great Internet Mersenne Prime Search, or G" https://en.wikipedia.org/wiki/MERMOZ,"MERMOZ (also, MERMOZ project and Monitoring planEtary suRfaces with Modern pOlarimetric characteriZation) is an astrobiology project designed to remotely detect biosignatures of life. Detection is based on molecular homochirality, a characteristic property of the biochemicals of life. The aim of the project is to remotely identify and characterize life on the planet Earth from space, and to extend this technology to other solar system bodies and exoplanets. The project began in 2018, and is a collaboration of the University of Bern, University of Leiden and Delft University of Technology. According to a member of the research team, “When light is reflected by biological matter, a part of the light's electromagnetic waves will travel in either clockwise or counterclockwise spirals ... This phenomenon is called circular polarization and is caused by the biological matter's homochirality.” These unique spirals of light indicate living materials, whereas non-living materials do not reflect such unique spirals of light, according to the researchers. The research team conducted feasibility studies, using a newly designed detection instrument, based on circular spectropolarimetry, and named FlyPol+ (an upgrade from the original FlyPol), by flying in a helicopter at an altitude of and velocity of for 25 minutes. The results were successful in remotely detecting living material, and quickly (within seconds) distinguishing living material from non-living material. The researchers concluded: ""Circular spectropolarimetry can be a powerful technique to detect life beyond Earth, and we emphasize the potential of utilizing circular spectropolarimetry as a remote sensing tool to characterize and monitor in detail the vegetation physiology and terrain features of Earth itself."" The researchers next expect to scan the Earth from the International Space Station (ISS) with their detection instruments. One consequence of further successful studies is a possible pathfinder space m
In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function. The term non-local variable is often a synonym in this context. An instance of a variable symbol is bound, in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as, ""...where is a positive integer"".) A variable symbol overall is bound if at least one occurrence of it is bound.pp.142--143 Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound,p.78 hence ""free"" and ""bound"" are at first defined for occurrences and then generalized over all occurrences of said variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value be a truth value or the numerical result of a calculation, or, more generally, an element of an image set of a function. While the domain of discourse in many contexts is understood, when an explicit range of values for the bou" https://en.wikipedia.org/wiki/Phenology,"Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). Examples include the date of emergence of leaves and flowers, the first flight of butterflies, the first appearance of migratory birds, the date of leaf colouring and fall in deciduous trees, the dates of egg-laying of birds and amphibia, or the timing of the developmental cycles of temperate-zone honey bee colonies. In the scientific literature on ecology, the term is used more generally to indicate the time frame for any seasonal biological phenomena, including the dates of last appearance (e.g., the seasonal phenology of a species may be from April through September). Because many such phenomena are very sensitive to small variations in climate, especially to temperature, phenological records can be a useful proxy for temperature in historical climatology, especially in the study of climate change and global warming. For example, viticultural records of grape harvests in Europe have been used to reconstruct a record of summer growing season temperatures going back more than 500 years. In addition to providing a longer historical baseline than instrumental measurements, phenological observations provide high temporal resolution of ongoing changes related to global warming. Etymology The word is derived from the Greek φαίνω (phainō), ""to show, to bring to light, make to appear"" + λόγος (logos), amongst others ""study, discourse, reasoning"" and indicates that phenology has been principally concerned with the dates of first occurrence of biological events in their annual cycle. The term was first used by Charles François Antoine Morren, a professor of botany at the University of Liège (Belgium). Morren was a student of Adolphe Quetelet. Quetelet made plant phenological observations at the Royal Observatory of Belgium in Brussels. 
He is considered ""one of 19th century t" https://en.wikipedia.org/wiki/Copulas%20in%20signal%20processing,"A copula is a mathematical function that provides a relationship between marginal distributions of random variables and their joint distributions. Copulas are important because it represents a dependence structure without using marginal distributions. Copulas have been widely used in the field of finance, but their use in signal processing is relatively new. Copulas have been employed in the field of wireless communication for classifying radar signals, change detection in remote sensing applications, and EEG signal processing in medicine. In this article, a short introduction to copulas is presented, followed by a mathematical derivation to obtain copula density functions, and then a section with a list of copula density functions with applications in signal processing. Introduction Using Sklar's theorem, a copula can be described as a cumulative distribution function (CDF) on a unit-space with uniform marginal distributions on the interval (0, 1). The CDF of a random variable X is the probability that X will take a value less than or equal to x when evaluated at x itself. A copula can represent a dependence structure without using marginal distributions. Therefore, it is simple to transform the uniformly distributed variables of copula (u, v, and so on) into the marginal variables (x, y, and so on) by the inverse marginal cumulative distribution function. Using the chain rule, copula distribution function can be partially differentiated with respect to the uniformly distributed variables of copula, and it is possible to express the multivariate probability density function (PDF) as a product of a multivariate copula density function and marginal PDF''s. The mathematics for converting a copula distribution function into a copula density function is shown for a bivariate case, and a family of copulas used in signal processing are listed in a TABLE 1. Mathematical derivation For any two random variables X and Y, the continuous joint probability distribution functi" https://en.wikipedia.org/wiki/Klepton,"In biology, a klepton (abbr. kl.) and synklepton (abbr sk.) is a species that requires input from another biological taxon (normally from a species which is closely related to the kleptonic species) to complete its reproductive cycle. Specific types of kleptons are zygokleptons, which reproduce by zygogenesis; gynokleptons which reproduce by gynogenesis, and tychokleptons, which reproduce by a combination of both systems. Kleptogenic reproduction results in three potential outcomes. A unisexual female may simply activate cell division in the egg through the presence of a male's sperm without incorporating any of his genetic material—this results in the production of clonal offspring. The female may also incorporate the male's sperm into her egg, but can do so without excising any of her genetic material. This results in increased ploidy levels that range from triploid to pentaploid in wild individuals. Finally, the female also has the option of replacing some of her genetic material with that of the male's, resulting in a ""hybrid"" of sorts without increasing ploidy. Etymology The term is derived from the (Ancient or Modern) Greek κλέπτ(ης) (klépt(ēs), “thief”) + -on, after taxon, or kleptein, ""to steal"". A klepton ""steals"" from an exemplar of another species in order to reproduce. In a paper entitled ""Taxonomy of Parthenogenetic Species of Hybrid Origin"", Charles J. 
Cole argues that the thief motif closely parallels the behaviour of certain reptiles. Examples Salamander species In the wild, five species of Ambystoma salamanders contribute to a unisexual complex that reproduces via a combination of gynogenesis and kleptogenesis: A. tigrinum, A. barbouri, A. texanum, A. jeffersonium, and A. laterale. Over twenty genomic combinations have been found in nature, ranging from ""LLJ"" individuals (two A. laterale and an A. jeffersonium genome) to ""LJTi"" individuals (an A. laterale, A. jeffersonium, and an A. tigrinum genome). Every combination, however, contains the gen" https://en.wikipedia.org/wiki/Pi%20%28art%20project%29,"Pi is the name of a multimedia installation in the vicinity of the Viennese Karlsplatz. Pi is located in the Opernpassage between the entrance to the subway and the subway stop in Secession near the Naschmarkt. The individual behind the project was the Canadian artist Ken Lum from Vancouver. Pi, under construction from January 2005 to November 2006 and opened in December 2006, consists of statistical information and a representation of π to 478 decimal places. A more recent project is the calculation of the decimal places of π, indicating the importance of the eponymous media for installation of their number and infinity. The exhibit is 130 meters long. In addition to the number pi, there is a total of 16 factoids of reflective display cases that convey a variety of statistical data in real time. Apart from the World population there are also topics such as the worldwide number of malnourished children and the growth of Sahara since the beginning of the year. Even less serious issues such as the number of eaten Wiener Schnitzels in Vienna of the given year and the current number of lovers in Vienna are represented. In the middle of the passage standing there is a glass case with images, texts and books on the subjects of population and migration. The scientific data were developed jointly by Ken Lum and the . ""Pi"" is to show that contemporary art is in a position to connect art to science, architecture and sociology. The aim of this project was to transform the Karlsplatz into a ""vibrant place to meet, with communicative artistic brilliance.""" https://en.wikipedia.org/wiki/Integraph,"An Integraph is a mechanical analog computing device for plotting the integral of a graphically defined function. History Gaspard-Gustave de Coriolis first described the fundamental principal of a mechanical integraph in 1836 in the Journal de Mathématiques Pures et Appliquées. A full description of an integraph was published independently around 1880 by both British physicist Sir Charles Vernon Boys and Bruno Abdank-Abakanowicz, a Polish-Lithuanian mathematician/electrical engineer. Boys described a design for an integraph in 1881 in the Philosophical Magazine. Abakanowicz developed a practical working prototype in 1878, with improved versions of the prototype being manufactured by firms such as Coradi in Zürich, Switzerland. Customized and further improved versions of Abakanowicz's design were manufactured until well after 1900, with these later modifications being made by Abakanowicz in collaboration M. D. Napoli, the ""principal inspector of the railroad Chemin de Fer de l’Est and head of its testing laboratory"". Description The input to the integraph is a tracing point that is the guiding point that traces the differential curve. The output is defined by the path a disk that rolls along the paper without slipping takes. 
The mechanism sets the angle of the output disk based on the position of the input curve: if the input is zero, the disk is angled to roll straight, parallel to the x axis on the Cartesian plane. If the input is above zero the disk is angled slightly toward the positive y direction, such that the y value of its position increases as it rolls in that direction. If the input is below zero, the disk is angled the other way such that its y position decreases as it rolls. The hardware consists of a rectangular carriage which moves left to right on rollers. Two sides of the carriage run parallel to the x axis. The other two sides are parallel to the y axis. Along the trailing vertical (y axis) rail slides a smaller carriage holding a tracing point." https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Tenenbaum%E2%80%93Ford%20constant,"The Erdős–Tenenbaum–Ford constant is a mathematical constant that appears in number theory. Named after mathematicians Paul Erdős, Gérald Tenenbaum, and Kevin Ford, it is defined as where is the natural logarithm. Following up on earlier work by Tenenbaum, Ford used this constant in analyzing the number of integers that are at most and that have a divisor in the range . Multiplication table problem For each positive integer , let be the number of distinct integers in an multiplication table. In 1960, Erdős studied the asymptotic behavior of and proved that as ." https://en.wikipedia.org/wiki/Power-line%20communication,"Power-line communication (also known as power-line carrier), abbreviated as PLC, carries data on a conductor that is also used simultaneously for AC electric power transmission or electric power distribution to consumers. In the past, powerlines were solely used for transmitting electricity. But with the advent of advanced networking technologies, including broadband, there's a push for utility and service providers to find cost-effective and high-performance solutions. It's only recently that businesses have started to seriously consider using powerlines for data networking. The possibility of using powerlines as a universal medium to transmit not just electricity or control signals, but also high-speed data and multimedia, is now under investigation. A wide range of power-line communication technologies are needed for different applications, ranging from home automation to Internet access which is often called broadband over power lines (BPL). Most PLC technologies limit themselves to one type of wires (such as premises wiring within a single building), but some can cross between two levels (for example, both the distribution network and premises wiring). Typically transformers prevent propagating the signal, which requires multiple technologies to form very large networks. Various data rates and frequencies are used in different situations. A number of difficult technical problems are common between wireless and power-line communication, notably those of spread spectrum radio signals operating in a crowded environment. Radio interference, for example, has long been a concern of amateur radio groups. Basics Power-line communications systems operate by adding a modulated carrier signal to the wiring system. Different types of power-line communications use different frequency bands. 
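The multiplication table problem mentioned in the Erdős–Tenenbaum–Ford excerpt above is easy to probe numerically. Below is a short sketch (the function name is mine) that counts M(n), the number of distinct entries in an n-by-n multiplication table; the classical result is that M(n)/n^2 tends to 0 as n grows.

def distinct_products(n):
    # M(n): distinct values of i*j with 1 <= i, j <= n
    return len({i * j for i in range(1, n + 1) for j in range(1, n + 1)})

for n in (10, 100, 1000):
    m = distinct_products(n)
    print(n, m, round(m / n**2, 4))   # the ratio M(n)/n^2 shrinks as n increases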
Since the power distribution system was originally intended for transmission of AC power at typical frequencies of 50 or 60 Hz, power wire circuits have only a limited ability to" https://en.wikipedia.org/wiki/Compact%20Model%20Coalition,"The Compact Model Coalition (formerly the Compact Model Council) is a working group in the Electronic Design Automation industry formed to choose, maintain and promote the use of standard semiconductor device models. Commercial and industrial analog simulators (such as SPICE) need to add device models as technology advances (see Moore's law) and earlier models become inaccurate. Before this group was formed, new transistor models were largely proprietary, which severely limited the choice of simulators that could be used. It was formed in August, 1996, for the purpose developing and standardizing the use and implementation of SPICE models and the model interfaces. In May 2013, the Silicon Integration Initiative (Si2) and TechAmerica announced the transfer of the Compact Model Council to Si2 and a renaming to Compact Model Coalition. New models are submitted to the Coalition, where their technical merits are discussed, and then potential standard models are voted on. Some of the models supported by the Compact Modeling Coalition include: BSIM3, a MOSFET model from UC Berkeley (see BSIM). BSIM4, a more modern MOSFET model, also from UC Berkeley. PSP, another MOSFET model. PSP originally stood for Penn State-Philips, but one author moved to ASU, and Philips spun off their semiconductor group as NXP Semiconductors. PSP is now developed and supported at CEA-Leti. BSIMSOI, a model for silicon on insulator MOSFETs. L-UTSOI, a model for fully-depleted silicon on insulator MOSFETs, developed and supported by CEA-Leti. HICUM or HIgh CUrrent Model for bipolar transistors, from CEDIC, Dresden University of Technology, Germany, and UC San Diego, USA. MEXTRAM, a compact model for bipolar transistors that aims to support the design of bipolar transistor circuits at high frequencies in Si and SiGe based process technologies. MEXTRAM was originally developed at NXP Semiconductors and is now developed and supported at Auburn University. ASM-HEMT, and MVSG, the newest standard " https://en.wikipedia.org/wiki/Host%20system,"Host system is any networked computer that provides services to other systems or users. These services may include printer, web or database access. Host system is a computer on a network, which provides services to users or other computers on that network. Host system usually runs a multi-user operating system such as Unix, MVS or VMS, or at least an operating system with network services such as Windows. Computer networking fr:Système hôte" https://en.wikipedia.org/wiki/Cognitive%20hearing%20science,"Cognitive hearing science is an interdisciplinary science field concerned with the physiological and cognitive basis of hearing and its interplay with signal processing in hearing aids. The field includes genetics, physiology, medical and technical audiology, cognitive neuroscience, cognitive psychology, linguistics and social psychology. Theoretically the research in cognitive hearing science combines a physiological model for the information transfer from the outer auditory organ to the auditory cerebral cortex, and a cognitive model for how language comprehension is influenced by the interplay between the incoming language signal and the individual's cognitive skills, especially the long-term memory and the working memory. 
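A toy numerical sketch of the power-line communication basics described above: a small, high-frequency modulated carrier superimposed on the 50 Hz mains waveform. All parameters here (100 kHz carrier, on-off keying, amplitudes) are illustrative assumptions, not taken from the source.

import numpy as np

fs = 1_000_000                                 # sample rate in Hz (assumed)
bits = np.array([1, 0, 1, 1, 0])               # data to transmit
samples_per_bit = 2000
t = np.arange(bits.size * samples_per_bit) / fs

mains = 325 * np.sin(2 * np.pi * 50 * t)       # the 50 Hz power waveform itself
carrier = np.sin(2 * np.pi * 100_000 * t)      # 100 kHz carrier (arbitrary choice)
keying = np.repeat(bits, samples_per_bit)      # on-off keying of the carrier by the data bits

line_voltage = mains + 5.0 * keying * carrier  # data rides on the mains as a small HF signal
print(line_voltage.shape)                      # (10000,) samples of the combined waveform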
Researchers examine the interplay between type of hearing impairment or deafness, type of signal processing in different hearing aids, type of listening environment and the individual's cognitive skills. Research in cognitive hearing science has importance for the knowledge about different types of hearing impairment and its effects, as for the possibilities to determine which individuals can make use of certain type of signal processing in hearing aid or cochlear implant and thereby adapt hearing aid to the individual. Cognitive hearing science has been introduced by researchers at the Linköping University research centre Linnaeus Centre HEAD (HEaring And Deafness) in Sweden, created in 2008 with a major 10-year grant from the Swedish Research Council." https://en.wikipedia.org/wiki/Unreasonable%20ineffectiveness%20of%20mathematics,"The unreasonable ineffectiveness of mathematics is a phrase that alludes to the article by physicist Eugene Wigner, ""The Unreasonable Effectiveness of Mathematics in the Natural Sciences"". This phrase is meant to suggest that mathematical analysis has not proved as valuable in other fields as it has in physics. Life sciences I. M. Gelfand, a mathematician who worked in biomathematics and molecular biology, as well as many other fields in applied mathematics, is quoted as stating, Eugene Wigner wrote a famous essay on the unreasonable effectiveness of mathematics in natural sciences. He meant physics, of course. There is only one thing which is more unreasonable than the unreasonable effectiveness of mathematics in physics, and this is the unreasonable ineffectiveness of mathematics in biology. An opposing view is given by Leonard Adleman, a theoretical computer scientist who pioneered the field of DNA computing. In Adleman's view, ""Sciences reach a point where they become mathematized,"" starting at the fringes but eventually ""the central issues in the field become sufficiently understood that they can be thought about mathematically. It occurred in physics about the time of the Renaissance; it began in chemistry after John Dalton developed atomic theory"" and by the 1990s was taking place in biology. By the early 1990s, ""Biology was no longer the science of things that smelled funny in refrigerators (my view from undergraduate days in the 1960s). The field was undergoing a revolution and was rapidly acquiring the depth and power previously associated exclusively with the physical sciences. Biology was now the study of information stored in DNA - strings of four letters: A, T, G, and C and the transformations that information undergoes in the cell. There was mathematics here!"" Economics and finance K. Vela Velupillai wrote of The unreasonable ineffectiveness of mathematics in economics. To him ""the headlong rush with which economists have equipped themselves with " https://en.wikipedia.org/wiki/PHI-base,"https://canto.phi-base.org/ The Pathogen-Host Interactions database (PHI-base) is a biological database that contains curated information on genes experimentally proven to affect the outcome of pathogen-host interactions. The database is maintained by researchers at Rothamsted Research, together with external collaborators since 2005. Since April 2017 PHI-base is part of ELIXIR, the European life-science infrastructure for biological information via its ELIXIR-UK node. 
Background The Pathogen-Host Interactions database was developed to utilise effectively the growing number of verified genes that mediate an organism's ability to cause disease and / or to trigger host responses. The web-accessible database catalogues experimentally verified pathogenicity, virulence and effector genes from bacterial, fungal and oomycete pathogens which infect animal, plant and fungal hosts. PHI-base is the first on-line resource devoted to the identification and presentation of information on fungal and oomycete pathogenicity genes and their host interactions. As such, PHI-base aims to be a resource for the discovery of candidate targets in medically and agronomically important fungal and oomycete pathogens for intervention with synthetic chemistries and natural products (fungicides). Each entry in PHI-base is curated by domain experts and supported by strong experimental evidence (gene disruption experiments) as well as literature references in which the experiments are described. Each gene in PHI-base is presented with its nucleotide and deduced amino acid sequence as well as a detailed structured description of the predicted protein's function during the host infection process. To facilitate data interoperability, genes are annotated using controlled vocabularies (Gene Ontology terms, EC Numbers, etc.), and links to other external data sources such as UniProt, EMBL and the NCBI taxonomy services. Current developments Version 4.15 (May 2, 2023) of PHI-base provides informa" https://en.wikipedia.org/wiki/Molybdovanadate%20reagent,"The molybdovanadate reagent is a solution containing both the molybdate and vanadate ions. It is commonly used in the determination of phosphate ion content. The reagent used is ammonium molybdovanadate with the addition of 70% perchloric acid (sulfuric acid is also known to be used). It is used for purposes such as the analysis of wine, canned fruits and other fruit-based products such as jams and syrups. Physical properties The reagent appears as a clear, yellow liquid without odour. It is harmful if inhaled, a recognised carcinogen and can cause eye burns." https://en.wikipedia.org/wiki/Hybrid%20%28biology%29,"In biology, a hybrid is the offspring resulting from combining the qualities of two organisms of different varieties, species or genera through sexual reproduction. Generally, it means that each cell has genetic material from two different organisms, whereas an individual where some cells are derived from a different organism is called a chimera. Hybrids are not always intermediates between their parents (such as in blending inheritance), but can show hybrid vigor, sometimes growing larger or taller than either parent. The concept of a hybrid is interpreted differently in animal and plant breeding, where there is interest in the individual parentage. In genetics, attention is focused on the numbers of chromosomes. In taxonomy, a key question is how closely related the parent species are. Species are reproductively isolated by strong barriers to hybridization, which include genetic and morphological differences, differing times of fertility, mating behaviors and cues, and physiological rejection of sperm cells or the developing embryo. Some act before fertilization and others after it. Similar barriers exist in plants, with differences in flowering times, pollen vectors, inhibition of pollen tube growth, somatoplastic sterility, cytoplasmic-genic male sterility and the structure of the chromosomes. 
A few animal species and many plant species, however, are the result of hybrid speciation, including important crop plants such as wheat, where the number of chromosomes has been doubled. Human impact on the environment has resulted in an increase in the interbreeding between regional species, and the proliferation of introduced species worldwide has also resulted in an increase in hybridization. This genetic mixing may threaten many species with extinction, while genetic erosion from monoculture in crop plants may be damaging the gene pools of many species for future breeding. A form of often intentional human-mediated hybridization is the crossing of wild and domestic" https://en.wikipedia.org/wiki/Generalized%20inverse,"In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix . A matrix is a generalized inverse of a matrix if A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse. Motivation Consider the linear system where is an matrix and the column space of . If is nonsingular (which implies ) then will be the solution of the system. Note that, if is nonsingular, then Now suppose is rectangular (), or square and singular. Then we need a right candidate of order such that for all That is, is a solution of the linear system . Equivalently, we need a matrix of order such that Hence we can define the generalized inverse as follows: Given an matrix , an matrix is said to be a generalized inverse of if The matrix has been termed a regular inverse of by some authors. Types Important types of generalized inverse include: One-sided inverse (right inverse or left inverse) Right inverse: If the matrix has dimensions and , then there exists an matrix called the right inverse of such that , where is the identity matrix. Left inverse: If the matrix has dimensions and , then there exists an matrix called the left inverse of such that , where is the identity matrix. Bott–Duffin inverse Drazin inverse Moore–Penrose inverse Some generalized inverses are defined and classified based on the Penrose conditions: where denotes conjugate transpose" https://en.wikipedia.org/wiki/Percentage%20point,"A percentage point or percent point is the unit for the arithmetic difference between two percentages. For example, moving up from 40 percent to 44 percent is an increase of 4 percentage points (although it is a 10-percent increase in the quantity being measured, if the total amount remains the same). In written text, the unit (the percentage point) is usually either written out, or abbreviated as pp, p.p., or %pt. to avoid confusion with percentage increase or decrease in the actual quantity. After the first occurrence, some writers abbreviate by using just ""point"" or ""points"". Differences between percentages and percentage points Consider the following hypothetical example: In 1980, 50 percent of the population smoked, and in 1990 only 40 percent of the population smoked. 
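A quick numerical check of the defining property in the generalized-inverse excerpt above, namely that G is a generalized inverse of A when AGA = A. The Moore–Penrose pseudoinverse (one of the types listed there) is used here only as a convenient concrete choice.

import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])         # rank 1 and rectangular, so no ordinary inverse exists
G = np.linalg.pinv(A)                # a particular generalized inverse of A

print(np.allclose(A @ G @ A, A))     # True: the generalized-inverse condition AGA = A holds
print(np.allclose(G @ A @ G, G))     # True: pinv satisfies a further Penrose condition as well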
One can thus say that from 1980 to 1990, the prevalence of smoking decreased by 10 percentage points (or by 10 percent of the population) or by 20 percent when talking about smokers only – percentages indicate proportionate part of a total. Percentage-point differences are one way to express a risk or probability. Consider a drug that cures a given disease in 70 percent of all cases, while without the drug, the disease heals spontaneously in only 50 percent of cases. The drug reduces absolute risk by 20 percentage points. Alternatives may be more meaningful to consumers of statistics, such as the reciprocal, also known as the number needed to treat (NNT). In this case, the reciprocal transform of the percentage-point difference would be 1/(20pp) = 1/0.20 = 5. Thus if 5 patients are treated with the drug, one could expect to cure one more patient than would have occurred in the absence of the drug. For measurements involving percentages as a unit, such as, growth, yield, or ejection fraction, statistical deviations and related descriptive statistics, including the standard deviation and root-mean-square error, the result should be expressed in units of percentage points instead of percentage" https://en.wikipedia.org/wiki/Biological%20rules,"A biological rule or biological law is a generalized law, principle, or rule of thumb formulated to describe patterns observed in living organisms. Biological rules and laws are often developed as succinct, broadly applicable ways to explain complex phenomena or salient observations about the ecology and biogeographical distributions of plant and animal species around the world, though they have been proposed for or extended to all types of organisms. Many of these regularities of ecology and biogeography are named after the biologists who first described them. From the birth of their science, biologists have sought to explain apparent regularities in observational data. In his biology, Aristotle inferred rules governing differences between live-bearing tetrapods (in modern terms, terrestrial placental mammals). Among his rules were that brood size decreases with adult body mass, while lifespan increases with gestation period and with body mass, and fecundity decreases with lifespan. Thus, for example, elephants have smaller and fewer broods than mice, but longer lifespan and gestation. Rules like these concisely organized the sum of knowledge obtained by early scientific measurements of the natural world, and could be used as models to predict future observations. Among the earliest biological rules in modern times are those of Karl Ernst von Baer (from 1828 onwards) on embryonic development, and of Constantin Wilhelm Lambert Gloger on animal pigmentation, in 1833. There is some scepticism among biogeographers about the usefulness of general rules. For example, J.C. Briggs, in his 1987 book Biogeography and Plate Tectonics, comments that while Willi Hennig's rules on cladistics ""have generally been helpful"", his progression rule is ""suspect"". List of biological rules Allen's rule states that the body shapes and proportions of endotherms vary by climatic temperature by either minimizing exposed surface area to minimize heat loss in cold climates or maximizing ex" https://en.wikipedia.org/wiki/Mathemagician,"A mathemagician is a mathematician who is also a magician. The term ""mathemagic"" is believed to have been introduced by Royal Vale Heath with his 1933 book ""Mathemagic"". 
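The percentage-point and number-needed-to-treat arithmetic in the drug example above, written out as a tiny calculation:

cure_with_drug = 0.70     # 70 percent of cases cured with the drug
cure_without = 0.50       # 50 percent heal spontaneously

arr = cure_with_drug - cure_without   # absolute risk reduction: 0.20, i.e. 20 percentage points
nnt = 1 / arr                         # number needed to treat
print(round(arr * 100), "percentage points; NNT =", round(nnt))   # 20 percentage points; NNT = 5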
The name ""mathemagician"" was probably first applied to Martin Gardner, but has since been used to describe many mathematician/magicians, including Arthur T. Benjamin, Persi Diaconis, and Colm Mulcahy. Diaconis has suggested that the reason so many mathematicians are magicians is that ""inventing a magic trick and inventing a theorem are very similar activities."" Mathemagician is a neologism, specifically a portmanteau, that combines mathematician and magician. A great number of self-working mentalism tricks rely on mathematical principles. Max Maven often utilizes this type of magic in his performance. The Mathemagician is the name of a character in the 1961 children's book The Phantom Tollbooth. He is the ruler of Digitopolis, the kingdom of mathematics. Notable mathemagicians Arthur T. Benjamin Jin Akiyama Persi Diaconis Richard Feynman Karl Fulves Martin Gardner Ronald Graham Royal Vale Heath Colm Mulcahy Raymond Smullyan W. W. Rouse Ball Alex Elmsley" https://en.wikipedia.org/wiki/Glossary%20of%20power%20electronics,"This glossary of power electronics is a list of definitions of terms and concepts related to power electronics in general and power electronic capacitors in particular. For more definitions in electric engineering, see Glossary of electrical and electronics engineering. For terms related to engineering in general, see Glossary of engineering. The glossary terms fit in the following categories in power electronics: Electronic power converters; converters, rectifiers, inverters, filters. Electronic power switches and electronic AC power converters; switches and controllers. Essential components of electric power equipment; device, stack, assembly, reactor, capacitor, transformer, AC filter, DC filter, snubber circuit. Circuits and circuit elements of power electronic equipment; arms and connections. Operations within power electronic equipment; commutations, quenchings, controls, angles, factors, states, directions, intervals, periods, frequencies, voltages, breakthroughs and failures, breakdowns, blocking and flows. Properties of power electronic equipment Characteristic curves of power electronic equipment Power supplies A B C D E F H I J L M N O P Q R S T U V Overview of electronic power converters See also Glossary of engineering Glossary of civil engineering Glossary of mechanical engineering Glossary of structural engineering Notes" https://en.wikipedia.org/wiki/Index%20set,"In mathematics, an index set is a set whose members label (or index) members of another set. For instance, if the elements of a set may be indexed or labeled by means of the elements of a set , then is an index set. The indexing consists of a surjective function from onto , and the indexed collection is typically called an indexed family, often written as . Examples An enumeration of a set gives an index set , where is the particular enumeration of . Any countably infinite set can be (injectively) indexed by the set of natural numbers . For , the indicator function on is the function given by The set of all such indicator functions, , is an uncountable set indexed by . Other uses In computational complexity theory and cryptography, an index set is a set for which there exists an algorithm that can sample the set efficiently; e.g., on input , can efficiently select a poly(n)-bit long element from the set. See also Friendly-index set" https://en.wikipedia.org/wiki/Phototroph,"Phototrophs () are organisms that carry out photon capture to produce complex organic compounds (e.g. 
carbohydrates) and acquire energy. They use the energy from light to carry out various cellular metabolic processes. It is a common misconception that phototrophs are obligatorily photosynthetic. Many, but not all, phototrophs often photosynthesize: they anabolically convert carbon dioxide into organic material to be utilized structurally, functionally, or as a source for later catabolic processes (e.g. in the form of starches, sugars and fats). All phototrophs either use electron transport chains or direct proton pumping to establish an electrochemical gradient which is utilized by ATP synthase, to provide the molecular energy currency for the cell. Phototrophs can be either autotrophs or heterotrophs. If their electron and hydrogen donors are inorganic compounds (e.g., , as in some purple sulfur bacteria, or , as in some green sulfur bacteria) they can be also called lithotrophs, and so, some photoautotrophs are also called photolithoautotrophs. Examples of phototroph organisms are Rhodobacter capsulatus, Chromatium, and Chlorobium. History Originally used with a different meaning, the term took its current definition after Lwoff and collaborators (1946). Photoautotroph Most of the well-recognized phototrophs are autotrophic, also known as photoautotrophs, and can fix carbon. They can be contrasted with chemotrophs that obtain their energy by the oxidation of electron donors in their environments. Photoautotrophs are capable of synthesizing their own food from inorganic substances using light as an energy source. Green plants and photosynthetic bacteria are photoautotrophs. Photoautotrophic organisms are sometimes referred to as holophytic. Oxygenic photosynthetic organisms use chlorophyll for light-energy capture and oxidize water, ""splitting"" it into molecular oxygen. Ecology In an ecological context, phototrophs are often the food source for neighboring he" https://en.wikipedia.org/wiki/Blue%20team%20%28computer%20security%29,"A blue team is a group of individuals who perform an analysis of information systems to ensure security, identify security flaws, verify the effectiveness of each security measure, and to make certain all security measures will continue to be effective after implementation. History As part of the United States computer security defense initiative, red teams were developed to exploit other malicious entities that would do them harm. As a result, blue teams were developed to design defensive measures against such red team activities. Incident response If an incident does occur within the organization, the blue team will perform the following six steps to handle the situation: Preparation Identification Containment Eradication Recovery Lessons learned Operating system hardening In preparation for a computer security incident, the blue team will perform hardening techniques on all operating systems throughout the organization. Perimeter defense The blue team must always be mindful of the network perimeter, including traffic flow, packet filtering, proxy firewalls, and intrusion detection systems. Tools Blue teams employ a wide range of tools allowing them to detect an attack, collect forensic data, perform data analysis and make changes to threat future attacks and mitigate threats. The tools include: Log management and analysis AlienVault FortiSIEM (a.k.a. 
AccelOps) Graylog InTrust LogRhythm Microsoft Sentinel NetWitness Qradar (IBM) Rapid7 SIEMonster SolarWinds Splunk Security information and event management (SIEM) technology SIEM software supports threat detection and security incident response by performing real-time data collection and analysis of security events. This type of software also uses data sources outside of the network including indicators of compromise (IoC) threat intelligence. See also List of digital forensics tools Vulnerability management White hat (computer security) Red team" https://en.wikipedia.org/wiki/List%20of%20eponyms%20of%20special%20functions,"This is a list of special function eponyms in mathematics, to cover the theory of special functions, the differential equations they satisfy, named differential operators of the theory (but not intended to include every mathematical eponym). Named symmetric functions, and other special polynomials, are included. A Niels Abel: Abel polynomials - Abelian function - Abel–Gontscharoff interpolating polynomial Sir George Biddell Airy: Airy function Waleed Al-Salam (1926–1996): Al-Salam polynomial - Al Salam–Carlitz polynomial - Al Salam–Chihara polynomial C. T. Anger: Anger–Weber function Kazuhiko Aomoto: Aomoto–Gel'fand hypergeometric function - Aomoto integral Paul Émile Appell (1855–1930): Appell hypergeometric series, Appell polynomial, Generalized Appell polynomials Richard Askey: Askey–Wilson polynomial, Askey–Wilson function (with James A. Wilson) B Ernest William Barnes: Barnes G-function E. T. Bell: Bell polynomials Bender–Dunne polynomial Jacob Bernoulli: Bernoulli polynomial Friedrich Bessel: Bessel function, Bessel–Clifford function H. Blasius: Blasius functions R. P. Boas, R. C. Buck: Boas–Buck polynomial Böhmer integral Erland Samuel Bring: Bring radical de Bruijn function Buchstab function Burchnall, Chaundy: Burchnall–Chaundy polynomial C Leonard Carlitz: Carlitz polynomial Arthur Cayley, Capelli: Cayley–Capelli operator Celine's polynomial Charlier polynomial Pafnuty Chebyshev: Chebyshev polynomials Elwin Bruno Christoffel, Darboux: Christoffel–Darboux relation Cyclotomic polynomials D H. G. Dawson: Dawson function Charles F. Dunkl: Dunkl operator, Jacobi–Dunkl operator, Dunkl–Cherednik operator Dickman–de Bruijn function E Engel: Engel expansion Erdélyi Artúr: Erdelyi–Kober operator Leonhard Euler: Euler polynomial, Eulerian integral, Euler hypergeometric integral F V. N. Faddeeva: Faddeeva function (also known as the complex error function; see error function) G C. F. Gauss: Gaussian polynomial, Gaussian distribution, etc. Leopold Bernhar" https://en.wikipedia.org/wiki/Representation%20theorem,"In mathematics, a representation theorem is a theorem that states that every abstract structure with certain properties is isomorphic to another (abstract or concrete) structure. Examples Algebra Cayley's theorem states that every group is isomorphic to a permutation group. Representation theory studies properties of abstract groups via their representations as linear transformations of vector spaces. Stone's representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to a field of sets. A variant, Stone's representation theorem for distributive lattices, states that every distributive lattice is isomorphic to a sublattice of the power set lattice of some set. 
Another variant, Stone's duality, states that there exists a duality (in the sense of an arrow-reversing equivalence) between the categories of Boolean algebras and that of Stone spaces. The Poincaré–Birkhoff–Witt theorem states that every Lie algebra embeds into the commutator Lie algebra of its universal enveloping algebra. Ado's theorem states that every finite-dimensional Lie algebra over a field of characteristic zero embeds into the Lie algebra of endomorphisms of some finite-dimensional vector space. Birkhoff's HSP theorem states that every model of an algebra A is the homomorphic image of a subalgebra of a direct product of copies of A. In the study of semigroups, the Wagner–Preston theorem provides a representation of an inverse semigroup S, as a homomorphic image of the set of partial bijections on S, and the semigroup operation given by composition. Category theory The Yoneda lemma provides a full and faithful limit-preserving embedding of any category into a category of presheaves. Mitchell's embedding theorem for abelian categories realises every small abelian category as a full (and exactly embedded) subcategory of a category of modules over some ring. Mostowski's collapsing theorem states that every well-founded extensional structure is isomorphic t" https://en.wikipedia.org/wiki/List%20of%20random%20number%20generators,"Random number generators are important in many kinds of technical applications, including physics, engineering or mathematical computer studies (e.g., Monte Carlo simulations), cryptography and gambling (on game servers). This list includes many common types, regardless of quality or applicability to a given use case. Pseudorandom number generators (PRNGs) The following algorithms are pseudorandom number generators. Cryptographic algorithms Cipher algorithms and cryptographic hashes can be used as very high-quality pseudorandom number generators. However, generally they are considerably slower (typically by a factor 2–10) than fast, non-cryptographic random number generators. These include: Stream ciphers. Popular choices are Salsa20 or ChaCha (often with the number of rounds reduced to 8 for speed), ISAAC, HC-128 and RC4. Block ciphers in counter mode. Common choices are AES (which is very fast on systems supporting it in hardware), TwoFish, Serpent and Camellia. Cryptographic hash functions A few cryptographically secure pseudorandom number generators do not rely on cipher algorithms but try to link mathematically the difficulty of distinguishing their output from a `true' random stream to a computationally difficult problem. These approaches are theoretically important but are too slow to be practical in most applications. They include: Blum–Micali algorithm (1984) Blum Blum Shub (1986) Naor–Reingold pseudorandom function (1997) Random number generators that use external entropy These approaches combine a pseudo-random number generator (often in the form of a block or stream cipher) with an external source of randomness (e.g., mouse movements, delay between keyboard presses etc.). /dev/random – Unix-like systems CryptGenRandom – Microsoft Windows Fortuna RDRAND instructions (called Intel Secure Key by Intel), available in Intel x86 CPUs since 2012. They use the AES generator built into the CPU, reseeding it periodically. 
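A toy version of Blum Blum Shub, one of the number-theoretic generators named in the list above. The primes p = 11 and q = 23 (both congruent to 3 mod 4) and the seed are textbook-sized illustrations only; a real generator needs very large primes and a properly chosen seed.

def bbs_bits(seed, p=11, q=23, nbits=16):
    m = p * q                    # the Blum modulus M = p*q
    x = seed % m
    out = []
    for _ in range(nbits):
        x = (x * x) % m          # x_{n+1} = x_n^2 mod M
        out.append(x & 1)        # emit the least-significant bit of each state
    return out

print(bbs_bits(3))               # 16 pseudorandom bits from seed 3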
True Random Number Gen" https://en.wikipedia.org/wiki/Complete%20set%20of%20invariants,"In mathematics, a complete set of invariants for a classification problem is a collection of maps (where is the collection of objects being classified, up to some equivalence relation , and the are some sets), such that if and only if for all . In words, such that two objects are equivalent if and only if all invariants are equal. Symbolically, a complete set of invariants is a collection of maps such that is injective. As invariants are, by definition, equal on equivalent objects, equality of invariants is a necessary condition for equivalence; a complete set of invariants is a set such that equality of these is also sufficient for equivalence. In the context of a group action, this may be stated as: invariants are functions of coinvariants (equivalence classes, orbits), and a complete set of invariants characterizes the coinvariants (is a set of defining equations for the coinvariants). Examples In the classification of two-dimensional closed manifolds, Euler characteristic (or genus) and orientability are a complete set of invariants. Jordan normal form of a matrix is a complete invariant for matrices up to conjugation, but eigenvalues (with multiplicities) are not. Realizability of invariants A complete set of invariants does not immediately yield a classification theorem: not all combinations of invariants may be realized. Symbolically, one must also determine the image of" https://en.wikipedia.org/wiki/Registered%20state%20change%20notification,"In Fibre Channel protocol, a registered state change notification (RSCN) is a Fibre Channel fabric's notification sent to all specified nodes in case of any major fabric changes. This allows nodes to immediately gain knowledge about the fabric and react accordingly. Overview Implementation of this function is obligatory for each Fibre Channel switch, but is optional for a node. This function belongs to a second level of the protocol, or FC2. Some events that trigger notifications are: Nodes joining or leaving the fabric (most common usage) Switches joining or leaving the fabric Changing the switch name The nodes wishing to be notified in such way need to register themselves first at the Fabric Controller, which is a standardized FC virtual address present at each switch. RSCN and zoning If a fabric has some zones configured for additional security, notifications do not cross zone boundaries if not needed. Simply, there is no need to notify a node about a change that it cannot see anyway (because it happened in a separate zone). Example For example, let's assume there is a fabric with just one node, namely a server's FC-compatible HBA. First it registers itself for notifications. Then a human administrator connects another node, like a disk array, to the fabric. This event is known at first only to a single switch, the one that detected one of its ports going online. The switch, however, has a list of registered nodes (currently containing only the HBA node) and notifies every one of them. As the HBA receives the notification, it chooses to query the nearest switch about current list of nodes. It detects a new disk array and starts to communicate with it on a SCSI level, asking for a list of SCSI LUNs. Then it notifies a server's operating system, that there is a new SCSI target containing some LUNs. The operating system auto-configures those as new block devices, ready for use. 
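A small numerical illustration of the claim in the complete-set-of-invariants excerpt above that eigenvalues with multiplicities are not a complete invariant for matrices up to conjugation: two matrices with identical eigenvalues but different Jordan normal forms.

import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])         # a nonzero nilpotent Jordan block
B = np.zeros((2, 2))             # the zero matrix

print(np.linalg.eigvals(A), np.linalg.eigvals(B))           # both have eigenvalues [0, 0]
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # ranks 1 and 0, so A and B are not
                                                            # conjugate despite equal eigenvalues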
See also Storage area network Fibre Channel Fibre Channel fabric Fibre Chann" https://en.wikipedia.org/wiki/List%20of%20plasma%20physics%20articles,"This is a list of plasma physics topics. A Ablation Abradable coating Abraham–Lorentz force Absorption band Accretion disk Active galactic nucleus Adiabatic invariant ADITYA (tokamak) Aeronomy Afterglow plasma Airglow Air plasma, Corona treatment, Atmospheric-pressure plasma treatment Ayaks, Novel ""Magneto-plasmo-chemical engine"" Alcator C-Mod Alfvén wave Ambipolar diffusion Aneutronic fusion Anisothermal plasma Anisotropy Antiproton Decelerator Appleton-Hartree equation Arcing horns Arc lamp Arc suppression ASDEX Upgrade, Axially Symmetric Divertor EXperiment Astron (fusion reactor) Astronomy Astrophysical plasma Astrophysical X-ray source Atmospheric dynamo Atmospheric escape Atmospheric pressure discharge Atmospheric-pressure plasma Atom Atomic emission spectroscopy Atomic physics Atomic-terrace low-angle shadowing Auger electron spectroscopy Aurora (astronomy) B Babcock Model Ball lightning Ball-pen probe Ballooning instability Baryon acoustic oscillations Beam-powered propulsion Beta (plasma physics) Birkeland current Blacklight Power Blazar Bohm diffusion Bohr–van Leeuwen theorem Boltzmann relation Bow shock Bremsstrahlung Bussard ramjet C Capacitively coupled plasma Carbon nanotube metal matrix composites Cassini–Huygens, Cassini Plasma Spectrometer Cathode ray Cathodic arc deposition Ceramic discharge metal-halide lamp Charge carrier Charged-device model Charged particle Chemical plasma Chemical vapor deposition Chemical vapor deposition of diamond Chirikov criterion Chirped pulse amplification Chromatography detector Chromo–Weibel instability Classical-map hypernetted-chain method Cnoidal wave Colored-particle-in-cell Coilgun Cold plasma, Ozone generator Collisionality Colored-particle-in-cell Columbia Non-neutral Torus Comet tail Compact toroid Compressibility Compton–Getting effect Contact lithography Coupling (physics) Convection cell Cooling flow Corona Corona di" https://en.wikipedia.org/wiki/Law%20of%20large%20numbers,"In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer to the expected value as more trials are performed. The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when a large number of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be ""balanced"" by the others (see the gambler's fallacy). The LLN only applies to the average. Therefore, while other formulas that look similar are not verified, such as the raw deviation from ""theoretical results"": not only does it not converge toward zero as n increases, but it tends to increase in absolute value as n increases. 
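A quick simulation (illustrative only) of the point just made in the law-of-large-numbers excerpt: for fair coin flips the proportion of heads converges to 1/2, while the raw deviation from the theoretical count n/2 typically grows in absolute value as n increases.

import numpy as np

rng = np.random.default_rng(1)
for n in (100, 10_000, 1_000_000):
    heads = rng.integers(0, 2, size=n).sum()
    print(n, heads / n, abs(heads - n / 2))   # the ratio approaches 0.5; the absolute
                                              # deviation grows on the order of sqrt(n)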
Examples For example, a single roll of a fair, six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of the average of the rolls is: According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled. It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, a" https://en.wikipedia.org/wiki/Isotricha,"Isotricha is a genus of protozoa (single-celled organisms) which are commensals of the rumen of ruminant animals. They are approximately long. Species include: Isotricha intestinalis Stein 1858 Isotricha prostoma Stein 1858" https://en.wikipedia.org/wiki/2.5D%20integrated%20circuit,"A 2.5D integrated circuit (2.5D IC) is an advanced packaging technique that combines multiple integrated circuit dies in a single package without stacking them into a three-dimensional integrated circuit (3D-IC) with through-silicon vias (TSVs). The term ""2.5D"" originated when 3D-ICs with TSVs were quite new and still very difficult. Chip designers realized that many of the advantages of 3D integration could be approximated by placing bare dies side by side on an interposer instead of stacking them vertically. If the pitch is very fine and the interconnect very short, the assembly can be packaged as a single component with better size, weight, and power characteristics than a comparable 2D circuit board assembly. This half-way 3D integration was facetiously named ""2.5D"" and the name stuck. Since then, 2.5D has proven to be far more than just ""half-way to 3D."" Some benefits: An interposer can support heterogeneous integration – that is, dies of different pitch, size, material, and process node. Placing dies side by side instead of stacking them reduces heat buildup. Upgrading or modifying a 2.5D assembly is as easy as swapping in a new component and revamping the interposer to suit; much faster and simpler than reworking an entire 3D-IC or System-on-Chip (SoC). Some sophisticated 2.5D assemblies even incorporate TSVs and 3D components. Several foundries now support 2.5D packaging. The success of 2.5D assembly has given rise to ""chiplets"" – small, functional circuit blocks designed to be combined in mix-and-match fashion on interposers. Several high-end products already take advantage of these LEGO-style chiplets; some experts predict the emergence of an industry-wide chiplet ecosystem." https://en.wikipedia.org/wiki/Symmetry%20%28physics%29,"In physics, a symmetry of a physical system is a physical or mathematical feature of the system (observed or intrinsic) that is preserved or remains unchanged under some transformation. A family of particular transformations may be continuous (such as rotation of a circle) or discrete (e.g., reflection of a bilaterally symmetric figure, or rotation of a regular polygon). Continuous and discrete transformations give rise to corresponding types of symmetries. Continuous symmetries can be described by Lie groups while discrete symmetries are described by finite groups (see Symmetry group). These two concepts, Lie and finite groups, are the foundation for the fundamental theories of modern physics. 
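A companion simulation for the dice example above: the expected value of a single fair die roll is (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5, and the sample mean approaches it as more dice are rolled.

import numpy as np

rng = np.random.default_rng(0)
for n in (10, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)        # n fair six-sided dice
    print(n, rolls.mean())                    # the average drifts toward 3.5 as n grows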
Symmetries are frequently amenable to mathematical formulations such as group representations and can, in addition, be exploited to simplify many problems. Arguably the most important example of a symmetry in physics is that the speed of light has the same value in all frames of reference, which is described in special relativity by a group of transformations of the spacetime known as the Poincaré group. Another important example is the invariance of the form of physical laws under arbitrary differentiable coordinate transformations, which is an important idea in general relativity. As a kind of invariance Invariance is specified mathematically by transformations that leave some property (e.g. quantity) unchanged. This idea can apply to basic real-world observations. For example, temperature may be homogeneous throughout a room. Since the temperature does not depend on the position of an observer within the room, we say that the temperature is invariant under a shift in an observer's position within the room. Similarly, a uniform sphere rotated about its center will appear exactly as it did before the rotation. The sphere is said to exhibit spherical symmetry. A rotation about any axis of the sphere will preserve how the sphere ""looks"". Invariance in force The above " https://en.wikipedia.org/wiki/List%20of%20rules%20of%20inference,"This is a list of rules of inference, logical laws that relate to mathematical formulae. Introduction Rules of inference are syntactical transform rules which one can use to infer a conclusion from a premise to create an argument. A set of rules can be used to infer any valid conclusion if it is complete, while never inferring an invalid conclusion, if it is sound. A sound and complete set of rules need not include every rule in the following list, as many of the rules are redundant, and can be proven with the other rules. Discharge rules permit inference from a subderivation based on a temporary assumption. Below, the notation indicates such a subderivation from the temporary assumption to . Rules for propositional calculus Rules for negations Reductio ad absurdum (or Negation Introduction) Reductio ad absurdum (related to the law of excluded middle) Ex contradictione quodlibet Rules for conditionals Deduction theorem (or Conditional Introduction) Modus ponens (or Conditional Elimination) Modus tollens Rules for conjunctions Adjunction (or Conjunction Introduction) Simplification (or Conjunction Elimination) Rules for disjunctions Addition (or Disjunction Introduction) Case analysis (or Proof by Cases or Argument by Cases or Disjunction elimination) Disjunctive syllogism Constructive dilemma Rules for biconditionals Biconditional introduction Biconditional elimination Rules of classical predicate calculus In the following rules, is exactly like except for having the term wherever has the free variable . Universal Generalization (or Universal Introduction) Restriction 1: is a variable which does not occur in . Restriction 2: is not mentioned in any hypothesis or undischarged assumptions. Universal Instantiation (or Universal Elimination) Restriction: No free occurrence of in falls within the scope of a quantifier quantifying a variable occurring in . " https://en.wikipedia.org/wiki/Territory%20%28animal%29,"In ethology, territory is the sociographical area that an animal consistently defends against conspecific competition (or, occasionally, against animals of other species) using agonistic behaviors or (less commonly) real physical aggression. 
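A brute-force truth-table check (my own illustration, not from the source) that modus ponens, listed among the rules for conditionals above, is truth-preserving: in every valuation where P and P -> Q are both true, Q is true as well.

from itertools import product

def implies(p, q):
    return (not p) or q          # material conditional P -> Q

valid = all(q
            for p, q in product([True, False], repeat=2)
            if p and implies(p, q))
print(valid)                     # True: no valuation makes both premises true and Q false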
Animals that actively defend territories in this way are referred to as being territorial or displaying territorialism. Territoriality is only shown by a minority of species. More commonly, an individual or a group of animals occupies an area that it habitually uses but does not necessarily defend; this is called its home range. The home ranges of different groups of animals often overlap, and in these overlap areas the groups tend to avoid each other rather than seeking to confront and expel each other. Within the home range there may be a core area that no other individual group uses, but, again, this is as a result of avoidance. Function The ultimate function of animals inhabiting and defending a territory is to increase the individual fitness or inclusive fitness of the animals expressing the behaviour. Fitness in this biological sense relates to the ability of an animal to survive and raise young. The proximate functions of territory defense vary. For some animals, the reason for such protective behaviour is to acquire and protect food sources, nesting sites, mating areas, or to attract a mate. Types and size Among birds, territories have been classified as six types. Type A: An 'all-purpose territory' in which all activities occur, e.g. courtship, mating, nesting and foraging Type B: A mating and nesting territory, not including most of the area used for foraging. Type C: A nesting territory which includes the nest plus a small area around it. Common in colonial waterbirds. Type D: A pairing and mating territory. The type of territory defended by males in lekking species. Type E: Roosting territory. Type F: Winter territory which typically includes foraging areas and roost sites. May be equivalent (in terms of locat" https://en.wikipedia.org/wiki/Simple-As-Possible%20computer,"The Simple-As-Possible (SAP) computer is a simplified computer architecture designed for educational purposes and described in the book Digital Computer Electronics by Albert Paul Malvino and Jerald A. Brown. The SAP architecture serves as an example in Digital Computer Electronics for building and analyzing complex logical systems with digital electronics. Digital Computer Electronics successively develops three versions of this computer, designated as SAP-1, SAP-2, and SAP-3. Each of the last two build upon the immediate previous version by adding additional computational, flow of control, and input/output capabilities. SAP-2 and SAP-3 are fully Turing-complete. The instruction set architecture (ISA) that the computer final version (SAP-3) is designed to implement is patterned after and upward compatible with the ISA of the Intel 8080/8085 microprocessor family. Therefore, the instructions implemented in the three SAP computer variations are, in each case, a subset of the 8080/8085 instructions. Variant Ben Eater's Design YouTuber and former Khan Academy employee Ben Eater created a tutorial building an 8-bit Turing-complete SAP computer on breadboards from logical chips (7400-series) capable of running simple programs such as computing the Fibonacci sequence. Eater's design consists of the following modules: An adjustable-speed (upper limitation of a few hundred Hertz) clock module that can be put into a ""manual mode"" to step through the clock cycles. 
Three register modules (Register A, Register B, and the Instruction Register) that ""store small amounts of data that the CPU is processing."" An arithmetic logic unit (ALU) capable of adding and subtracting 8-bit 2's complement integers from registers A and B. This module also has a flags register with two possible flags (Z and C). Z stands for ""zero,"" and is activated if the ALU outputs zero. C stands for ""carry,"" and is activated if the ALU produces a carry-out bit. A RAM module capable of storing 16 b" https://en.wikipedia.org/wiki/Haplotype%20block,"In genetics, a haplotype block is a region of an organism's genome in which there is little evidence of a history of genetic recombination, and which contain only a small number of distinct haplotypes. According to the haplotype-block model, such blocks should show high levels of linkage disequilibrium and be separated from one another by numerous recombination events. The boundaries of haplotype blocks cannot be directly observed; they must instead be inferred indirectly through the use of algorithms. However, some evidence suggests that different algorithms for identifying haplotype blocks give very different results when used on the same data, though another study suggests that their results are generally consistent. The National Institutes of Health funded the HapMap project to catalog haplotype blocks throughout the human genome. Definition There are two main ways that the term ""haplotype block"" is defined: one based on whether a given genomic sequence displays higher linkage disequilibrium than a predetermined threshold, and one based on whether the sequence consists of a minimum number of single nucleotide polymorphisms (SNPs) that explain a majority of the common haplotypes in the sequence (or a lower-than-usual number of unique haplotypes). In 2001, Patil et al. proposed the following definition of the term: ""Suppose we have a number of haplotypes consisting of a set of consecutive SNPs. A segment of consecutive SNPs is a block if at least α percent of haplotypes are represented more than once""." https://en.wikipedia.org/wiki/Seshadri%20constant,"In algebraic geometry, a Seshadri constant is an invariant of an ample line bundle L at a point P on an algebraic variety. It was introduced by Demailly to measure a certain rate of growth, of the tensor powers of L, in terms of the jets of the sections of the Lk. The object was the study of the Fujita conjecture. The name is in honour of the Indian mathematician C. S. Seshadri. It is known that Nagata's conjecture on algebraic curves is equivalent to the assertion that for more than nine general points, the Seshadri constants of the projective plane are maximal. There is a general conjecture for algebraic surfaces, the Nagata–Biran conjecture. Definition Let be a smooth projective variety, an ample line bundle on it, a point of , = { all irreducible curves passing through }. . Here, denotes the intersection number of and , measures how many times passing through . Definition: One says that is the Seshadri constant of at the point , a real number. When is an abelian variety, it can be shown that is independent of the point chosen, and it is written simply ." https://en.wikipedia.org/wiki/Nullor,"A nullor is a theoretical two-port network consisting of a nullator at its input and a norator at its output. Nullors represent an ideal amplifier, having infinite current, voltage, transconductance and transimpedance gain. 
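A software sketch of the ALU behaviour described in the Simple-As-Possible excerpt above: add or subtract 8-bit two's-complement values and set the Z (zero) and C (carry) flags. This is an illustration of the idea, not Ben Eater's or Malvino's actual design.

def alu(a, b, subtract=False):
    operand = ((~b + 1) & 0xFF) if subtract else b   # two's-complement negation for subtraction
    total = a + operand
    result = total & 0xFF                            # keep only 8 bits
    z_flag = int(result == 0)                        # Z: set when the result is zero
    c_flag = (total >> 8) & 1                        # C: set when there is a carry out of bit 7
    return result, z_flag, c_flag

print(alu(0b11111111, 0b00000001))        # (0, 1, 1): 255 + 1 wraps to 0 with a carry
print(alu(5, 5, subtract=True))           # (0, 1, 1): equal operands subtract to zero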
Its transmission parameters are all zero, that is, its input–output behavior is summarized with the matrix equation In negative-feedback circuits, the circuit surrounding the nullor determines the nullor output in such a way as to force the nullor input to zero. Inserting a nullor in a circuit schematic imposes mathematical constraints on how that circuit must behave, forcing the circuit itself to adopt whatever arrangements are needed to meet the conditions. For example, an ideal operational amplifier can be modeled using a nullor, and the textbook analysis of a feedback circuit using an ideal op-amp uses the mathematical conditions imposed by the nullor to analyze the circuit surrounding the op-amp. Example: voltage-controlled current sink Figure 1 shows a voltage-controlled current sink. The sink is intended to draw the same current iOUT regardless of the applied voltage VCC at the output. The value of current drawn is to be set by the input voltage vIN. Here the sink is to be analyzed by idealizing the op amp as a nullor. Using properties of the input nullator portion of the nullor, the input voltage across the op amp input terminals is zero. Consequently, the voltage across reference resistor RR is the applied voltage vIN, making the current in RR simply vIN/RR. Again using the nullator properties, the input current to the nullor is zero. Consequently, Kirchhoff's current law at the emitter provides an emitter current of vIN/RR. Using properties of the norator output portion of the nullor, the nullor provides whatever current is demanded of it, regardless of the voltage at its output. In this case, it provides the transistor base current iB. Thus, Kirchhoff's current law applied to the transistor as a whole provides the output current " https://en.wikipedia.org/wiki/Heterotroph,"A heterotroph (; ) is an organism that cannot produce its own food, instead taking nutrition from other sources of organic carbon, mainly plant or animal matter. In the food chain, heterotrophs are primary, secondary and tertiary consumers, but not producers. Living organisms that are heterotrophic include all animals and fungi, some bacteria and protists, and many parasitic plants. The term heterotroph arose in microbiology in 1946 as part of a classification of microorganisms based on their type of nutrition. The term is now used in many fields, such as ecology in describing the food chain. Heterotrophs may be subdivided according to their energy source. If the heterotroph uses chemical energy, it is a chemoheterotroph (e.g., humans and mushrooms). If it uses light for energy, then it is a photoheterotroph (e.g., green non-sulfur bacteria). Heterotrophs represent one of the two mechanisms of nutrition (trophic levels), the other being autotrophs (auto = self, troph = nutrition). Autotrophs use energy from sunlight (photoautotrophs) or oxidation of inorganic compounds (lithoautotrophs) to convert inorganic carbon dioxide to organic carbon compounds and energy to sustain their life. Comparing the two in basic terms, heterotrophs (such as animals) eat either autotrophs (such as plants) or other heterotrophs, or both. Detritivores are heterotrophs which obtain nutrients by consuming detritus (decomposing plant and animal parts as well as feces). Saprotrophs (also called lysotrophs) are chemoheterotrophs that use extracellular digestion in processing decayed organic matter. 
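The two relations the nullor excerpt above gestures at can be written out explicitly. The final expression for the output current is a hedged completion of the truncated sentence, following the Kirchhoff's-current-law step the text describes rather than quoting the article.

\[
\begin{pmatrix} v_1 \\ i_1 \end{pmatrix}
= \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} v_2 \\ -i_2 \end{pmatrix}
\quad\Longrightarrow\quad v_1 = 0,\ i_1 = 0,
\]
\[
i_E = \frac{v_{\mathrm{IN}}}{R_R}, \qquad
i_{\mathrm{OUT}} = i_C = i_E - i_B = \frac{v_{\mathrm{IN}}}{R_R} - i_B .
\]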
The process is most often facilitated through the active transport of such materials through endocytosis within the internal mycelium and its constituent hyphae. Types Heterotrophs can be organotrophs or lithotrophs. Organotrophs exploit reduced carbon compounds as electron sources, like carbohydrates, fats, and proteins from plants and animals. On the other hand, lithoheterotrophs use inorgan" https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation,"Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread. Bra-ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word ""Bracket"". Quantum mechanics In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, and , and a vertical bar , to construct ""bras"" and ""kets"". A ket is of the form . Mathematically it denotes a vector, , in an abstract (complex) vector space , and physically it represents a state of some quantum system. A bra is of the form . Mathematically it denotes a linear form , i.e. a linear map that maps each vector in to a number in the complex plane . Letting the linear functional act on a vector is written as . Assume that on there exists an inner product with antilinear first argument, which makes an inner product space. Then with this inner product each vector can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: . The correspondence between these notations is then . The linear form is a covector to , and the set of all covectors form a subspace of the dual vector space , to the initial vector space . The purpose of this linear form can now be understood in terms of making projections on the state , to find how linearly dependent two states are, etc. For the vector space , kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplic" https://en.wikipedia.org/wiki/List%20of%20lemmas,"This following is a list of lemmas (or, ""lemmata"", i.e. minor theorems, or sometimes intermediate technical results factored out of proofs). See also list of axioms, list of theorems and list of conjectures. 
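For the finite-dimensional case mentioned in the bra–ket excerpt above (kets as column vectors, bras as their conjugate transposes), a small NumPy illustration; the particular vectors are arbitrary examples of mine.

import numpy as np

ket_psi = np.array([[1 + 1j], [2]])     # |psi>, a column vector in C^2
ket_phi = np.array([[3], [1j]])         # |phi>

bra_phi = ket_phi.conj().T              # <phi| is the conjugate transpose of |phi>
inner = (bra_phi @ ket_psi).item()      # <phi|psi>, a complex number
outer = ket_psi @ bra_phi               # |psi><phi|, a 2x2 linear operator

# the inner product is antilinear in the bra (first) argument
assert np.isclose(inner, np.vdot(ket_phi, ket_psi))
print(inner, outer.shape)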
Algebra Abhyankar's lemma Aubin–Lions lemma Bergman's diamond lemma Fitting lemma Injective test lemma Hua's lemma (exponential sums) Krull's separation lemma Schanuel's lemma (projective modules) Schwartz–Zippel lemma Shapiro's lemma Stewart–Walker lemma (tensors) Whitehead's lemma (Lie algebras) Zariski's lemma Algebraic geometry Abhyankar's lemma Fundamental lemma (Langlands program) Category theory Five lemma Horseshoe lemma Nine lemma Short five lemma Snake lemma Splitting lemma Linear algebra Matrix determinant lemma Matrix inversion lemma Group theory Burnside's lemma also known as the Cauchy–Frobenius lemma Frattini's lemma (finite groups) Goursat's lemma Mautner's lemma (representation theory) Ping-pong lemma (geometric group theory) Schreier's subgroup lemma Schur's lemma (representation theory) Zassenhaus lemma Polynomials Gauss's lemma (polynomials) Schwartz–Zippel lemma Ring theory and commutative algebra Artin–Rees lemma Hensel's lemma (commutative rings) Nakayama lemma Noether's normalization lemma Prime avoidance lemma Universal algebra Jónsson's lemma Analysis Fekete's lemma Fundamental lemma of calculus of variations Hopf lemma Sard's lemma (singularity theory) Stechkin's lemma (functional and numerical analysis) Vitali covering lemma (real analysis) Watson's lemma Complex analysis Estimation lemma (contour integrals) Hartogs's lemma (several complex variables) Jordan's lemma Lemma on the Logarithmic derivative Schwarz lemma Fourier analysis Riemann–Lebesgue lemma Differential equations Borel's lemma (partial differential equations) Grönwall's lemma Lax–Milgram lemma Pugh's closing lemma Weyl's lemma (Laplace equation) (partial differential equations) Differential forms Poincaré lemma of closed and exa" https://en.wikipedia.org/wiki/List%20of%20prime%20knots,"In knot theory, prime knots are those knots that are indecomposable under the operation of knot sum. The prime knots with ten or fewer crossings are listed here for quick comparison of their properties and varied naming schemes. Table of prime knots Six or fewer crossings Seven crossings Eight crossings Nine crossings Ten crossings Higher Conway knot 11n34 Kinoshita–Terasaka knot 11n42 Table of prime links Seven or fewer crossings Higher See also List of knots List of mathematical knots and links Knot tabulation (−2,3,7) pretzel knot Notes External links ""KnotInfo"", Indiana.edu. Knot theory Mathematics-related lists" https://en.wikipedia.org/wiki/Of%20the%20form,"In mathematics, the phrase ""of the form"" indicates that a mathematical object, or (more frequently) a collection of objects, follows a certain pattern of expression. It is frequently used to reduce the formality of mathematical proofs. Example of use Here is a proof which should be appreciable with limited mathematical background: Statement: The product of any two even natural numbers is also even. Proof: Any even natural number is of the form 2n, where n is a natural number. Therefore, let us assume that we have two even numbers which we will denote by 2k and 2l. Their product is (2k)(2l) = 4(kl) = 2(2kl). Since 2kl is also a natural number, the product is even. Note: In this case, both exhaustivity and exclusivity were needed. That is, it was not only necessary that every even number is of the form 2n (exhaustivity), but also that every expression of the form 2n is an even number (exclusivity). This will not be the case in every proof, but normally, at least exhaustivity is implied by the phrase of the form." 
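The "of the form" argument above can be exercised mechanically; the snippet below merely checks the identity (2k)(2l) = 2(2kl) over a small range and is not meant as a proof.

# every product of two even naturals equals 2*(2kl) and is therefore even
assert all((2 * k) * (2 * l) == 2 * (2 * k * l) and ((2 * k) * (2 * l)) % 2 == 0
           for k in range(1, 100) for l in range(1, 100))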
https://en.wikipedia.org/wiki/Iddq%20testing,"Iddq testing is a method for testing CMOS integrated circuits for the presence of manufacturing faults. It relies on measuring the supply current (Idd) in the quiescent state (when the circuit is not switching and inputs are held at static values). The current consumed in this state is commonly called Iddq for Idd (quiescent) and hence the name. Iddq testing uses the principle that in a correctly operating quiescent CMOS digital circuit, there is no static current path between the power supply and ground, except for a small amount of leakage. Many common semiconductor manufacturing faults will cause the current to increase by orders of magnitude, which can be easily detected. This has the advantage of checking the chip for many possible faults with one measurement. Another advantage is that it may catch faults that are not found by conventional stuck-at fault test vectors. Iddq testing is somewhat more complex than just measuring the supply current. If a line is shorted to Vdd, for example, it will still draw no extra current if the gate driving the signal is attempting to set it to '1'. However, a different input that attempts to set the signal to 0 will show a large increase in quiescent current, signalling a bad part. Typical Iddq tests may use 20 or so inputs. Note that Iddq test inputs require only controllability, and not observability. This is because the observability is through the shared power supply connection. Advantages and disadvantages Iddq testing has many advantages: It is a simple and direct test that can identify physical defects. The area and design time overhead are very low. Test generation is fast. Test application time is fast since the vector sets are small. It catches some defects that other tests, particularly stuck-at logic tests, do not. Drawback: Compared to scan chain testing, Iddq testing is time consuming, and thus more expensive, since it relies on current measurements that take much more time than reading digital pins i" https://en.wikipedia.org/wiki/Landau%E2%80%93Ramanujan%20constant,"In mathematics and the field of number theory, the Landau–Ramanujan constant is the positive real number b that occurs in a theorem proved by Edmund Landau in 1908, stating that for large x, the number of positive integers below x that are the sum of two square numbers behaves asymptotically as b·x/√(ln x). This constant b was rediscovered in 1913 by Srinivasa Ramanujan, in the first letter he wrote to G.H. Hardy. Sums of two squares By the sum of two squares theorem, the numbers that can be expressed as a sum of two squares of integers are the ones for which each prime number congruent to 3 mod 4 appears with an even exponent in their prime factorization. For instance, 45 = 9 + 36 is a sum of two squares; in its prime factorization, 3^2 × 5, the prime 3 appears with an even exponent, and the prime 5 is congruent to 1 mod 4, so its exponent can be odd. Landau's theorem states that if N(x) is the number of positive integers less than x that are the sum of two squares, then N(x) ~ b·x/√(ln x) as x → ∞, where b is the Landau–Ramanujan constant. The Landau–Ramanujan constant can also be written as an infinite product: b = (1/√2) · ∏ (1 − p^−2)^−1/2, the product running over the primes p congruent to 3 mod 4. History This constant was stated by Landau in the limit form above; Ramanujan instead approximated N(x) as an integral, with the same constant of proportionality, and with a slowly growing error term." https://en.wikipedia.org/wiki/Orthomorphism,"In abstract algebra, an orthomorphism is a certain kind of mapping from a group into itself. 
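To make the Landau–Ramanujan asymptotic above concrete, here is a brute-force count of sums of two squares; the cutoff and the remark about slow convergence are my own, with b ≈ 0.76422.

import math

def is_sum_of_two_squares(n: int) -> bool:
    for a in range(math.isqrt(n) + 1):       # try every a with a^2 <= n
        b = math.isqrt(n - a * a)
        if a * a + b * b == n:
            return True
    return False

x = 100_000
N = sum(is_sum_of_two_squares(n) for n in range(1, x + 1))
# N(x) * sqrt(ln x) / x drifts toward b ≈ 0.76422 as x grows, though convergence is slow
print(N, N * math.sqrt(math.log(x)) / x)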
Let G be a group, and let θ be a permutation of G. Then θ is an orthomorphism of G if the mapping f defined by f(x) = x−1 θ(x) is also a permutation of G. A permutation φ of G is a complete mapping if the mapping g defined by g(x) = xφ(x) is also a permutation of G. Orthomorphisms and complete mappings are closely related." https://en.wikipedia.org/wiki/Intraguild%20predation,"Intraguild predation, or IGP, is the killing and sometimes eating of a potential competitor of a different species. This interaction represents a combination of predation and competition, because both species rely on the same prey resources and also benefit from preying upon one another. Intraguild predation is common in nature and can be asymmetrical, in which one species feeds upon the other, or symmetrical, in which both species prey upon each other. Because the dominant intraguild predator gains the dual benefits of feeding and eliminating a potential competitor, IGP interactions can have considerable effects on the structure of ecological communities. Types Intraguild predation can be classified as asymmetrical or symmetrical. In asymmetrical interactions one species consistently preys upon the other, while in symmetrical interactions both species prey equally upon each other. Intraguild predation can also be age structured, in which case the vulnerability of a species to predation is dependent on age and size, so only juveniles or smaller individuals of one of the predators are fed upon by the other. A wide variety of predatory relationships are possible depending on the symmetry of the interaction and the importance of age structure. IGP interactions can range from predators incidentally eating parasites attached to their prey to direct predation between two apex predators. Ecology of intraguild predation Intraguild predation is common in nature and widespread across communities and ecosystems. Intraguild predators must share at least one prey species and usually occupy the same trophic guild, and the degree of IGP depends on factors such as the size, growth, and population density of the predators, as well as the population density and behavior of their shared prey. When creating theoretical models for intraguild predation, the competing species are classified as the ""top predator"" or the ""intermediate predator,"" (the species more likely to be pre" https://en.wikipedia.org/wiki/Electronic%20design%20automation,"Electronic design automation (EDA), also referred to as electronic computer-aided design (ECAD), is a category of software tools for designing electronic systems such as integrated circuits and printed circuit boards. The tools work together in a design flow that chip designers use to design and analyze entire semiconductor chips. Since a modern semiconductor chip can have billions of components, EDA tools are essential for their design; this article in particular describes EDA specifically with respect to integrated circuits (ICs). History Early days The earliest electronic design automation is attributed to IBM with the documentation of its 700 series computers in the 1950s. Prior to the development of EDA, integrated circuits were designed by hand and manually laid out. Some advanced shops used geometric software to generate tapes for a Gerber photoplotter, responsible for generating a monochromatic exposure image, but even those copied digital recordings of mechanically drawn components. 
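The orthomorphism and complete-mapping definitions above are easy to check by brute force on a small cyclic group; written additively for Z_n, x^(-1)θ(x) becomes θ(x) − x (mod n). The example mapping is mine.

from itertools import permutations

def is_orthomorphism(theta, n):
    return len({(theta[x] - x) % n for x in range(n)}) == n   # x -> theta(x) - x is a permutation

def is_complete_mapping(theta, n):
    return len({(theta[x] + x) % n for x in range(n)}) == n   # x -> x + theta(x) is a permutation

n = 5
theta = tuple((2 * x) % n for x in range(n))                  # x -> 2x, a permutation of Z_5
print(is_orthomorphism(theta, n), is_complete_mapping(theta, n))    # True True

# exhaustive check that the cyclic group Z_2 admits no orthomorphism at all
print(any(is_orthomorphism(p, 2) for p in permutations(range(2))))  # False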
The process was fundamentally graphic, with the translation from electronics to graphics done manually; the best-known company from this era was Calma, whose GDSII format is still in use today. By the mid-1970s, developers started to automate circuit design in addition to drafting and the first placement and routing tools were developed; as this occurred, the proceedings of the Design Automation Conference catalogued the large majority of the developments of the time. The next era began following the publication of ""Introduction to VLSI Systems"" by Carver Mead and Lynn Conway in 1980; considered the standard textbook for chip design. The result was an increase in the complexity of the chips that could be designed, with improved access to design verification tools that used logic simulation. The chips were easier to lay out and more likely to function correctly, since their designs could be simulated more thoroughly prior to construction. Although the languages and tools h" https://en.wikipedia.org/wiki/Variance%20Adaptive%20Quantization,"Variance Adaptive Quantization (VAQ) is a video encoding algorithm that was first introduced in the open source video encoder x264. According to Xvid Builds FAQ: ""It's an algorithm that tries to optimally choose a quantizer for each macroblock using advanced math algorithms."" It was later ported to programs which encode video content in other video standards, like MPEG-4 ASP or MPEG-2. In the case of Xvid, the algorithm is intended to make up for the earlier limitations in its Adaptive Quantization mode. The first Xvid library containing this improvement was released in February 2008." https://en.wikipedia.org/wiki/Terrainability,"The terrainability of a machine or robot is defined as its ability to negotiate terrain irregularities. Terrainability is a term coined in the research community and related to locomotion in the field of mobile robotics. Its various definitions generically describe the ability of the robot to handle various terrains in terms of their ground support, obstacle sizes and spacing, passive/dynamic stability, etc." https://en.wikipedia.org/wiki/Woody%20plant,"A woody plant is a plant that produces wood as its structural tissue and thus has a hard stem. In cold climates, woody plants further survive winter or dry season above ground, as opposed to herbaceous plants that die back to the ground until spring. Characteristics Woody plants are usually trees, shrubs, or lianas. These are usually perennial plants whose stems and larger roots are reinforced with wood produced from secondary xylem. The main stem, larger branches, and roots of these plants are usually covered by a layer of bark. Wood is a structural tissue that allows woody plants to grow from above ground stems year after year, thus making some woody plants the largest and tallest terrestrial plants. Woody plants, like herbaceous perennials, typically have a dormant period of the year when growth does not take place, in colder climates due to freezing temperatures and lack of daylight during the winter months, in subtropical and tropical climates due to the dry season when precipitation becomes minimal. The dormant period will be accompanied by shedding of leaves if the plant is deciduous. Evergreen plants do not lose all their leaves at once (they instead shed them gradually over the growing season), however growth virtually halts during the dormant season. 
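As a loose illustration of the per-macroblock idea in the Variance Adaptive Quantization excerpt above, the sketch below lowers the quantizer for flat (low-variance) blocks and raises it for busy ones. It is a toy heuristic under my own assumptions, not the formula used by x264 or Xvid.

import numpy as np

def adaptive_qp(frame, base_qp=26, block=16, strength=1.0):
    # per-block variance, compared in the log domain against the frame average
    h, w = frame.shape
    variances = np.array([[frame[y:y + block, x:x + block].var()
                           for x in range(0, w, block)]
                          for y in range(0, h, block)])
    log_var = np.log2(variances + 1.0)
    qp_map = base_qp + strength * (log_var - log_var.mean())
    return np.clip(np.rint(qp_map), 0, 51).astype(int)   # H.264 QP range

frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(adaptive_qp(frame))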
Many woody plants native to subtropical regions and nearly all native to the tropics are evergreen due to year-round warm temperatures. During the fall months, each stem in a deciduous plant cuts off the flow of nutrients and water to the leaves. This causes them to change colors as the chlorophyll in the leaves breaks down. Special cells are formed that sever the connection between the leaf and stem, so that it will easily detach. Evergreen plants do not shed their leaves and merely go into a state of low activity during the dormant season. During spring, the roots begin sending nutrients back up to the canopy. When the growing season resumes, either with warm weather or the wet season, the plant will " https://en.wikipedia.org/wiki/Parametric%20family,"In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters. Common examples are parametrized (families of) functions, probability distributions, curves, shapes, etc. In probability and its applications For example, the probability density function of a random variable may depend on a parameter . In that case, the function may be denoted to indicate the dependence on the parameter . is not a formal argument of the function as it is considered to be fixed. However, each different value of the parameter gives a different probability density function. Then the parametric family of densities is the set of functions , where denotes the parameter space, the set of all possible values that the parameter can take. As an example, the normal distribution is a family of similarly-shaped distributions parametrized by their mean and their variance. In decision theory, two-moment decision models can be applied when the decision-maker is faced with random variables drawn from a location-scale family of probability distributions. In algebra and its applications In economics, the Cobb–Douglas production function is a family of production functions parametrized by the elasticities of output with respect to the various factors of production. In algebra, the quadratic equation, for example, is actually a family of equations parametrized by the coefficients of the variable and of its square and by the constant term. See also Indexed family" https://en.wikipedia.org/wiki/Intrinsic%20motivation%20%28artificial%20intelligence%29,"Intrinsic motivation in the study of artificial intelligence and robotics is a mechanism for enabling artificial agents (including robots) to exhibit inherently rewarding behaviours such as exploration and curiosity, grouped under the same term in the study of psychology. Psychologists consider intrinsic motivation in humans to be the drive to perform an activity for inherent satisfaction – just for the fun or challenge of it. Definition An intelligent agent is intrinsically motivated to act if the information content alone, or the experience resulting from the action, is the motivating factor. Information content in this context is measured in the information-theoretic sense of quantifying uncertainty. A typical intrinsic motivation is to search for unusual, surprising situations (exploration), in contrast to a typical extrinsic motivation such as the search for food (homeostasis). Extrinsic motivations are typically described in artificial intelligence as task-dependent or goal-directed. 
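One common way to turn the "information content" idea in the intrinsic-motivation excerpt above into a reward is to pay the agent the surprisal −log p(observation) under its own empirical model. The sketch below is illustrative only and does not correspond to a specific published agent.

import math
from collections import Counter

class SurpriseBonus:
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def intrinsic_reward(self, observation) -> float:
        # surprisal under a crudely smoothed empirical distribution
        p = (self.counts[observation] + 1) / (self.total + 2)
        self.counts[observation] += 1
        self.total += 1
        return -math.log(p)

bonus = SurpriseBonus()
print(bonus.intrinsic_reward("room_A"))   # ~0.69: novel observation, larger bonus
print(bonus.intrinsic_reward("room_A"))   # ~0.41: familiar observation, smaller bonus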
Origins in psychology The study of intrinsic motivation in psychology and neuroscience began in the 1950s with some psychologists explaining exploration through drives to manipulate and explore, however, this homeostatic view was criticised by White. An alternative explanation from Berlyne in 1960 was the pursuit of an optimal balance between novelty and familiarity. Festinger described the difference between internal and external view of the world as dissonance that organisms are motivated to reduce. A similar view was expressed in the '70s by Kagan as the desire to reduce the incompatibility between cognitive structure and experience. In contrast to the idea of optimal incongruity, Deci and Ryan identified in the mid 80's an intrinsic motivation based on competence and self-determination. Computational models An influential early computational approach to implement artificial curiosity in the early 1990s by Schmidhuber, has since been developed into a ""Formal theory of cr" https://en.wikipedia.org/wiki/Cobalt%20in%20biology,"Cobalt is essential to the metabolism of all animals. It is a key constituent of cobalamin, also known as vitamin B, the primary biological reservoir of cobalt as an ultratrace element. Bacteria in the stomachs of ruminant animals convert cobalt salts into vitamin B, a compound which can only be produced by bacteria or archaea. A minimal presence of cobalt in soils therefore markedly improves the health of grazing animals, and an uptake of 0.20 mg/kg a day is recommended because they have no other source of vitamin B. Proteins based on cobalamin use corrin to hold the cobalt. Coenzyme B12 features a reactive C-Co bond that participates in the reactions. In humans, B12 has two types of alkyl ligand: methyl and adenosyl. MeB12 promotes methyl (−CH3) group transfers. The adenosyl version of B12 catalyzes rearrangements in which a hydrogen atom is directly transferred between two adjacent atoms with concomitant exchange of the second substituent, X, which may be a carbon atom with substituents, an oxygen atom of an alcohol, or an amine. Methylmalonyl coenzyme A mutase (MUT) converts MMl-CoA to Su-CoA, an important step in the extraction of energy from proteins and fats. Although far less common than other metalloproteins (e.g. those of zinc and iron), other cobaltoproteins are known besides B12. These proteins include methionine aminopeptidase 2, an enzyme that occurs in humans and other mammals that does not use the corrin ring of B12, but binds cobalt directly. Another non-corrin cobalt enzyme is nitrile hydratase, an enzyme in bacteria that metabolizes nitriles. Cobalt deficiency In humans, consumption of cobalt-containing vitamin B12 meets all needs for cobalt. For cattle and sheep, which meet vitamin B12 needs via synthesis by resident bacteria in the rumen, there is a function for inorganic cobalt. In the early 20th century, during the development of farming on the North Island Volcanic Plateau of New Zealand, cattle suffered from what was termed ""bush sickne" https://en.wikipedia.org/wiki/Interplanetary%20contamination,"Interplanetary contamination refers to biological contamination of a planetary body by a space probe or spacecraft, either deliberate or unintentional. There are two types of interplanetary contamination: Forward contamination is the transfer of life and other forms of contamination from Earth to another celestial body. Back contamination is the introduction of extraterrestrial organisms and other forms of contamination into Earth's biosphere. 
It also covers infection of humans and human habitats in space and on other celestial bodies by extraterrestrial organisms, if such organisms exist. The main focus is on microbial life and on potentially invasive species. Non-biological forms of contamination have also been considered, including contamination of sensitive deposits (such as lunar polar ice deposits) of scientific interest. In the case of back contamination, multicellular life is thought unlikely but has not been ruled out. In the case of forward contamination, contamination by multicellular life (e.g. lichens) is unlikely to occur for robotic missions, but it becomes a consideration in crewed missions to Mars. Current space missions are governed by the Outer Space Treaty and the COSPAR guidelines for planetary protection. Forward contamination is prevented primarily by sterilizing the spacecraft. In the case of sample-return missions, the aim of the mission is to return extraterrestrial samples to Earth, and sterilization of the samples would make them of much less interest. So, back contamination would be prevented mainly by containment, and breaking the chain of contact between the planet of origin and Earth. It would also require quarantine procedures for the materials and for anyone who comes into contact with them. Overview Most of the Solar System appears hostile to life as we know it. No extraterrestrial life has ever been discovered. But if extraterrestrial life exists, it may be vulnerable to interplanetary contamination by foreign microorganism" https://en.wikipedia.org/wiki/Monogastric,"A monogastric organism has a simple single-chambered stomach (one stomach). Examples of monogastric herbivores are horses and rabbits. Examples of monogastric omnivores include humans, pigs, hamsters and rats. Furthermore, there are monogastric carnivores such as cats. A monogastric organism is comparable to ruminant organisms (which has a four-chambered complex stomach), such as cattle, goats, or sheep. Herbivores with monogastric digestion can digest cellulose in their diets by way of symbiotic gut bacteria. However, their ability to extract energy from cellulose digestion is less efficient than in ruminants. Herbivores digest cellulose by microbial fermentation. Monogastric herbivores which can digest cellulose nearly as well as ruminants are called hindgut fermenters, while ruminants are called foregut fermenters. These are subdivided into two groups based on the relative size of various digestive organs in relationship to the rest of the system: colonic fermenters tend to be larger species such as horses and rhinos, and cecal fermenters are smaller animals such as rabbits and rodents. Great apes derive significant amounts of phytanic acid from the hindgut fermentation of plant materials. Monogastrics cannot digest the fiber molecule cellulose as efficiently as ruminants, though the ability to digest cellulose varies amongst species. A monogastric digestive system works as soon as the food enters the mouth. Saliva moistens the food and begins the digestive process. (Note that horses have no (or negligible amounts of) amylase in their saliva). After being swallowed, the food passes from the esophagus into the stomach, where stomach acid and enzymes help to break down the food. Once food leaves the stomach and enters the small intestine, the pancreas secretes enzymes and alkali to neutralize the stomach acid." 
https://en.wikipedia.org/wiki/Jarman%E2%80%93Bell%20principle,"The Jarman–Bell principle is a concept in ecology that the food quality of a herbivore's intake decreases as the size of the herbivore increases, but the amount of such food increases to counteract the low quality foods. It operates by observing the allometric (non- linear scaling) properties of herbivores. The principle was coined by P.J Jarman (1968.) and R.H.V Bell (1971). Large herbivores can subsist on low quality food. Their gut size is larger than smaller herbivores. The increased size allows for better digestive efficiency, and thus allow viable consumption of low quality food. Small herbivores require more energy per unit of body mass compared to large herbivores. A smaller size, thus smaller gut size and lower efficiency, imply that these animals need to select high quality food to function. Their small gut limits the amount of space for food, so they eat low quantities of high quality diet. Some animals practice coprophagy, where they ingest fecal matter to recycle untapped/ undigested nutrients. However, the Jarman–Bell principle is not without exception. Small herbivorous members of mammals, birds and reptiles were observed to be inconsistent with the trend of small body mass being linked with high-quality food. There have also been disputes over the mechanism behind the Jarman–Bell principle; that larger body sizes does not increase digestive efficiency. The implications of larger herbivores ably subsisting on poor quality food compared smaller herbivores mean that the Jarman–Bell principle may contribute evidence for Cope's rule. Furthermore, the Jarman–Bell principle is also important by providing evidence for the ecological framework of ""resource partitioning, competition, habitat use and species packing in environments"" and has been applied in several studies. Links with allometry Allometry refers to the non-linear scaling factor of one variable with respect to another. The relationship between such variables is expressed as a power law, wher" https://en.wikipedia.org/wiki/Wigner%20distribution%20function,"The Wigner distribution function (WDF) is used in signal processing as a transform in time-frequency analysis. The WDF was first proposed in physics to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, and it is of importance in quantum mechanics in phase space (see, by way of comparison: Wigner quasi-probability distribution, also called the Wigner function or the Wigner–Ville distribution). Given the shared algebraic structure between position-momentum and time-frequency conjugate pairs, it also usefully serves in signal processing, as a transform in time-frequency analysis, the subject of this article. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function provides the highest possible temporal vs frequency resolution which is mathematically possible within the limitations of the uncertainty principle. The downside is the introduction of large cross terms between every pair of signal components and between positive and negative frequencies, which makes the original formulation of the function a poor fit for most analysis applications. Subsequent modifications have been proposed which preserve the sharpness of the Wigner distribution function but largely suppress cross terms. Mathematical definition There are several different definitions for the Wigner distribution function. 
The definition given here is specific to time-frequency analysis. Given the time series , its non-stationary auto-covariance function is given by where denotes the average over all possible realizations of the process and is the mean, which may or may not be a function of time. The Wigner function is then given by first expressing the autocorrelation function in terms of the average time and time lag , and then Fourier transforming the lag. So for a single (mean-zero) time series, the Wigner function is simply given by The motivation for the Wigner function is that it reduces to " https://en.wikipedia.org/wiki/Hawkboard,"The Hawkboard is a low-power, low-cost Single-board computer based on the Texas Instruments OMAP-L138. Along with the usage of the OMAP ARM9 processor, it also has a floating point DSP. It is a community supported development platform. As of date, Hawkboard project is closed because of common hardware issue. External links An Open community portal for Texas Instruments AM1808 / OMAPL138 platform — hawkboard.org" https://en.wikipedia.org/wiki/Quadrature%20%28geometry%29,"In mathematics, particularly in geometry, quadrature (also called squaring) is a historical process of drawing a square with the same area as a given plane figure or computing the numerical value of that area. A classical example is the quadrature of the circle (or squaring the circle). Quadrature problems served as one of the main sources of problems in the development of calculus. They introduce important topics in mathematical analysis. History Antiquity Greek mathematicians understood the determination of an area of a figure as the process of geometrically constructing a square having the same area (squaring), thus the name quadrature for this process. The Greek geometers were not always successful (see squaring the circle), but they did carry out quadratures of some figures whose sides were not simply line segments, such as the lune of Hippocrates and the parabola. By a certain Greek tradition, these constructions had to be performed using only a compass and straightedge, though not all Greek mathematicians adhered to this dictum. For a quadrature of a rectangle with the sides a and b it is necessary to construct a square with the side (the geometric mean of a and b). For this purpose it is possible to use the following: if one draws the circle with diameter made from joining line segments of lengths a and b, then the height (BH in the diagram) of the line segment drawn perpendicular to the diameter, from the point of their connection to the point where it crosses the circle, equals the geometric mean of a and b. A similar geometrical construction solves the problems of quadrature of a parallelogram and of a triangle. Problems of quadrature for curvilinear figures are much more difficult. The quadrature of the circle with compass and straightedge was proved in the 19th century to be impossible. Nevertheless, for some figures a quadrature can be performed. The quadratures of the surface of a sphere and a parabola segment discovered by Archimedes became the" https://en.wikipedia.org/wiki/Aperture%20%28computer%20memory%29,"In computing, an aperture is a portion of physical address space (i.e. physical memory) that is associated with a particular peripheral device or a memory unit. Apertures may reach external devices such as ROM or RAM chips, or internal memory on the CPU itself. 
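A minimal numerical version of the Wigner distribution described above, for a single deterministic signal: form the lag product x(t+τ)x*(t−τ) and Fourier-transform over the lag. Scaling and frequency-axis conventions vary between references; this sketch follows one simple choice and is not taken from the article.

import numpy as np

def wigner_ville(x):
    # rows = time samples, columns = frequency bins spanning 0 .. fs/2
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        M = min(n, N - 1 - n)                 # largest symmetric lag inside the signal
        taus = np.arange(-M, M + 1)
        r = np.zeros(N, dtype=complex)
        r[taus % N] = x[n + taus] * np.conj(x[n - taus])
        W[n] = np.fft.fft(r).real             # real because r(-m) = conj(r(m))
    return W

t = np.linspace(0, 1, 256, endpoint=False)
chirp = np.exp(1j * 2 * np.pi * (20 * t + 40 * t ** 2))   # analytic linear chirp
W = wigner_ville(chirp)                                   # energy concentrates along the chirp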
Typically, a memory device attached to a computer accepts addresses starting at zero, and so a system with more than one such device would have ambiguous addressing. To resolve this, the memory logic will contain several aperture selectors, each containing a range selector and an interface to one of the memory devices. The set of selector address ranges of the apertures are disjoint. When the CPU presents a physical address within the range recognized by an aperture, the aperture unit routes the request (with the address remapped to a zero base) to the attached device. Thus, apertures form a layer of address translation below the level of the usual virtual-to-physical mapping. See also Address bus AGP aperture Memory-mapped I/O External links Flash Memory Solutions Computer memory Computer architecture" https://en.wikipedia.org/wiki/Heath-Brown%E2%80%93Moroz%20constant,"The Heath-Brown–Moroz constant C, named for Roger Heath-Brown and Boris Moroz, is defined as where p runs over the primes. Application This constant is part of an asymptotic estimate for the distribution of rational points of bounded height on the cubic surface X03=X1X2X3. Let H be a positive real number and N(H) the number of solutions to the equation X03=X1X2X3 with all the Xi non-negative integers less than or equal to H and their greatest common divisor equal to 1. Then" https://en.wikipedia.org/wiki/Nutritional%20science,"Nutritional science (also nutrition science, sometimes short nutrition, dated trophology) is the science that studies the physiological process of nutrition (primarily human nutrition), interpreting the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism. History Before nutritional science emerged as an independent study disciplines, mainly chemists worked in this area. The chemical composition of food was examined. Macronutrients, especially protein, fat and carbohydrates, have been the focus components of the study of (human) nutrition since the 19th century. Until the discovery of vitamins and vital substances, the quality of nutrition was measured exclusively by the intake of nutritional energy. The early years of the 20th century were summarized by Kenneth John Carpenter in his Short History of Nutritional Science as ""the vitamin era"". The first vitamin was isolated and chemically defined in 1926 (thiamine). The isolation of vitamin C followed in 1932 and its effects on health, the protection against scurvy, was scientifically documented for the first time. At the instigation of the British physiologist John Yudkin at the University of London, the degrees Bachelor of Science and Master of Science in nutritional science were established in the 1950s. Nutritional science as a separate discipline was institutionalized in Germany in November 1956 when Hans-Diedrich Cremer was appointed to the chair for human nutrition in Giessen. The Institute for Nutritional Science was initially located at the Academy for Medical Research and Further Education, which was transferred to the Faculty of Human Medicine when the Justus Liebig University was reopened. Over time, seven other universities with similar institutions followed in Germany. From the 1950s to 1970s, a focus of nutritional science was on dietary fat and sugar. 
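The address decoding described in the aperture excerpt above can be sketched as a set of disjoint range selectors, each remapping a physical address to a zero-based offset into its device. The class and device names below are illustrative, not drawn from any particular hardware.

class Aperture:
    def __init__(self, base, size, device):
        self.base, self.size, self.device = base, size, device   # one selector per device

class Bus:
    def __init__(self, apertures):
        self.apertures = apertures            # selector ranges assumed disjoint

    def read(self, phys_addr):
        for ap in self.apertures:
            if ap.base <= phys_addr < ap.base + ap.size:
                return ap.device[phys_addr - ap.base]   # device sees a zero-based address
        raise ValueError(f"no aperture decodes address {phys_addr:#x}")

ram = bytearray(0x1000)                       # a RAM chip answering offsets 0..0xFFF
rom = bytes(range(256)) * 16                  # a 4 KiB ROM, likewise zero-based
bus = Bus([Aperture(0x0000_0000, 0x1000, ram),
           Aperture(0xFFFF_F000, 0x1000, rom)])
print(bus.read(0xFFFF_F005))                  # ROM byte at internal offset 5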
From the 1970s to the 1990s, attention was put on diet-related chronic diseas" https://en.wikipedia.org/wiki/List%20of%20group%20theory%20topics,"In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right. Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography. Structures and operations Central extension Direct product of groups Direct sum of groups Extension problem Free abelian group Free group Free product Generating set of a group Group cohomology Group extension Presentation of a group Product of group subsets Schur multiplier Semidirect product Sylow theorems Hall subgroup Wreath product Basic properties of groups Butterfly lemma Center of a group Centralizer and normalizer Characteristic subgroup Commutator Composition series Conjugacy class Conjugate closure Conjugation of isometries in Euclidean space Core (group) Coset Derived group Euler's theorem Fitting subgroup Generalized Fitting subgroup Hamiltonian group Identity element Lagrange's theorem Multiplicative inverse Normal subgroup Perfect group p-core Schreier refinement theorem Subgroup Transversal (combinatorics) Torsion subgroup Zassenhaus lemma Group homomorphisms Automorphism Automorphism group Factor group Fundamental theorem on homomorphisms Group homomorphism Group isomorphism Homomorphism Isomorphism theorem Inner automorphism Order auto" https://en.wikipedia.org/wiki/Sociome,"The Sociome is a concept used by scientists in Biology and Sociology referring to the dimensions of existence that are social. The term is also an indication of the convergence of systems biology and the study of society as a complex system that has begun to occur among early 21st Century scientists. Just as the phenome is typically thought of as the set of expressed phenotypes of an organism, the sociome can be thought of as the set of observed characteristics of societies. For example, while all societies consisting of humans might be thought of as having the potential to become egalitarian social democracies, not all observed societies are egalitarian or social democracies. Thus, the sociome can also be thought of indirectly as an ideal type of the unrealized potential of any given organization of social beings. Origin of term The first known usage of the term sociome was in 2001 by Daichi Kamiyama. The term has also been utilized by sociologist Adam Thomas Perzynski. The two scientists differ in their usage. Kamiyama's study describes a new scientific ""era of the sociome (Sociology[+ome])"" characterized by the study of the social activities of molecules. This usage is an anthropomorphism of social behavior, wherein molecules are described as having the ability to socialize. Perzynski's social scientific usage varies from this considerably. 
While Sociology is the study of society, behavior and social relationships, the sociome is the characterization and quantification of patterns, variables, activities, relationships and attributes across all societies that exist and can be studied. The suffix -ome has been used primarily in biology, as in genome, proteome, microbiome, metabolome and phenome. Basu and colleagues have used the term sociome to refer to a sort of standardized approach to the characterization of geocoded social attributes (e.g. neighborhood level). In 2014, Del Savio and colleagues discussed the blurring of the boundaries between disciplines, and" https://en.wikipedia.org/wiki/Clarifying%20agent,"Clarifying agents are used to remove suspended solids from liquids by inducing flocculation, causing the solids to form larger aggregates that can be easily removed after they either float to the surface or sink to the bottom of the containment vessel. Process Particles finer than 0.1 µm (10−7m) in water remain continuously in motion due to electrostatic charge (often negative) which causes them to repel each other. Once their electrostatic charge is neutralized by the use of a coagulant chemical, the finer particles start to collide and agglomerate (collect together) under the influence of Van der Waals forces. These larger and heavier particles are called flocs. Flocculants, or flocculating agents (also known as flocking agents), are chemicals that promote flocculation by causing colloids and other suspended particles in liquids to aggregate, forming a floc. Flocculants are used in water treatment processes to improve the sedimentation or filterability of small particles. For example, a flocculant may be used in swimming pool or drinking water filtration to aid removal of microscopic particles which would otherwise cause the water to be turbid (cloudy) and which would be difficult or impossible to remove by filtration alone. Many flocculants are multivalent cations such as aluminium, iron, calcium or magnesium. These positively charged molecules interact with negatively charged particles and molecules to reduce the barriers to aggregation. In addition, many of these chemicals, under appropriate pH and other conditions such as temperature and salinity, react with water to form insoluble hydroxides which, upon precipitating, link together to form long chains or meshes, physically trapping small particles into the larger floc. Long-chain polymer flocculants, such as modified polyacrylamides, are manufactured and sold by the flocculant producing business. These can be supplied in dry or liquid form for use in the flocculation process. The most common liquid polyac" https://en.wikipedia.org/wiki/Sooraj%20Surendran,"Sooraj Surendran is an Indian technologist and electronic engineering graduate from Anna University. He has made significant contributions to motorized wheelchair deployment in Tamil Nadu. Early life and education Surendran was born in Kollam, Kerala, India. His mother Sudha was a housewife and his father was K Surendran Pillai. He completed schooling in Sree Buddha, a Central Board of Secondary Education school in Karunagappalli, Kerala. He earned his degree from Anna University in electronic engineering. Career Surendran graduated from Anna University with a BTech in electronic engineering in 2011. 
He worked on motorized wheelchair design and nursing care bed electronic unit design, and developed an electronic system for nursing care beds that integrated Bluetooth technology to control the functions of a nursing care bed via an Android application. Surendran was invited to help develop electronic control units for lightweight motorized wheelchairs as a part of a Tamil Nadu program to distribute motorized wheelchairs to 2,000 people." https://en.wikipedia.org/wiki/Sums%20of%20powers,"In mathematics and statistics, sums of powers occur in a number of contexts: Sums of squares arise in many contexts. For example, in geometry, the Pythagorean theorem involves the sum of two squares; in number theory, there are Legendre's three-square theorem and Jacobi's four-square theorem; and in statistics, the analysis of variance involves summing the squares of quantities. Faulhaber's formula expresses as a polynomial in , or alternatively in terms of a Bernoulli polynomial. Fermat's right triangle theorem states that there is no solution in positive integers for and . Fermat's Last Theorem states that is impossible in positive integers with . The equation of a superellipse is . The squircle is the case , . Euler's sum of powers conjecture (disproved) concerns situations in which the sum of integers, each a th power of an integer, equals another th power. The Fermat-Catalan conjecture asks whether there are an infinitude of examples in which the sum of two coprime integers, each a power of an integer, with the powers not necessarily equal, can equal another integer that is a power, with the reciprocals of the three powers summing to less than 1. Beal's conjecture concerns the question of whether the sum of two coprime integers, each a power greater than 2 of an integer, with the powers not necessarily equal, can equal another integer that is a power greater than 2. The Jacobi–Madden equation is in integers. The Prouhet–Tarry–Escott problem considers sums of two sets of th powers of integers that are equal for multiple values of . A taxicab number is the smallest integer that can be expressed as a sum of two positive third powers in distinct ways. The Riemann zeta function is the sum of the reciprocals of the positive integers each raised to the power , where is a complex number whose real part is greater than 1. The Lander, Parkin, and Selfridge conjecture concerns the minimal value of in Waring's problem asks whether for every natural number ther" https://en.wikipedia.org/wiki/Anti-hijack%20system,"An anti-hijack system is an electronic system fitted to motor vehicles to deter criminals from hijacking them. Although these types of systems are becoming more common on newer cars, they have not caused a decrease in insurance premiums as they are not as widely known as other more common anti-theft systems such as alarms or steering locks. It can also be a part of an alarm or immobiliser system. An approved anti-hijacking system will achieve a safe, quick shutdown of the vehicle it is attached to. There are also mechanical anti-hijack devices. Diversify Solutions, a company in South Africa, has announced its research and development at the Nelson Mandela University of a GSM based Anti hijacking system. The system works off a verification process with added features such as alcohol sensors and signal jamming capabilities, this comes after increasing rates of hijackings in South Africa and alarming rates of accidents caused by driving under the influence and texting whilst driving. 
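The taxicab-number entry in the sums-of-powers list above has a classic first case, 1729 = 1^3 + 12^3 = 9^3 + 10^3, which the short search below recovers; the helper function is mine.

from itertools import count

def cube_pairs(n):
    # all ways to write n as a^3 + b^3 with 1 <= a <= b
    pairs, a = [], 1
    while 2 * a ** 3 <= n:
        c = round((n - a ** 3) ** (1 / 3))
        for b in (c - 1, c, c + 1):           # guard against float rounding
            if b >= a and a ** 3 + b ** 3 == n:
                pairs.append((a, b))
        a += 1
    return pairs

print(next(n for n in count(2) if len(cube_pairs(n)) >= 2))   # 1729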
Technology There are three basic principles on which the systems work. Lockout A lockout system is armed when the driver turns the ignition key to the on position and carries out a specified action, usually flicking a hidden switch or depressing the brake pedal twice. It is activated when the vehicle drops below a certain speed or becomes stationary, and will cause all of the vehicle's doors to automatically lock, to prevent against thieves stealing the vehicle when it is stopped, for example at a traffic light or pedestrian crossing. Transponder A transponder system is a system which is always armed until a device, usually a small RFID transponder, enters the vehicle's transmitter radius. Since the device is carried by the driver, usually in their wallet or pocket, if the driver leaves the immediate vicinity of the vehicle, so will the transponder, causing the system to assume the vehicle has been hijacked and disable it. As the transponder itself is concealed, the thief woul" https://en.wikipedia.org/wiki/Brown%20measure,"In mathematics, the Brown measure of an operator in a finite factor is a probability measure on the complex plane which may be viewed as an analog of the spectral counting measure (based on algebraic multiplicity) of matrices. It is named after Lawrence G. Brown. Definition Let be a finite factor with the canonical normalized trace and let be the identity operator. For every operator the function is a subharmonic function and its Laplacian in the distributional sense is a probability measure on which is called the Brown measure of Here the Laplace operator is complex. The subharmonic function can also be written in terms of the Fuglede−Kadison determinant as follows See also" https://en.wikipedia.org/wiki/Colonel%20Meow,"Colonel Meow (October 11, 2011 – January 29, 2014) was an American Himalayan–Persian crossbreed cat, who temporarily held the 2014 Guinness world record for the longest fur on a cat (nine inches or about 23 cm). He became an Internet celebrity when his owners posted pictures of his scowling face to Facebook and Instagram. He was lovingly known by his hundreds of thousands of followers as an ""adorable fearsome dictator"", a ""prodigious Scotch drinker"" and ""the angriest cat in the world"". Background Colonel Meow was rescued by Seattle Persian and Himalayan Rescue and was later adopted at a Petco by his owner Anne Avey. He rose to internet fame after his owner posted a picture of his angry-looking scowl to Facebook and Instagram. Health complications and death In November 2013, Colonel Meow was hospitalized due to heart problems and underwent a difficult surgery and blood transfusion. On January 30, 2014, his owner announced on Facebook that Colonel Meow had died. She also expressed gratitude for the support of his more than 350,000 followers. In July 2014, Friskies posted an ad entitled ""Cat Summer"" and announced that for each view they would donate one meal to needy cats in Colonel Meow's name. The video stars Grumpy Cat as well as other famous internet cats. See also Lil Bub List of individual cats Notes" https://en.wikipedia.org/wiki/List%20of%20United%20States%20regional%20mathematics%20competitions,"Many math competitions in the United States have regional restrictions. Of these, most are statewide. For a more complete list, please visit here . 
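The inline formulas in the Brown-measure excerpt above appear to have been lost in extraction; a hedged reconstruction of the standard definition, in my own notation, is:

\[
\lambda \;\longmapsto\; \tau\!\left(\log\lvert T - \lambda 1\rvert\right) = \log \Delta(T - \lambda 1),
\qquad \lambda \in \mathbb{C},
\]
is subharmonic for $T$ in a finite factor $(\mathcal{M}, \tau)$ with normalized trace $\tau$, and the Brown measure of $T$ is its distributional Laplacian
\[
\mu_T = \frac{1}{2\pi}\, \nabla_{\lambda}^{2}\, \tau\!\left(\log\lvert T - \lambda 1\rvert\right),
\]
where $\Delta(\cdot)$ denotes the Fuglede–Kadison determinant.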
The contests include: Alabama Alabama Statewide High School Mathematics Contest Virgil Grissom High School Math Tournament Vestavia Hills High School Math Tournament Arizona Great Plains Math League AATM State High School Contest California Bay Area Math Olympiad Lawrence Livermore National Laboratories Annual High School Math Challenge Cal Poly Math Contest and Trimathlon Polya Competition Bay Area Math Meet College of Creative Studies Math Competition LA Math Cup Math Day at the Beach hosted by CSULB Math Field Day for San Diego Middle Schools Mesa Day Math Contest at UC Berkeley Santa Barbara County Math Superbowl Pomona College Mathematical Talent Search Redwood Empire Mathematics Tournament hosted by Humboldt State (middle and high school) San Diego Math League and San Diego Math Olympiad hosted by the San Diego Math Circle Santa Clara University High School Mathematics Contest SC Mathematics Competition (SCMC) hosted by RSO@USC Stanford Mathematics Tournament UCSD/GSDMC High School Honors Mathematics Contest Colorado Colorado Mathematics Olympiad District of Columbia Moody's Mega Math Florida Florida-Stuyvesant Alumni Mathematics Competition David Essner Mathematics Competition James S. Rickards High School Fall Invitational FAMAT Regional Competitions: January Regional February Regional March Regional FGCU Math Competition Georgia Central Math Meet(grades 9 - 12) GA Council of Teachers of Mathematics State Varsity Math Tournament STEM Olympiads Of America Math, Science & Cyber Olympiads (grades 3 - 8) Valdosta State University Middle Grades Mathematics Competition Illinois ICTM math contest (grades 3–12) Indiana [IUPUI High School Math Contest] (grades 9–12) Huntington University Math Competition (grades 6–12) Indiana Math League IASP Academic Super Bowl Rose-Hulman H" https://en.wikipedia.org/wiki/List%20of%20repunit%20primes,"This is a list of repunit primes. Base 2 repunit primes Base-2 repunit primes are called Mersenne primes. Base 3 repunit primes The first few base-3 repunit primes are 13, 1093, 797161, 3754733257489862401973357979128773, 6957596529882152968992225251835887181478451547013 , corresponding to of 3, 7, 13, 71, 103, 541, 1091, 1367, 1627, 4177, 9011, 9551, 36913, 43063, 49681, 57917, 483611, 877843, 2215303, 2704981, 3598867, ... . Base 4 repunit primes The only base-4 repunit prime is 5 (). , and 3 always divides when n is odd and when n is even. For n greater than 2, both and are greater than 3, so removing the factor of 3 still leaves two factors greater than 1. Therefore, the number cannot be prime. Base 5 repunit primes The first few base-5 repunit primes are 31, 19531, 12207031, 305175781, 177635683940025046467781066894531, 14693679385278593849609206715278070972733319459651094018859396328480215743184089660644531, 35032461608120426773093239582247903282006548546912894293926707097244777067146515037165954709053039550781, 815663058499815565838786763657068444462645532258620818469829556933715405574685778402862015856733535201783524826169013977050781 , corresponding to of 3, 7, 11, 13, 47, 127, 149, 181, 619, 929, 3407, 10949, 13241, 13873, 16519, 201359, 396413, 1888279, 3300593, ... . Base 6 repunit primes The first few base-6 repunit primes are 7, 43, 55987, 7369130657357778596659, 3546245297457217493590449191748546458005595187661976371, 133733063818254349335501779590081460423013416258060407531857720755181857441961908284738707408499507 , corresponding to of 2, 3, 7, 29, 71, 127, 271, 509, 1049, 6389, 6883, 10613, 19889, 79987, 608099, 1365019, 3360347, ... . 
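The repunit lists above are easy to reproduce for small exponents: the base-b repunit with n ones is (b^n − 1)/(b − 1). The snippet below recovers the first base-3 entries; the trial-division primality test is only intended for these small values.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def repunit(base: int, n: int) -> int:
    return (base ** n - 1) // (base - 1)      # n repeated 1-digits in the given base

print([n for n in range(2, 20) if is_prime(repunit(3, n))])   # [3, 7, 13]
print(repunit(3, 3), repunit(3, 7), repunit(3, 13))           # 13 1093 797161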
Base 7 repunit primes The first few base-7 repunit primes are 2801, 16148168401, 85053461164796801949539541639542805770666392330682673302530819774105141531698707146930307290253537320447270457,1385022127101034087007743810331355039266633249933176317292277906573251633103418332277759454260526" https://en.wikipedia.org/wiki/TI%20StarterWare,"StarterWare was initially developed by TI as a free software package catering to their arm A8 and A9 microprocessors. Its primary purpose was to offer drivers and libraries with a consistent API tailored for processors within these microprocessor families. The package encompassed utilities and illustrative use cases across various applications. Despite TI's diminished active backing, the software lingers in open-source repositories on GitHub, primarily upholding support for widely used beagle boards that make use of these processors. This software collection closely aligns with what many chip manufacturers refer to as a HAL (Hardware Abstraction Layer). In TI's context, it's termed DAL (Device Abstraction Layer). Its role revolves around furnishing fundamental functionalities and an API that an operating system can conveniently adapt to. For those inclined to create baremetal programs by directly engaging with the starterware API, the package also offered documentation and assistance. Texas Instruments Embedded systems System software" https://en.wikipedia.org/wiki/Uniqueness%20theorem,"In mathematics, a uniqueness theorem, also called a unicity theorem, is a theorem asserting the uniqueness of an object satisfying certain conditions, or the equivalence of all objects satisfying the said conditions. Examples of uniqueness theorems include: Alexandrov's uniqueness theorem of three-dimensional polyhedra Black hole uniqueness theorem Cauchy–Kowalevski theorem is the main local existence and uniqueness theorem for analytic partial differential equations associated with Cauchy initial value problems. Cauchy–Kowalevski–Kashiwara theorem is a wide generalization of the Cauchy–Kowalevski theorem for systems of linear partial differential equations with analytic coefficients. Division theorem, the uniqueness of quotient and remainder under Euclidean division. Fundamental theorem of arithmetic, the uniqueness of prime factorization. Holmgren's uniqueness theorem for linear partial differential equations with real analytic coefficients. Picard–Lindelöf theorem, the uniqueness of solutions to first-order differential equations. Thompson uniqueness theorem in finite group theory Uniqueness theorem for Poisson's equation Electromagnetism uniqueness theorem for the solution of Maxwell's equation Uniqueness case in finite group theory The word unique is sometimes replaced by essentially unique, whenever one wants to stress that the uniqueness is only referred to the underlying structure, whereas the form may vary in all ways that do not affect the mathematical content. A uniqueness theorem (or its proof) is, at least within the mathematics of differential equations, often combined with an existence theorem (or its proof) to a combined existence and uniqueness theorem (e.g., existence and uniqueness of solution to first-order differential equations with boundary condition). See also Existence theorem Rigidity (mathematics) Uniqueness quantification" https://en.wikipedia.org/wiki/Librem,"Librem is a line of computers manufactured by Purism, SPC featuring free (libre) software. 
The laptop line is designed to protect privacy and freedom by providing no non-free (proprietary) software in the operating system or kernel, avoiding the Intel Active Management Technology, and gradually freeing and securing firmware. Librem laptops feature hardware kill switches for the microphone, webcam, Bluetooth and Wi-Fi. Models Laptops Librem 13, Librem 15 and Librem 14 In 2014, Purism launched a crowdfunding campaign on Crowd Supply to fund the creation and production of the Librem 15 laptop, conceived as a modern alternative to existing open-source hardware laptops, all of which used older hardware. The 15 in the name refers to its 15-inch screen size. The campaign succeeded after extending the original campaign, and the laptops were shipped to backers. In a second revision of the laptop, hardware kill switches for the camera, microphone, Wi-Fi, and Bluetooth were added. After the successful launch of the Librem 15, Purism created another campaign on Crowd Supply for a 13-inch laptop called the Librem 13, which also came with hardware kill switches similar to those on the Librem 15v2. The campaign was again successful and the laptops were shipped to customers. Purism announced in December 2016 that it would start shipping from inventory rather than building to order with the new batches of Librem 15 and 13. Purism has one laptop model in production, the Librem 14 (version 1, US$1,370). Comparison of laptops Librem Mini The Librem Mini is a small form factor desktop computer, which began shipping in June 2020. Librem 5 On August 24, 2017, Purism started a crowdfunding campaign for the Librem 5, a smartphone aimed at running 100% free software, which would ""[focus] on security by design and privacy protection by default"". Purism claimed that the phone would become ""the world's first ever IP-native mobile handset, using end-to-end encrypted decentralized communica" https://en.wikipedia.org/wiki/Human%20waste,"Human waste (or human excreta) refers to the waste products of the human digestive system, menses, and human metabolism including urine and feces. As part of a sanitation system that is in place, human waste is collected, transported, treated and disposed of or reused by one method or another, depending on the type of toilet being used, ability by the users to pay for services and other factors. Fecal sludge management is used to deal with fecal matter collected in on-site sanitation systems such as pit latrines and septic tanks. The sanitation systems in place differ vastly around the world, with many people in developing countries having to resort to open defecation where human waste is deposited in the environment, for lack of other options. Improvements in ""water, sanitation and hygiene"" (WASH) around the world are a key public health issue within international development and are the focus of Sustainable Development Goal 6. People in developed countries tend to use flush toilets where the human waste is mixed with water and transported to sewage treatment plants. Children's excreta can be disposed of in diapers and mixed with municipal solid waste. Diapers are also sometimes dumped directly into the environment, leading to public health risks. Terminology The term ""human waste"" is used in the general media to mean several things, such as sewage, sewage sludge, blackwater - in fact anything that may contain some human feces. In the stricter sense of the term, human waste is in fact human excreta, i.e. urine and feces, with or without water being mixed in. 
For example, dry toilets collect human waste without the addition of water. Health aspects Human waste is considered a biowaste, as it is a vector for both viral and bacterial diseases. It can be a serious health hazard if it gets into sources of drinking water. The World Health Organization (WHO) reports that nearly 2.2 million people die annually from diseases caused by contaminated water, such as cho" https://en.wikipedia.org/wiki/SEA-PHAGES,"SEA-PHAGES stands for Science Education Alliance-Phage Hunters Advancing Genomics and Evolutionary Science; it was formerly called the National Genomics Research Initiative. This was the first initiative launched by the Howard Hughes Medical Institute (HHMI) Science Education Alliance (SEA) by their director Tuajuanda C. Jordan in 2008 to improve the retention of Science, technology, engineering, and mathematics (STEM) students. SEA-PHAGES is a two-semester undergraduate research program administered by the University of Pittsburgh's Graham Hatfull's group and the Howard Hughes Medical Institute's Science Education Division. Students from over 100 universities nationwide engage in authentic individual research that includes a wet-bench laboratory and a bioinformatics component. Curriculum During the first semester of this program, classes of around 18-24 undergraduate students work under the supervision of one or two university faculty members and a graduate student assistant—who have completed two week-long training workshops—to isolate and characterize their own personal bacteriophage that infects a specific bacterial host cell from local soil samples. Once students have successfully isolated a phage, they are able to classify them by visualizing them through Electron microscope (EM) images. Also, DNA is extracted and purified by the students, and one sample is sent for sequencing to be ready for the second semester's curriculum. The second semester consists of the annotation of the genome the class sent to be sequenced. In that case, students work together to evaluate the genes for start-stop coordinates, ribosome-binding sites, and possible functions of those proteins in which the sequence codes. Once the annotation is completed, it is submitted to the National Center for Biotechnology Information's (NCBI) DNA sequence database GenBank. If there is still time in the semester or the sent DNA was not able to be sequenced, the class could request genome file fro" https://en.wikipedia.org/wiki/Directory-based%20coherence,"Directory-based coherence is a mechanism to handle Cache coherence problem in Distributed shared memory (DSM) a.k.a. Non-Uniform Memory Access (NUMA). Another popular way is to use a special type of computer bus between all the nodes as a ""shared bus"" (a.k.a. System bus). Directory-based coherence uses a special directory to serve instead of the shared bus in the bus-based coherence protocols. Both of these designs use the corresponding medium (i.e. directory or bus) as a tool to facilitate the communication between different nodes, and to guarantee that the coherence protocol is working properly along all the communicating nodes. 
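To make the directory idea concrete before its details and trade-offs are discussed below, here is a minimal Python sketch (an illustration only, not any particular protocol): the directory records, for each memory block, its coherence state and the set of sharer nodes, so requests and invalidations can be sent only to the nodes that actually hold the block.

# Toy sketch of a coherence directory; real protocols track more detail (e.g. MESI-style states).
from dataclasses import dataclass, field

@dataclass
class DirectoryEntry:
    state: str = "Uncached"                      # e.g. "Uncached", "Shared", or "Modified"
    sharers: set = field(default_factory=set)    # node IDs currently caching the block

class Directory:
    def __init__(self, num_blocks):
        self.entries = [DirectoryEntry() for _ in range(num_blocks)]

    def read_request(self, block, node):
        """A node asks to read a block: record it as a sharer instead of broadcasting."""
        entry = self.entries[block]
        entry.sharers.add(node)
        if entry.state == "Uncached":
            entry.state = "Shared"
        return entry.state

    def write_request(self, block, node):
        """A node asks to write: invalidations go only to the recorded sharers."""
        entry = self.entries[block]
        invalidate = entry.sharers - {node}      # only the interested nodes are contacted
        entry.sharers = {node}
        entry.state = "Modified"
        return invalidate

d = Directory(num_blocks=4)
d.read_request(0, node=1)
d.read_request(0, node=2)
print(d.write_request(0, node=1))                # {2}: only node 2 needs an invalidation message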
In directory-based cache coherence, this is done by using the directory to keep track of the status of all cache blocks; the status of each block includes which cache coherence ""state"" that block is in, and which nodes are sharing that block at that time. This information eliminates the need to broadcast signals to all nodes: they are sent only to the nodes that are interested in that single block. The following are a few advantages and disadvantages of the directory-based cache coherence protocol: Scalability: This is one of the strongest motivations for moving to directory-based designs. Scalability, in short, is how well a specific system handles a growing amount of work. By this criterion, bus-based systems cannot do well, because the shared bus that all nodes use at the same time becomes a limiting resource. For a relatively small number of nodes, bus systems can do well; however, as the number of nodes grows, problems arise, especially since only one node is allowed to use the bus at a time, which significantly harms the performance of the overall system. Using directory-based systems, on the other hand, there is no such bottleneck to constrain the scalability of the system. Simplicity: This is one of the points where the " https://en.wikipedia.org/wiki/List%20of%20set%20theory%20topics,"This page is a list of articles related to set theory. Articles on individual set theory topics Lists related to set theory Glossary of set theory List of large cardinal properties List of properties of sets of reals List of set identities and relations Set theorists Societies and organizations Association for Symbolic Logic The Cabal Topics Set theory" https://en.wikipedia.org/wiki/List%20of%20mathematics%20history%20topics,"This is a list of mathematics history topics, by Wikipedia page. See also list of mathematicians, timeline of mathematics, history of mathematics, list of publications in mathematics. 1729 (anecdote) Adequality Archimedes Palimpsest Archimedes' use of infinitesimals Arithmetization of analysis Brachistochrone curve Chinese mathematics Cours d'Analyse Edinburgh Mathematical Society Erlangen programme Fermat's Last Theorem Greek mathematics Thomas Little Heath Hilbert's problems History of topos theory Hyperbolic quaternion Indian mathematics Islamic mathematics Italian school of algebraic geometry Kraków School of Mathematics Law of Continuity Lwów School of Mathematics Nicolas Bourbaki Non-Euclidean geometry Scottish Café Seven bridges of Königsberg Spectral theory Synthetic geometry Tautochrone curve Unifying theories in mathematics Waring's problem Warsaw School of Mathematics Academic positions Lowndean Professor of Astronomy and Geometry Lucasian professor Rouse Ball Professor of Mathematics Sadleirian Chair See also History" https://en.wikipedia.org/wiki/Processing%20delay,"In a network based on packet switching, processing delay is the time it takes routers to process the packet header. Processing delay is a key component in network delay. During processing of a packet, routers may check for bit-level errors in the packet that occurred during transmission as well as determine where the packet's next destination is. Processing delays in high-speed routers are typically on the order of microseconds or less. After this nodal processing, the router directs the packet to the queue where further delay can happen (queuing delay). 
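For orientation, the standard textbook decomposition of per-hop delay (not stated explicitly in this excerpt) places processing delay alongside the other components: d_nodal = d_proc + d_queue + d_trans + d_prop, where the last two terms are the transmission and propagation delays.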
In the past, the processing delay has been ignored as insignificant compared to the other forms of network delay. However, in some systems, the processing delay can be quite large especially where routers are performing complex encryption algorithms and examining or modifying packet content. Deep packet inspection done by some networks examine packet content for security, legal, or other reasons, which can cause very large delay and thus is only done at selected inspection points. Routers performing network address translation also have higher than normal processing delay because those routers need to examine and modify both incoming and outgoing packets. See also Latency (engineering)" https://en.wikipedia.org/wiki/Cross-covariance,"In probability and statistics, given two stochastic processes and , the cross-covariance is a function that gives the covariance of one process with the other at pairs of time points. With the usual notation for the expectation operator, if the processes have the mean functions and , then the cross-covariance is given by Cross-covariance is related to the more commonly used cross-correlation of the processes in question. In the case of two random vectors and , the cross-covariance would be a matrix (often denoted ) with entries Thus the term cross-covariance is used in order to distinguish this concept from the covariance of a random vector , which is understood to be the matrix of covariances between the scalar components of itself. In signal processing, the cross-covariance is often called cross-correlation and is a measure of similarity of two signals, commonly used to find features in an unknown signal by comparing it to a known one. It is a function of the relative time between the signals, is sometimes called the sliding dot product, and has applications in pattern recognition and cryptanalysis. Cross-covariance of random vectors Cross-covariance of stochastic processes The definition of cross-covariance of random vectors may be generalized to stochastic processes as follows: Definition Let and denote stochastic processes. Then the cross-covariance function of the processes is defined by: where and . If the processes are complex-valued stochastic processes, the second factor needs to be complex conjugated: Definition for jointly WSS processes If and are a jointly wide-sense stationary, then the following are true: for all , for all and for all By setting (the time lag, or the amount of time by which the signal has been shifted), we may define . The cross-covariance function of two jointly WSS processes is therefore given by: which is equivalent to . Uncorrelatedness Two stochastic processes and are called uncorrelated i" https://en.wikipedia.org/wiki/Apache%20Celix,"Apache Celix is an open-source implementation of the OSGi specification adapted to C and C++ developed by the Apache Software Foundation. The project aims to provide a framework to develop (dynamic) modular software applications using component and/or service-oriented programming. Apache Celix is primarily developed in C and adds an additional abstraction, in the form of a library, to support for C++. Modularity in Apache Celix is achieved by supporting - run-time installed - bundles. Bundles are zip files and can contain software modules in the form of shared libraries. Modules can provide and request dynamic services, for and from other modules, by interacting with a provided bundle context. 
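As a language-agnostic illustration of the bundle-context pattern just described (this is not the Celix C API, which is characterized next in terms of structs of function pointers), the toy Python registry below shows how one module can provide a named service and another can look it up dynamically; all names here are invented for the example.

# Toy service registry, loosely modelled on the register/lookup role of a bundle context.
class BundleContext:
    def __init__(self):
        self._services = {}

    def register_service(self, name, service):
        self._services.setdefault(name, []).append(service)

    def get_service(self, name):
        providers = self._services.get(name)
        return providers[-1] if providers else None   # newest provider wins in this toy version

class GreeterService:
    def greet(self, who):
        return f"hello, {who}"

ctx = BundleContext()
ctx.register_service("greeter", GreeterService())     # a providing module registers a service
svc = ctx.get_service("greeter")                       # a requesting module looks it up
print(svc.greet("celix"))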
Services in Apache Celix are ""plain old"" structs with function pointers or ""plain old C++ Objects"" (POCO). History Apache Celix was welcomed in the Apache Incubator at November 2010 and graduated to Top Level Project from the Apache Incubator in July 2014." https://en.wikipedia.org/wiki/Scanning%20mobility%20particle%20sizer,"A scanning mobility particle sizer (SMPS) is an analytical instrument that measures the size and number concentration of aerosol particles with diameters from 2.5 nm to 1000 nm. They employ a continuous, fast-scanning technique to provide high-resolution measurements. Applications The particles that are investigated can be of biological or chemical nature. The instrument can be used for air quality measurement indoors, vehicle exhaust, research in bioaerosols, atmospheric studies, and toxicology testing." https://en.wikipedia.org/wiki/Perfect%20fluid,"In physics, a perfect fluid or ideal fluid is a fluid that can be completely characterized by its rest frame mass density and isotropic pressure p. Real fluids are ""sticky"" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction. Quark–gluon plasma is the closest known substance to a perfect fluid. In space-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form where U is the 4-velocity vector field of the fluid and where is the metric tensor of Minkowski spacetime. In time-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form where U is the 4-velocity of the fluid and where is the metric tensor of Minkowski spacetime. This takes on a particularly simple form in the rest frame where is the energy density and is the pressure of the fluid. Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular, quantization, to be applied to fluids. Perfect fluids are used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe. In general relativity, the expression for the stress–energy tensor of a perfect fluid is written as where U is the 4-velocity vector field of the fluid and where is the inverse metric, written with a space-positive signature. See also Equation of state Ideal gas Fluid solutions in general relativity Potential flow" https://en.wikipedia.org/wiki/Orban%20%28audio%20processing%29,"Orban is an international company making audio processors for radio, television and Internet broadcasters. It has been operating since founder Bob Orban sold his first product in 1967. The company was originally based in San Francisco, California. History The Orban company started in 1967 when Bob Orban built and sold his first product, a stereo synthesizer, to WOR-FM in New York City, a year before Orban earned his master's degree from Stanford University. He teamed with synthesizer pioneers Bernie Krause and Paul Beaver to promote his products. In 1970, Orban established manufacturing and design in San Francisco. Bob Orban partnered with John Delantoni to form Orban Associates in 1975. The company was bought by Harman International in 1989, and the firm moved to nearby San Leandro in 1991. 
In 2000, Orban was bought by Circuit Research Labs (CRL) who moved manufacturing to Tempe, Arizona, in 2005, keeping the design team in the San Francisco Bay Area. Orban expanded into Germany in 2006 by purchasing Dialog4 System Engineering in Ludwigsburg. Orban USA acquired the company in 2009, based in Arizona. The Orban company was acquired by Daysequerra in 2016, moving manufacturing to New Jersey. In 2020, Orban Labs consolidated divisions and streamlined operations, with Orban Europe GmbH assuming responsibility for all Orban product sales worldwide. Over its years of trading, the Orban company has released many well-known audio-processing products, including the Orban Optimod 8000, which was the first audio processor to include FM processing and a stereo generator under one package, an innovative idea at the time, as no other processor took into account 75 μs pre-emphasis curve employed by FM, which leads to low average modulation and many peaks. This was followed by the Orban Optimod 8100, which went on to become the company's most successful product, and the Orban Optimod 8200, the first successful digital signal processor. It was entirely digital and featured a two" https://en.wikipedia.org/wiki/Hindgut%20fermentation,"Hindgut fermentation is a digestive process seen in monogastric herbivores, animals with a simple, single-chambered stomach. Cellulose is digested with the aid of symbiotic bacteria. The microbial fermentation occurs in the digestive organs that follow the small intestine: the large intestine and cecum. Examples of hindgut fermenters include proboscideans and large odd-toed ungulates such as horses and rhinos, as well as small animals such as rodents, rabbits and koalas. In contrast, foregut fermentation is the form of cellulose digestion seen in ruminants such as cattle which have a four-chambered stomach, as well as in sloths, macropodids, some monkeys, and one bird, the hoatzin. Cecum Hindgut fermenters generally have a cecum and large intestine that are much larger and more complex than those of a foregut or midgut fermenter. Research on small cecum fermenters such as flying squirrels, rabbits and lemurs has revealed these mammals to have a GI tract about 10-13 times the length of their body. This is due to the high intake of fiber and other hard to digest compounds that are characteristic to the diet of monogastric herbivores. Unlike in foregut fermenters, the cecum is located after the stomach and small intestine in monogastric animals, which limits the amount of further digestion or absorption that can occur after the food is fermented. Large intestine In smaller hindgut fermenters of the order Lagomorpha (rabbits, hares, and pikas), cecotropes formed in the cecum are passed through the large intestine and subsequently reingested to allow another opportunity to absorb nutrients. Cecotropes are surrounded by a layer of mucus which protects them from stomach acid but which does not inhibit nutrient absorption in the small intestine. Coprophagy is also practiced by some rodents, such as the capybara, guinea pig and related species, and by the marsupial common ringtail possum. This process is also beneficial in allowing for restoration of the microflora pop" https://en.wikipedia.org/wiki/Soft-bodied%20organism,"Soft-bodied organisms are animals that lack skeletons. The group roughly corresponds to the group Vermes as proposed by Carl von Linné. 
All animals have muscles but, since muscles can only pull, never push, a number of animals have developed hard parts that the muscles can pull on, commonly called skeletons. Such skeletons may be internal, as in vertebrates, or external, as in arthropods. However, many animals groups do very well without hard parts. This include animals such as earthworms, jellyfish, tapeworms, squids and an enormous variety of animals from almost every part of the kingdom Animalia. Commonality Most soft-bodied animals are small, but they do make up the majority of the animal biomass. If we were to weigh up all animals on Earth with hard parts against soft-bodied ones, estimates indicate that the biomass of soft-bodied animals would be at least twice that of animals with hard parts, quite possibly much larger. Particularly the roundworms are extremely numerous. The nematodologist Nathan Cobb described the ubiquitous presence of nematodes on Earth as follows: ""In short, if all the matter in the universe except the nematodes were swept away, our world would still be dimly recognizable, and if, as disembodied spirits, we could then investigate it, we should find its mountains, hills, vales, rivers, lakes, and oceans represented by a film of nematodes. The location of towns would be decipherable, since for every massing of human beings there would be a corresponding massing of certain nematodes. Trees would still stand in ghostly rows representing our streets and highways. The location of the various plants and animals would still be decipherable, and, had we sufficient knowledge, in many cases even their species could be determined by an examination of their erstwhile nematode parasites."" Anatomy Not being a true phylogenetic group, soft-bodied organisms vary enormously in anatomy. Cnidarians and flatworms have a single opening to the gut and a d" https://en.wikipedia.org/wiki/Stevens%20Award,"The Stevens Award is a software engineering lecture award given by the Reengineering Forum, an industry association. The international Stevens Award was created to recognize outstanding contributions to the literature or practice of methods for software and systems development. The first award was given in 1995. The presentations focus on the current state of software methods and their direction for the future. This award lecture is named in memory of Wayne Stevens (1944-1993), a consultant, author, pioneer, and advocate of the practical application of software methods and tools. The Stevens Award and lecture is managed by the Reengineering Forum. The award was founded by International Workshop on Computer Aided Software Engineering (IWCASE), an international workshop association of users and developers of computer-aided software engineering (CASE) technology, which merged into The Reengineering Forum. Wayne Stevens was a charter member of the IWCASE executive board. 
Recipients 1995: Tony Wasserman 1996: David Harel 1997: Michael Jackson 1998: Thomas McCabe 1999: Tom DeMarco 2000: Gerald Weinberg 2001: Peter Chen 2002: Cordell Green 2003: Manny Lehman 2004: François Bodart 2005: Mary Shaw, Jim Highsmith 2006: Grady Booch 2007: Nicholas Zvegintzov 2008: Harry Sneed 2009: Larry Constantine 2010: Peter Aiken 2011: Jared Spool, Barry Boehm 2012: Philip Newcomb 2013: Jean-Luc Hainaut 2014: François Coallier 2015: Pierre Bourque See also List of computer science awards" https://en.wikipedia.org/wiki/Bibliography%20of%20encyclopedias%3A%20biology,"This is a list of encyclopedias as well as encyclopedic and biographical dictionaries published on the subject of biology in any language. Entries are in the English language unless specifically stated as otherwise. General biology Becher, Anne, Joseph Richey. American environmental leaders: From colonial times to the present. Grey House, 2008. . Butcher, Russell D., Stephen E. Adair, Lynn A. Greenwalt. America's national wildlife refuges: A complete guide. Roberts Rinehart Publishers in cooperation with Ducks Unlimited, 2003. . Ecological Internet, Inc. EcoEarth.info: Environment portal and search engine. Ecological Internet, Inc. . Friday, Adrian & Davis S. Ingram. The Cambridge Encyclopedia of Life Sciences. Cambridge, 1985. Gaither, Carl C., Alma E. Cavazos-Gaither, Andrew Slocombe. Naturally speaking: A dictionary of quotations on biology, botany, nature and zoology. Institute of Physics, 2001. . Gibson, Daniel, National Audubon Society. Audubon guide to the national wildlife refuges. Southwest: Arizona, Nevada, New Mexico, Texas. St. Martin's Griffin, 2000. . Goudie, Andrew, David J. Cuff. Encyclopedia of global change: Environmental change and human society. Oxford University Press, 2002. . Gove, Doris. Audubon guide to the national wildlife refuges. Southeast : Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, Puerto Rico, South Carolina, Tennessee, U.S. Virgin Islands. St. Martin's Griffin, 2000. . Grassy, John. Audubon guide to the national wildlife refuges: Northern Midwest: Illinois, Indiana, Iowa, Michigan, Minnesota, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin. St. Martin's Griffin, c2000. . Grassy, John. Audubon guide to the national wildlife refuges: Rocky Mountains: Colorado, Idaho, Montana, Utah, Wyoming. St. Martin's Griffin, 2000. . Gray, Peter. Encyclopedia of the Biological Sciences. Krieger, 1981. Grinstein, Louise S., Carol A. Biermann, Rose K. Rose. Women in the biological sciences: A biobibliographic sourceboo" https://en.wikipedia.org/wiki/Supercooling,"Supercooling, also known as undercooling, is the process of lowering the temperature of a liquid below its freezing point without it becoming a solid. It is achieved in the absence of a seed crystal or nucleus around which a crystal structure can form. The supercooling of water can be achieved without any special techniques other than chemical demineralization, down to . Droplets of supercooled water often exist in stratus and cumulus clouds. An aircraft flying through such a cloud sees an abrupt crystallization of these droplets, which can result in the formation of ice on the aircraft's wings or blockage of its instruments and probes. Animals rely on different phenomena with similar effects to survive in extreme temperatures. 
There are many other mechanisms that aid in maintaining a liquid state, such as the production of antifreeze proteins, which bind to ice crystals to prevent water molecules from binding and spreading the growth of ice. The winter flounder is one such fish that utilizes these proteins to survive in its frigid environment. This is not strictly supercooling, because it is the result of freezing point lowering caused by the presence of the proteins. In plants, cellular barriers such as lignin, suberin, and the cuticle inhibit ice nucleators and force water into the supercooled tissue. Explanation A liquid crossing its standard freezing point will crystallize in the presence of a seed crystal or nucleus around which a crystal structure can form, creating a solid. Lacking any such nuclei, the liquid phase can be maintained all the way down to the temperature at which crystal homogeneous nucleation occurs. Homogeneous nucleation can occur above the glass transition temperature, but if homogeneous nucleation has not occurred above that temperature, an amorphous (non-crystalline) solid will form. Water normally freezes at 0 °C (32 °F), but it can be ""supercooled"" at standard pressure down to its crystal homogeneous nucleation at almost −48 °C (−55 °F). The process of supercooling " https://en.wikipedia.org/wiki/Calcium%20in%20biology,"Calcium ions (Ca2+) contribute to the physiology and biochemistry of organisms' cells. They play an important role in signal transduction pathways, where they act as a second messenger, in neurotransmitter release from neurons, in contraction of all muscle cell types, and in fertilization. Many enzymes require calcium ions as a cofactor, including several of the coagulation factors. Extracellular calcium is also important for maintaining the potential difference across excitable cell membranes, as well as proper bone formation. Plasma calcium levels in mammals are tightly regulated, with bone acting as the major mineral storage site. Calcium ions, Ca2+, are released from bone into the bloodstream under controlled conditions. Calcium is transported through the bloodstream as dissolved ions or bound to proteins such as serum albumin. Parathyroid hormone secreted by the parathyroid gland regulates the resorption of Ca2+ from bone, reabsorption in the kidney back into circulation, and increases in the activation of vitamin D3 to calcitriol. Calcitriol, the active form of vitamin D3, promotes absorption of calcium from the intestines and bones. Calcitonin secreted from the parafollicular cells of the thyroid gland also affects calcium levels by opposing parathyroid hormone; however, its physiological significance in humans is dubious. Intracellular calcium is stored in organelles which repetitively release and then reaccumulate Ca2+ ions in response to specific cellular events: storage sites include mitochondria and the endoplasmic reticulum. Characteristic concentrations of calcium in model organisms are: in E. coli 3 mM (bound), 100 nM (free), in budding yeast 2 mM (bound), in mammalian cells 10-100 nM (free) and in blood plasma 2 mM. Humans In 2020, calcium was the 204th most commonly prescribed medication in the United States, with more than 2 million prescriptions. Dietary recommendations The U.S. Institute of Medicine (IOM) established Recommended Dietary Allowanc" https://en.wikipedia.org/wiki/Cell%20cycle%20analysis,"Cell cycle analysis by DNA content measurement is a method that most frequently employs flow cytometry to distinguish cells in different phases of the cell cycle. 
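A toy numeric sketch, not taken from the article, of the gating idea elaborated below: because DNA content roughly doubles between G0/G1 and G2/M, cells can be binned by their DNA-stain fluorescence relative to the G0/G1 peak. Real analyses fit histogram models rather than hard thresholds; the cutoffs and sample values here are invented for illustration.

# Toy gating sketch: classify cells by DNA-stain intensity relative to the G0/G1 peak.
def classify_cells(intensities, g1_peak):
    phases = {"G0/G1": 0, "S": 0, "G2/M": 0}
    for x in intensities:
        if x < 1.25 * g1_peak:
            phases["G0/G1"] += 1
        elif x < 1.75 * g1_peak:
            phases["S"] += 1
        else:
            phases["G2/M"] += 1            # roughly twice the G0/G1 DNA content
    return phases

# Simulated fluorescence values, in arbitrary units, with a G0/G1 peak near 50:
sample = [48, 52, 50, 49, 70, 80, 95, 100, 102, 51]
print(classify_cells(sample, g1_peak=50))  # {'G0/G1': 5, 'S': 2, 'G2/M': 3}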
Before analysis, the cells are usually permeabilised and treated with a fluorescent dye that stains DNA quantitatively, such as propidium iodide (PI) or 4,6-diamidino-2-phenylindole (DAPI). The fluorescence intensity of the stained cells correlates with the amount of DNA they contain. As the DNA content doubles during the S phase, the DNA content (and thereby intensity of fluorescence) of cells in the G0 phase and G1 phase (before S), in the S phase, and in the G2 phase and M phase (after S) identifies the cell cycle phase position in the major phases (G0/G1 versus S versus G2/M phase) of the cell cycle. The cellular DNA content of individual cells is often plotted as their frequency histogram to provide information about relative frequency (percentage) of cells in the major phases of the cell cycle. Cell cycle anomalies revealed on the DNA content frequency histogram are often observed after different types of cell damage, for example such DNA damage that interrupts the cell cycle progression at certain checkpoints. Such an arrest of the cell cycle progression can lead either to an effective DNA repair, which may prevent transformation of normal into a cancer cell (carcinogenesis), or to cell death, often by the mode of apoptosis. An arrest of cells in G0 or G1 is often seen as a result of lack of nutrients (growth factors), for example after serum deprivation. Cell cycle analysis was first described in 1969 at Los Alamos Scientific Laboratory by a group from the University of California using the Feulgen staining technique. The first protocol for cell cycle analysis using propidium iodide staining was presented in 1975 by Awtar Krishan from Harvard Medical School and is still widely cited today. Multiparameter analysis of the cell cycle includes, in addition to measurement of cellular DNA content, oth" https://en.wikipedia.org/wiki/Morphology%20%28biology%29,"Morphology is a branch of biology dealing with the study of the form and structure of organisms and their specific structural features. This includes aspects of the outward appearance (shape, structure, colour, pattern, size), i.e. external morphology (or eidonomy), as well as the form and structure of the internal parts like bones and organs, i.e. internal morphology (or anatomy). This is in contrast to physiology, which deals primarily with function. Morphology is a branch of life science dealing with the study of gross structure of an organism or taxon and its component parts. History The etymology of the word ""morphology"" is from the Ancient Greek (), meaning ""form"", and (), meaning ""word, study, research"". While the concept of form in biology, opposed to function, dates back to Aristotle (see Aristotle's biology), the field of morphology was developed by Johann Wolfgang von Goethe (1790) and independently by the German anatomist and physiologist Karl Friedrich Burdach (1800). Among other important theorists of morphology are Lorenz Oken, Georges Cuvier, Étienne Geoffroy Saint-Hilaire, Richard Owen, Karl Gegenbaur and Ernst Haeckel. In 1830, Cuvier and E.G.Saint-Hilaire engaged in a famous debate, which is said to exemplify the two major deviations in biological thinking at the time – whether animal structure was due to function or evolution. Divisions of morphology Comparative morphology is analysis of the patterns of the locus of structures within the body plan of an organism, and forms the basis of taxonomical categorization. 
Functional morphology is the study of the relationship between the structure and function of morphological features. Experimental morphology is the study of the effects of external factors upon the morphology of organisms under experimental conditions, such as the effect of genetic mutation. Anatomy is a ""branch of morphology that deals with the structure of organisms"". Molecular morphology is a rarely used term, usually r" https://en.wikipedia.org/wiki/Na%C3%AFve%20physics,"Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings. Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism. Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate. Examples Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature: What goes up must come down A dropped object falls straight down A solid object cannot pass through another solid object A vacuum sucks things towards it An object is either at rest or moving, in an absolute sense Two events are either simultaneous or they are not Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it. Psychological research The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, becau" https://en.wikipedia.org/wiki/Scalability,"Scalability is the property of a system to handle a growing amount of work. One definition for software systems specifies that this may be done by adding resources to the system. In an economic context, a scalable business model implies that a company can increase sales given increased resources. For example, a package delivery system is scalable because more packages can be delivered by adding more delivery vehicles. However, if all packages had to first pass through a single warehouse for sorting, the system would not be as scalable, because one warehouse can handle only a limited number of packages. In computing, scalability is a characteristic of computers, networks, algorithms, networking protocols, programs and applications. An example is a search engine, which must support increasing numbers of users, and the number of topics it indexes. Webscale is a computer architectural approach that brings the capabilities of large-scale cloud computing companies into enterprise data centers. 
In distributed systems, there are several definitions according to the authors, some considering the concepts of scalability a sub-part of elasticity, others as being distinct. In mathematics, scalability mostly refers to closure under scalar multiplication. In industrial engineering and manufacturing, scalability refers to the capacity of a process, system, or organization to handle a growing workload, adapt to increasing demands, and maintain operational efficiency. A scalable system can effectively manage increased production volumes, new product lines, or expanding markets without compromising quality or performance. In this context, scalability is a vital consideration for businesses aiming to meet customer expectations, remain competitive, and achieve sustainable growth. Factors influencing scalability include the flexibility of the production process, the adaptability of the workforce, and the integration of advanced technologies. By implementing scalable solutions, c" https://en.wikipedia.org/wiki/Secure%20end%20node,"A Secure End Node is a trusted, individual computer that temporarily becomes part of a trusted, sensitive, well-managed network and later connects to many other (un)trusted networks/clouds. SEN's cannot communicate good or evil data between the various networks (e.g. exfiltrate sensitive information, ingest malware, etc.). SENs often connect through an untrusted medium (e.g. the Internet) and thus require a secure connection and strong authentication (of the device, software, user, environment, etc.). The amount of trust required (and thus operational, physical, personnel, network, and system security applied) is commensurate with the risk of piracy, tampering, and reverse engineering (within a given threat environment). An essential characteristic of SENs is they cannot persist information as they change between networks (or domains). The remote, private, and secure network might be organization's in-house network or a cloud service. A Secure End Node typically involves authentication of (i.e. establishing trust in) the remote computer's hardware, firmware, software, and/or user. In the future, the device-user's environment (location, activity, other people, etc.) as communicated by means of its (or the network's) trusted sensors (camera, microphone, GPS, radio, etc.) could provide another factor of authentication. A Secure End Node solves/mitigates end node problem. The common, but expensive, technique to deploy SENs is for the network owner to issue known, trusted, unchangeable hardware to users. For example, and assuming apriori access, a laptop's TPM chip can authenticate the hardware (likewise a user's smartcard authenticates the user). A different example is the DoD Software Protection Initiative's Cross Fabric Internet Browsing System that provides browser-only, immutable, anti-tamper thin clients to users Internet browsing. Another example is a non-persistent, remote client that boots over the network. A less secure but very low cost approach i" https://en.wikipedia.org/wiki/Index%20of%20physics%20articles,"Physics (Greek: physis–φύσις meaning ""nature"") is the natural science which examines basic concepts such as mass, charge, matter and its motion and all that derives from these, such as energy, force and spacetime. More broadly, it is the general analysis of nature, conducted in order to understand how the world and universe behave. The index of physics articles is split into multiple pages due to its size. 
To navigate by individual letter use the table of contents below. See also List of basic physics topics" https://en.wikipedia.org/wiki/Im%20schwarzen%20Walfisch%20zu%20Askalon,"""Im schwarzen Walfisch zu Askalon"" (""In the Black Whale of Ascalon"") is a popular academic commercium song. It was known as a beer-drinking song in many German speaking ancient universities. Joseph Victor von Scheffel provided the lyrics under the title Altassyrisch (Old Assyrian) 1854, the melody is from 1783 or earlier. Content The lyrics reflect an endorsement of the bacchanalian mayhem of student life, similar as in Gaudeamus igitur. The song describes an old Assyrian drinking binge of a man in an inn with some references to the Classics. The desks are made of marble and the large invoice is being provided in cuneiform on bricks. However the carouser has to admit that he left his money already in Nineveh. A Nubian house servant kicks him out then and the song closes with the notion, that (compare John 4:44) a prophet has no honor in his own country, if he doesn't pay cash for his consumption. Charles Godfrey Leland has translated the poems among other works of Scheffel. Each stanza begins with the naming verse ""Im Schwarzen Walfisch zu Askalon"", but varies the outcome. The ""Im"" is rather prolonged with the melody and increases the impact. Some of the stanzas: Im schwarzen Wallfisch zu Ascalon Da trank ein Mann drei Tag', Bis dass er steif wie ein Besenstiel Am Marmortische lag. 'In the Black Whale at Ascalon A man drank day by day, Till, stiff as any broom-handle, Upon the floor he lay. ... In the Black Whale at Ascalon The waiters brought the bill, In arrow-heads on six broad tiles To him who thus did swill. ... In the Black Whale at Ascalon No prophet hath renown; And he who there would drink in peace Must pay the money down. In typical manner of Scheffel, it contains an anachronistic mixture of various times and eras, parodistic notions on current science, as e.g. Historical criticism and interpretations of the Book of Jonah as a mere shipwrecking narrative. According to Scheffel, the guest didn't try to get back in the inn as „Aussi bin" https://en.wikipedia.org/wiki/Pacemaker%20failure,"Pacemaker failure is the inability of an implanted artificial pacemaker to perform its intended function of regulating the beating of the heart. A pacemaker uses electrical impulses delivered by electrodes in order to contract the heart muscles. Failure of a pacemaker is defined by the requirement of repeat surgical pacemaker-related procedures after the initial implantation. Most implanted pacemakers are dual chambered and have two leads, causing the implantation time to take longer because of this more complicated pacemaker system. These factors can contribute to an increased rate of complications which can lead to pacemaker failure. Approximately 2.25 million pacemakers were implanted in the United States between 1990 and 2002, and of those pacemakers, about 8,834 were removed from patients because of device malfunction most commonly connected to generator abnormalities. In the 1970s, results of an Oregon study indicated that 10% of implanted pacemakers failed within the first month. Another study found that more than half of pacemaker complications occurred during the first 3 months after implantation. 
Causes of pacemaker failure include lead-related failure, unit malfunction, problems at the insertion site, failures related to exposure to high-voltage electricity or high-intensity microwaves, and a miscellaneous category (one patient had ventricular tachycardia when using his electric razor and another patient had persistent pacing of the diaphragm muscle). Pacemaker malfunction can cause serious injury or death, but if detected early enough, patients can continue with their needed therapy once complications are resolved. Symptoms Moderate dizziness or lightheadedness Syncope Slow or fast heart rate Discomfort in chest area Palpitations Hiccups Causes Direct factors Lead dislodgement A Macro-dislodgement is radiographically visible. A Micro-dislodgement is a minimal displacement in the lead that is not visible in a chest X-ray, but h" https://en.wikipedia.org/wiki/Pharos%20network%20coordinates,"Pharos is a hierarchical and decentralized network coordinate system. With the help of a simple two-level architecture, it achieves much better prediction accuracy than the representative Vivaldi coordinates, and it is incrementally deployable. Overview Network coordinate (NC) systems are an efficient mechanism for Internet latency prediction with scalable measurements. Vivaldi is the most common distributed NC system, and it is deployed in many well-known internet systems, such as Bamboo DHT (Distributed hash table), Stream-Based Overlay Network (SBON) and Azureus BitTorrent. Pharos is a fully decentralized NC system. All nodes in Pharos form two levels of overlays, namely a base overlay for long link prediction, and a local cluster overlay for short link prediction. The Vivaldi algorithm is applied to both the base overlay and the local cluster. As a result, each Pharos node has two sets of coordinates. The coordinates calculated in the base overlay, which are named global NC, are used for the global scale, and the coordinates calculated in the corresponding local cluster, which are named local NC, cover a smaller range of distances. To form the local cluster, Pharos uses a method similar to binning and chooses some nodes called anchors to help node clustering. This method only requires a one-time measurement (with possible periodic refreshes) by the client to a small, fixed set of anchors. Any stable node that is able to respond to ICMP ping messages can serve as an anchor, such as the existing DNS servers. The experimental results show that Pharos greatly outperforms Vivaldi in internet distance prediction without adding any significant overhead. Insights behind Pharos Simple and effective: it obtains a significant improvement in prediction accuracy by introducing a straightforward hierarchical distance prediction. Fully compatible with Vivaldi, the most widely deployed NC system: for every host where the Vivaldi client has been deployed, it just needs to run " https://en.wikipedia.org/wiki/Nyquist%20stability%20criterion,"In control theory and stability theory, the Nyquist stability criterion or Strecker–Nyquist stability criterion, independently discovered by the German electrical engineer Felix Strecker at Siemens in 1930 and the Swedish-American electrical engineer Harry Nyquist at Bell Telephone Laboratories in 1932, is a graphical technique for determining the stability of a dynamical system. 
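In its commonly stated counting form (added here for orientation; the article's own formulas are not part of this excerpt): if the open-loop transfer function has P poles in the right half-plane and its Nyquist plot encircles the point −1 clockwise N times, then the closed-loop system has Z = N + P right-half-plane poles, and it is stable exactly when Z = 0.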
Because it only looks at the Nyquist plot of the open loop systems, it can be applied without explicitly computing the poles and zeros of either the closed-loop or open-loop system (although the number of each type of right-half-plane singularities must be known). As a result, it can be applied to systems defined by non-rational functions, such as systems with delays. In contrast to Bode plots, it can handle transfer functions with right half-plane singularities. In addition, there is a natural generalization to more complex systems with multiple inputs and multiple outputs, such as control systems for airplanes. The Nyquist stability criterion is widely used in electronics and control system engineering, as well as other fields, for designing and analyzing systems with feedback. While Nyquist is one of the most general stability tests, it is still restricted to linear time-invariant (LTI) systems. Nevertheless, there are generalizations of the Nyquist criterion (and plot) for non-linear systems, such as the circle criterion and the scaled relative graph of a nonlinear operator. Additionally, other stability criteria like Lyapunov methods can also be applied for non-linear systems. Although Nyquist is a graphical technique, it only provides a limited amount of intuition for why a system is stable or unstable, or how to modify an unstable system to be stable. Techniques like Bode plots, while less general, are sometimes a more useful design tool. Nyquist plot A Nyquist plot is a parametric plot of a frequency response used in automatic control and signal processing. The most common use of Nyquist p" https://en.wikipedia.org/wiki/Univariate%20%28statistics%29,"Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. Like all the other data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed. Univariate data types Some univariate data consists of numbers (such as the height of 65 inches or the weight of 100 pounds), while others are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types. Categorical univariate data Categorical univariate data consists of non-numerical observations that may be placed in categories. It includes labels or names used to identify an attribute of each element. Categorical univariate data usually use either nominal or ordinal scale of measurement. Numerical univariate data Numerical univariate data consists of observations that are numbers. They are obtained using either interval or ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous. A numerical univariate data is discrete if the set of all possible values is finite or countably infinite. Discrete univariate data are usually associated with counting (such as the number of books read by a person). A numerical univariate data is continuous if the set of all possible values is an interval of numbers. Continuous univariate data are usually associated with measuring (such as the weights of people). Data analysis and applications Univariate analysis is the simplest form of analyzing data. Uni means ""one"", so the data has only one variable (univariate). 
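A small Python sketch, not from the article, of what univariate analysis looks like in practice: each variable is summarized on its own, with a frequency table for a categorical variable and summary statistics for a numerical one; the sample values are invented.

# Univariate summaries: one variable at a time, no relationships between variables.
from collections import Counter
from statistics import mean, median, stdev

eye_colors = ["brown", "blue", "brown", "green", "blue", "brown"]   # categorical variable
heights_in = [65, 70, 68, 61, 66, 72]                               # numerical variable

print(Counter(eye_colors))                                          # frequency table
print(mean(heights_in), median(heights_in), round(stdev(heights_in), 2))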
Univariate data requires to analyze each variable separately. Data is gathered for the purpose of answering a question, or more s" https://en.wikipedia.org/wiki/Queuing%20delay,"In telecommunication and computer engineering, the queuing delay or queueing delay is the time a job waits in a queue until it can be executed. It is a key component of network delay. In a switched network, queuing delay is the time between the completion of signaling by the call originator and the arrival of a ringing signal at the call receiver. Queuing delay may be caused by delays at the originating switch, intermediate switches, or the call receiver servicing switch. In a data network, queuing delay is the sum of the delays between the request for service and the establishment of a circuit to the called data terminal equipment (DTE). In a packet-switched network, queuing delay is the sum of the delays encountered by a packet between the time of insertion into the network and the time of delivery to the address. Router processing This term is most often used in reference to routers. When packets arrive at a router, they have to be processed and transmitted. A router can only process one packet at a time. If packets arrive faster than the router can process them (such as in a burst transmission) the router puts them into the queue (also called the buffer) until it can get around to transmitting them. Delay can also vary from packet to packet so averages and statistics are usually generated when measuring and evaluating queuing delay. As a queue begins to fill up due to traffic arriving faster than it can be processed, the amount of delay a packet experiences going through the queue increases. The speed at which the contents of a queue can be processed is a function of the transmission rate of the facility. This leads to the classic delay curve. The average delay any given packet is likely to experience is given by the formula 1/(μ-λ) where μ is the number of packets per second the facility can sustain and λ is the average rate at which packets are arriving to be serviced. This formula can be used when no packets are dropped from the queue. The maximum que" https://en.wikipedia.org/wiki/Machine%20Check%20Architecture,"In computing, Machine Check Architecture (MCA) is an Intel and AMD mechanism in which the CPU reports hardware errors to the operating system. Intel's P6 and Pentium 4 family processors, AMD's K7 and K8 family processors, as well as the Itanium architecture implement a machine check architecture that provides a mechanism for detecting and reporting hardware (machine) errors, such as: system bus errors, ECC errors, parity errors, cache errors, and translation lookaside buffer errors. It consists of a set of model-specific registers (MSRs) that are used to set up machine checking and additional banks of MSRs used for recording errors that are detected. See also Machine-check exception (MCE) High availability (HA) Reliability, availability and serviceability (RAS) Windows Hardware Error Architecture (WHEA)" https://en.wikipedia.org/wiki/Full-employment%20theorem,"In computer science and mathematics, a full employment theorem is a term used, often humorously, to refer to a theorem which states that no algorithm can optimally perform a particular task done by some class of professionals. The name arises because such a theorem ensures that there is endless scope to keep discovering new techniques to improve the way at least some specific task is done. 
For example, the full employment theorem for compiler writers states that there is no such thing as a provably perfect size-optimizing compiler, as such a proof for the compiler would have to detect non-terminating computations and reduce them to a one-instruction infinite loop. Thus, the existence of a provably perfect size-optimizing compiler would imply a solution to the halting problem, which cannot exist. This also implies that there may always be a better compiler since the proof that one has the best compiler cannot exist. Therefore, compiler writers will always be able to speculate that they have something to improve. A similar example in practical computer science is the idea of no free lunch in search and optimization, which states that no efficient general-purpose solver can exist, and hence there will always be some particular problem whose best known solution might be improved. Similarly, Gödel's incompleteness theorems have been called full employment theorems for mathematicians. Tasks such as virus writing and detection, and spam filtering and filter-breaking are also subject to Rice's theorem." https://en.wikipedia.org/wiki/Putrefaction,"Putrefaction is the fifth stage of death, following pallor mortis, livor mortis, algor mortis, and rigor mortis. This process references the breaking down of a body of an animal post-mortem. In broad terms, it can be viewed as the decomposition of proteins, and the eventual breakdown of the cohesiveness between tissues, and the liquefaction of most organs. This is caused by the decomposition of organic matter by bacterial or fungal digestion, which causes the release of gases that infiltrate the body's tissues, and leads to the deterioration of the tissues and organs. The approximate time it takes putrefaction to occur is dependent on various factors. Internal factors that affect the rate of putrefaction include the age at which death has occurred, the overall structure and condition of the body, the cause of death, and external injuries arising before or after death. External factors include environmental temperature, moisture and air exposure, clothing, burial factors, and light exposure. Body farms are facilities that study the way various factors affect the putrefaction process. The first signs of putrefaction are signified by a greenish discoloration on the outside of the skin on the abdominal wall corresponding to where the large intestine begins, as well as under the surface of the liver. Certain substances, such as carbolic acid, arsenic, strychnine, and zinc chloride, can be used to delay the process of putrefaction in various ways based on their chemical make up. Description In thermodynamic terms, all organic tissues are composed of chemical energy, which, when not maintained by the constant biochemical maintenance of the living organism, begin to chemically break down due to the reaction with water into amino acids, known as hydrolysis. The breakdown of the proteins of a decomposing body is a spontaneous process. Protein hydrolysis is accelerated as the anaerobic bacteria of the digestive tract consume, digest, and excrete the cellular proteins of th" https://en.wikipedia.org/wiki/Tagged%20architecture,"In computer science, a tagged architecture is a type of computer architecture where every word of memory constitutes a tagged union, being divided into a number of bits of data, and a tag section that describes the type of the data: how it is to be interpreted, and, if it is a reference, the type of the object that it points to. 
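As a rough illustration of the idea of a tagged word (a made-up 16-bit format, not any particular machine's encoding), a single tag bit can tell the hardware whether the remaining bits are an integer or an address:

# Illustrative tagged-word encoding: a 16-bit word whose least-significant bit is the tag.
# Tag set -> the upper 15 bits hold a small non-negative integer; tag clear -> an aligned address.
# This is a sketch of the general idea only, not a real machine's format.
def make_int(value):
    return ((value & 0x7FFF) << 1) | 1        # store the integer shifted, tag bit = 1

def make_ref(address):
    assert address % 2 == 0                   # aligned addresses have a zero low bit
    return address & 0xFFFF                   # tag bit = 0

def decode(word):
    if word & 1:
        return ('int', word >> 1)             # interpret the data bits as an integer
    return ('ref', word)                      # interpret the whole word as an address

print(decode(make_int(42)))                   # ('int', 42)
print(decode(make_ref(0x1F00)))               # ('ref', 7936)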
Architecture In contrast, program and data memory are indistinguishable in the von Neumann architecture, making the way the memory is referenced critical to interpret the correct meaning. Notable examples of American tagged architectures were the Lisp machines, which had tagged pointer support at the hardware and opcode level, the Burroughs large systems, which have a data-driven tagged and descriptor-based architecture, and the non-commercial Rice Computer. Both the Burroughs and Lisp machine are examples of high-level language computer architectures, where the tagging is used to support types from a high-level language at the hardware level. In addition to this, the original Xerox Smalltalk implementation used the least-significant bit of each 16-bit word as a tag bit: if it was clear then the hardware would accept it as an aligned memory address while if it was set it was treated as a (shifted) 15-bit integer. Current Intel documentation mentions that the lower bits of a memory address might be similarly used by some interpreter-based systems. In the Soviet Union, the Elbrus series of supercomputers pioneered the use of tagged architectures in 1973. See also Executable-space protection Harvard architecture" https://en.wikipedia.org/wiki/Quantum%20non-equilibrium,"Quantum non-equilibrium is a concept within stochastic formulations of the De Broglie–Bohm theory of quantum physics. Overview In quantum mechanics, the Born rule states that the probability density of finding a system in a given state, when measured, is proportional to the square of the amplitude of the system's wavefunction at that state, and it constitutes one of the fundamental axioms of the theory. This is not the case for the De Broglie–Bohm theory, where the Born rule is not a basic law. Rather, in this theory the link between the probability density and the wave function has the status of a hypothesis, called the quantum equilibrium hypothesis, which is additional to the basic principles governing the wave function, the dynamics of the quantum particles and the Schrödinger equation. (For mathematical details, refer to the derivation by Peter R. Holland.) Accordingly, quantum non-equilibrium describes a state of affairs where the Born rule is not fulfilled; that is, the probability to find the particle in the differential volume at time t is unequal to Recent advances in investigations into properties of quantum non-equilibrium states have been performed mainly by theoretical physicist Antony Valentini, and earlier steps in this direction were undertaken by David Bohm, Jean-Pierre Vigier, Basil Hiley and Peter R. Holland. The existence of quantum non-equilibrium states has not been verified experimentally; quantum non-equilibrium is so far a theoretical construct. The relevance of quantum non-equilibrium states to physics lies in the fact that they can lead to different predictions for results of experiments, depending on whether the De Broglie–Bohm theory in its stochastic form or the Copenhagen interpretation is assumed to describe reality. (The Copenhagen interpretation, which stipulates the Born rule a priori, does not foresee the existence of quantum non-equilibrium states at all.) That is, properties of quantum non-equilibrium can make certain cla" https://en.wikipedia.org/wiki/Restricted%20isometry%20property,"In linear algebra, the restricted isometry property (RIP) characterizes matrices which are nearly orthonormal, at least when operating on sparse vectors. 
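Informally, "nearly orthonormal on sparse vectors" means that for every s-sparse vector y the energy of Ay stays within a factor (1 ± δ) of the energy of y. The brute-force check below on a small random Gaussian matrix is purely illustrative (the matrix, sizes and scaling are arbitrary choices), not an efficient way to compute restricted isometry constants:

import itertools
import numpy as np

# Brute-force empirical restricted isometry constant for s = 2 of a random
# Gaussian matrix with entries of variance 1/m. Sizes are arbitrary examples.
rng = np.random.default_rng(0)
m, p, s = 40, 80, 2
A = rng.normal(scale=1.0 / np.sqrt(m), size=(m, p))

worst = 0.0
for cols in itertools.combinations(range(p), s):     # every m x s submatrix A_s
    eigs = np.linalg.eigvalsh(A[:, cols].T @ A[:, cols])
    worst = max(worst, abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0))

# largest deviation of the submatrix Gram eigenvalues from 1 = empirical delta_2
print('empirical delta_2:', round(worst, 3))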
The concept was introduced by Emmanuel Candès and Terence Tao and is used to prove many theorems in the field of compressed sensing. There are no known large matrices with bounded restricted isometry constants (computing these constants is strongly NP-hard, and is hard to approximate as well), but many random matrices have been shown to remain bounded. In particular, it has been shown that with exponentially high probability, random Gaussian, Bernoulli, and partial Fourier matrices satisfy the RIP with number of measurements nearly linear in the sparsity level. The current smallest upper bounds for any large rectangular matrices are for those of Gaussian matrices. Web forms to evaluate bounds for the Gaussian ensemble are available at the Edinburgh Compressed Sensing RIC page. Definition Let A be an m × p matrix and let 1 ≤ s ≤ p be an integer. Suppose that there exists a constant such that, for every m × s submatrix As of A and for every s-dimensional vector y, Then, the matrix A is said to satisfy the s-restricted isometry property with restricted isometry constant . This condition is equivalent to the statement that for every m × s submatrix As of A we have where is the identity matrix and is the operator norm. See for example for a proof. Finally this is equivalent to stating that all eigenvalues of are in the interval . Restricted Isometric Constant (RIC) The RIC Constant is defined as the infimum of all possible for a given . It is denoted as . Eigenvalues For any matrix that satisfies the RIP property with a RIC of , the following condition holds: . The tightest upper bound on the RIC can be computed for Gaussian matrices. This can be achieved by computing the exact probability that all the eigenvalues of Wishart matrices lie within an interval. See also Compressed sensing Mutual coh" https://en.wikipedia.org/wiki/Video%20super-resolution,"Video super-resolution (VSR) is the process of generating high-resolution video frames from the given low-resolution video frames. Unlike single-image super-resolution (SISR), the main goal is not only to restore more fine details while saving coarse ones, but also to preserve motion consistency. There are many approaches for this task, but this problem still remains to be popular and challenging. Mathematical explanation Most research considers the degradation process of frames as where: — original high-resolution frame sequence, — blur kernel, — convolution operation, — downscaling operation, — additive noise, — low-resolution frame sequence. Super-resolution is an inverse operation, so its problem is to estimate frame sequence from frame sequence so that is close to original . Blur kernel, downscaling operation and additive noise should be estimated for given input to achieve better results. Video super-resolution approaches tend to have more components than the image counterparts as they need to exploit the additional temporal dimension. Complex designs are not uncommon. Some most essential components for VSR are guided by four basic functionalities: Propagation, Alignment, Aggregation, and Upsampling. Propagation refers to the way in which features are propagated temporally Alignment concerns on the spatial transformation applied to misaligned images/features Aggregation defines the steps to combine aligned features Upsampling describes the method to transform the aggregated features to the final output image Methods When working with video, temporal information could be used to improve upscaling quality. 
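A toy numerical version of that degradation model (blur, downscale, add noise) may make the roles of the terms clearer; the box blur kernel, scale factor of 2 and noise level below are arbitrary illustrative choices:

import numpy as np
from scipy.ndimage import convolve

# Toy degradation model: low-res frame = downsample(blur(high-res frame)) + noise.
def degrade(frame_hr, scale=2, noise_sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    kernel = np.full((3, 3), 1.0 / 9.0)                    # stand-in blur kernel
    blurred = convolve(frame_hr, kernel, mode='nearest')   # blur kernel convolved with the frame
    low_res = blurred[::scale, ::scale]                    # downscaling operation
    return low_res + rng.normal(0.0, noise_sigma, low_res.shape)  # additive noise

hr_frames = [np.random.rand(64, 64) for _ in range(5)]     # stand-in high-resolution frames
lr_frames = [degrade(f) for f in hr_frames]
print(hr_frames[0].shape, '->', lr_frames[0].shape)        # (64, 64) -> (32, 32)

Video super-resolution then tries to invert this mapping while keeping the recovered frames temporally consistent.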
Single image super-resolution methods could be used too, generating high-resolution frames independently from their neighbours, but it's less effective and introduces temporal instability. There are a few traditional methods, which consider the video super-resolution task as an optimization problem. Last years deep learning based methods" https://en.wikipedia.org/wiki/Haldane%27s%20rule,"Haldane's rule is an observation about the early stage of speciation, formulated in 1922 by the British evolutionary biologist J. B. S. Haldane, that states that if — in a species hybrid — only one sex is inviable or sterile, that sex is more likely to be the heterogametic sex. The heterogametic sex is the one with two different sex chromosomes; in therian mammals, for example, this is the male. Overview Haldane himself described the rule as: Haldane's rule applies to the vast majority of heterogametic organisms. This includes the case where two species make secondary contact in an area of sympatry and form hybrids after allopatric speciation has occurred. The rule includes both male heterogametic (XY or XO-type sex determination, such as found in mammals and Drosophila fruit flies) and female heterogametic (ZW or Z0-type sex determination, as found in birds and butterflies), and some dioecious plants such as campions. Hybrid dysfunction (sterility and inviability) is a major form of post-zygotic reproductive isolation, which occurs in early stages of speciation. Evolution can produce a similar pattern of isolation in a vast array of different organisms. However, the actual mechanisms leading to Haldane's rule in different taxa remain largely undefined. Hypotheses Many different hypotheses have been advanced to address the evolutionary mechanisms to produce Haldane's rule. Currently, the most popular explanation for Haldane's rule is the composite hypothesis, which divides Haldane's rule into multiple subdivisions, including sterility, inviability, male heterogamety, and female heterogamety. The composite hypothesis states that Haldane's rule in different subdivisions has different causes. Individual genetic mechanisms may not be mutually exclusive, and these mechanisms may act together to cause Haldane's rule in any given subdivision. In contrast to these views that emphasize genetic mechanisms, another view hypothesizes that population dynamics during populat" https://en.wikipedia.org/wiki/Patch%20dynamics,"Patch dynamics is an ecological perspective that the structure, function, and dynamics of ecological systems can be understood through studying their interactive patches. Patch dynamics, as a term, may also refer to the spatiotemporal changes within and among patches that make up a landscape. Patch dynamics is ubiquitous in terrestrial and aquatic systems across organizational levels and spatial scales. From a patch dynamics perspective, populations, communities, ecosystems, and landscapes may all be studied effectively as mosaics of patches that differ in size, shape, composition, history, and boundary characteristics. The idea of patch dynamics dates back to the 1940s when plant ecologists studied the structure and dynamics of vegetation in terms of the interactive patches that it comprises. A mathematical theory of patch dynamics was developed by Simon Levin and Robert Paine in the 1970s, originally to describe the pattern and dynamics of an intertidal community as a patch mosaic created and maintained by tidal disturbances. 
Patch dynamics became a dominant theme in ecology between the late 1970s and the 1990s. Patch dynamics is a conceptual approach to ecosystem and habitat analysis that emphasizes dynamics of heterogeneity within a system (i.e. that each area of an ecosystem is made up of a mosaic of small 'sub-ecosystems'). Diverse patches of habitat created by natural disturbance regimes are seen as critical to the maintenance of this ecological diversity. A habitat patch is any discrete area with a definite shape and spatial configuration used by a species for breeding or obtaining other resources. Mosaics are the patterns within landscapes that are composed of smaller elements, such as individual forest stands, shrubland patches, highways, farms, or towns. Patches and mosaics Historically, due to the short time scale of human observation, mosaic landscapes were perceived to be static patterns of human population mosaics. This focus centered o" https://en.wikipedia.org/wiki/Up%20tack,"The up tack or falsum (⊥, \bot in LaTeX, U+22A5 in Unicode) is a constant symbol used to represent: The truth value 'false', or a logical constant denoting a proposition in logic that is always false (often called ""falsum"" or ""absurdum""). The bottom element in wheel theory and lattice theory, which also represents absurdum when used for logical semantics The bottom type in type theory, which is the bottom element in the subtype relation. This may coincide with the empty type, which represents absurdum under the Curry–Howard correspondence The ""undefined value"" in quantum physics interpretations that reject counterfactual definiteness, as in (r0,⊥) as well as Mixed radix decoding in the APL programming language The glyph of the up tack appears as an upside-down tee symbol, and as such is sometimes called eet (the word ""tee"" in reverse). Tee plays a complementary or dual role in many of these theories. The similar-looking perpendicular symbol (⟂, \perp in LaTeX, U+27C2 in Unicode) is a binary relation symbol used to represent: Perpendicularity of lines in geometry Orthogonality in linear algebra Independence of random variables in probability theory Coprimality in number theory The double tack up symbol (⫫, U+2AEB in Unicode) is a binary relation symbol used to represent: Conditional independence of random variables in probability theory See also Alternative plus sign Contradiction List of mathematical symbols Tee (symbol) (⊤) Notes Mathematical notation Mathematical symbols Logic symbols" https://en.wikipedia.org/wiki/Ingredient-flavor%20network,"In network science, ingredient-flavor networks are networks describing the sharing of flavor compounds of culinary ingredients. In the bipartite form, an ingredient-flavor network consists of two different types of nodes: the ingredients used in the recipes and the flavor compounds that contribute to the flavor of each ingredient. The links connecting the two types of nodes are undirected and represent the occurrence of a certain compound in a given ingredient. The ingredient-flavor network can also be projected into the ingredient or compound space, where nodes are ingredients or compounds and links represent the sharing of the same compounds by different ingredients or the coexistence of different compounds in the same ingredient. History In 2011, Yong-Yeol Ahn, Sebastian E. Ahnert, James P. Bagrow and Albert-László Barabási investigated the ingredient-flavor networks of North American, Latin American, Western European, Southern European and East Asian cuisines. 
Based on culinary repository epicurious.com, allrecipes.com and menupan.com, 56,498 recipes were included in the survey. The efforts to apply network analysis on foods also occurred in the work of Kinouchi and Chun-Yuen Teng, with the former examined the relationship between ingredients and recipes, and the latter derived the ingredient-ingredient networks of both compliments and substitutions. Yet Ahn's ingredient-flavor network was constructed based on the molecular level understanding of culinary networks and received wide attention Properties According to Ahn, in the total number of 56,498 recipes studied, 381 ingredients and 1021 flavor compounds were identified. On average, each ingredient connected to 51 flavor compounds. It was found that in comparison with random pairing of ingredients and flavor compounds, North American cuisines tend to share more compounds while East Asian cuisines tend to share fewer compounds. It was also shown that this tendency was mostly generated by the frequently used ingredients in e" https://en.wikipedia.org/wiki/Astrobiology,"Astrobiology is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missi" https://en.wikipedia.org/wiki/Rasta%20filtering,"RASTA-filtering and Mean Subtraction was introduced to support Perceptual Linear Prediction (PLP) preprocessing. It uses bandpass filtering in the log spectral domain. Rasta filtering then removes slow channel variations. It has also been applied to cepstrum feature-based preprocessing with both log spectral and cepstral domain filtering. 
In general a RASTA filter is defined by The numerator is a regression filter with N being the order (must be odd) and the denominator is an integrator with time decay. The pole controls the lower limit of frequency and is normally around 0.9. RASTA-filtering can be changed to use mean subtraction, implementing a moving average filter. Filtering is normally performed in the cepstral domain. The mean becomes the long term cepstrum and is typically computed on the speech part for each separate utterance. A silence is necessary to detect each utterance." https://en.wikipedia.org/wiki/K%C3%BCpfm%C3%BCller%27s%20uncertainty%20principle,"Küpfmüller's uncertainty principle by Karl Küpfmüller in the year 1924 states that the relation of the rise time of a bandlimited signal to its bandwidth is a constant. with either or Proof A bandlimited signal with fourier transform in frequency space is given by the multiplication of any signal with with a rectangular function of width as (applying the convolution theorem) Since the fourier transform of a rectangular function is a sinc function and vice versa, follows Now the first root of is at , which is the rise time of the pulse , now follows Equality is given as long as is finite. Regarding that a real signal has both positive and negative frequencies of the same frequency band, becomes , which leads to instead of See also Heisenberg's uncertainty principle" https://en.wikipedia.org/wiki/Lightweight%20Presentation%20Protocol,"Lightweight Presentation Protocol (LPP) is a protocol used to provide ISO presentation services on top of TCP/IP based protocol stacks. It is defined in RFC 1085. The Lightweight Presentation Protocol describes an approach for providing ""streamlined"" support of OSI model-conforming application services on top of TCP/IP-based network for some constrained environments. It was initially derived from a requirement to run the ISO Common Management Information Protocol (CMIP) in TCP/IP-based networks." https://en.wikipedia.org/wiki/Dot%20planimeter,"A dot planimeter is a device used in planimetrics for estimating the area of a shape, consisting of a transparent sheet containing a square grid of dots. To estimate the area of a shape, the sheet is overlaid on the shape and the dots within the shape are counted. The estimate of area is the number of dots counted multiplied by the area of a single grid square. In some variations, dots that land on or near the boundary of the shape are counted as half of a unit. The dots may also be grouped into larger square groups by lines drawn onto the transparency, allowing groups that are entirely within the shape to be added to the count rather than requiring their dots to be counted one by one. The estimation of area by means of a dot grid has also been called the dot grid method or (particularly when the alignment of the grid with the shape is random) systematic sampling. Perhaps because of its simplicity, it has been repeatedly reinvented. Application In forestry, cartography, and geography, the dot planimeter has been applied to maps to estimate the area of parcels of land. In botany and horticulture, it has been applied directly to sampled leaves to estimate the average leaf area. In medicine, it has been applied to Lashley diagrams as an estimate of the size of brain lesions. In mineralogy, a similar technique of counting dots in a grid is applied to cross-sections of rock samples for a different purpose, estimating the relative proportions of different constituent minerals. 
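A short sketch of the counting rule (dots inside the shape times the area of one grid square), using a disk of known area as a sanity check; the grid spacing and extent are arbitrary choices:

import math

# Estimate an area by counting grid dots inside the shape and multiplying by
# the area of one grid square. Spacing h and grid extent are arbitrary.
def dot_planimeter_area(inside, h=0.1, extent=4.0):
    n = int(round(2 * extent / h))
    dots = sum(
        1
        for i in range(n + 1)
        for j in range(n + 1)
        if inside(-extent + i * h, -extent + j * h)
    )
    return dots * h * h

estimate = dot_planimeter_area(lambda x, y: x * x + y * y < 9.0)   # disk of radius 3
print(round(estimate, 2), 'vs true area', round(math.pi * 9.0, 2))

A finer grid (smaller h) gives a better estimate, which is the point taken up in the theory discussion that follows.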
Theory Greater accuracy can be achieved by using a dot planimeter with a finer grid of dots. Alternatively, repeatedly placing a dot planimeter with different irrational offsets from its previous placement, and averaging the resulting measurements, can lead to a set of sampled measurements whose average tends towards the true area of the measured shape. The method using a finer grid tends to have better statistical efficiency than repeated measurement with random placements. According to Pick'" https://en.wikipedia.org/wiki/Phase%20space,"In dynamical systems theory and control theory, a phase space or state space is a space in which all possible ""states"" of a dynamical system or a control system are represented, with each possible state corresponding to one unique point in the phase space. For mechanical systems, the phase space usually consists of all possible values of position and momentum variables. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs. Principles In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition, located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and al" https://en.wikipedia.org/wiki/Square%20root%20of%207,"The square root of 7 is the positive real number that, when multiplied by itself, gives the prime number 7. It is more precisely called the principal square root of 7, to distinguish it from the negative number with the same property. This number appears in various geometric and number-theoretic contexts. It can be denoted in surd form as: and in exponent form as: It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: . which can be rounded up to 2.646 to within about 99.99% accuracy (about 1 part in 10000); that is, it differs from the correct value by about . The approximation (≈ 2.645833...) is better: despite having a denominator of only 48, it differs from the correct value by less than , or less than one part in 33,000. More than a million decimal digits of the square root of seven have been published. 
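The rounding claims above are easy to check with arbitrary-precision arithmetic; the sketch below uses Python's decimal module (the 60-digit working precision is an arbitrary choice, and 127/48 is the convergent with denominator 48 listed in the next paragraph):

from decimal import Decimal, getcontext

# Check the quoted accuracy of the simple approximations to sqrt(7).
getcontext().prec = 60
root7 = Decimal(7).sqrt()

print(root7)                                    # 2.6457513110645905905016...
print(abs(root7 - Decimal('2.646')))            # error of rounding to 2.646 (about 2.5e-4)
print(abs(root7 - Decimal(127) / Decimal(48)))  # error of 127/48 = 2.645833... (about 8e-5)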
Rational approximations The extraction of decimal-fraction approximations to square roots by various methods has used the square root of 7 as an example or exercise in textbooks, for hundreds of years. Different numbers of digits after the decimal point are shown: 5 in 1773 and 1852, 3 in 1835, 6 in 1808, and 7 in 1797. An extraction by Newton's method (approximately) was illustrated in 1922, concluding that it is 2.646 ""to the nearest thousandth"". For a family of good rational approximations, the square root of 7 can be expressed as the continued fraction The successive partial evaluations of the continued fraction, which are called its convergents, approach : Their numerators are 2, 3, 5, 8, 37, 45, 82, 127, 590, 717, 1307, 2024, 9403, 11427, 20830, 32257… , and their denominators are 1, 1, 2, 3, 14, 17, 31, 48, 223, 271, 494, 765, 3554, 4319, 7873, 12192,…. Each convergent is a best rational approximation of ; in other words, it is closer to than any rational with a smaller denominator. Approximate decimal equivalents improve linearly (number of digits proportional to convergent number) at" https://en.wikipedia.org/wiki/MONA%20number,"A MONA number (short for Moths of North America), or Hodges number after Ronald W. Hodges, is part of a numbering system for North American moths found north of Mexico in the Continental United States and Canada, as well as the island of Greenland. Introduced in 1983 by Hodges through the publication of Check List of the Lepidoptera of America North of Mexico, the system began an ongoing numeration process in order to compile a list of the over 12,000 moths of North America north of Mexico. The system numbers moths within the same family close together for identification purposes. For example, the species Epimartyria auricrinella begins the numbering system at 0001 while Epimartyria pardella is numbered 0002. The system has become somewhat out of date since its inception for several reasons: Some numbers no longer exist as the species bearing the number have been reclassified into other species. Some species have been regrouped into a different family and their MONA numbers are out of order taxonomically. New species have been discovered since the implementation of the MONA system, resulting in the usage of decimal numbers as to not disrupt the numbering of other species. Despite the issues above, the MONA system has remained popular with many websites and publications. It is the most popular numbering system used, largely replacing the older McDunnough Numbers system, while some published lists prefer to use other forms of compilation. The Moth Photographer's Group (MPG) at Mississippi State University actively monitors the expansive list of North American moths utilizing the MONA system and updates their checklists in accordance with publishings regarding changes and additions." https://en.wikipedia.org/wiki/List%20of%20oldest%20fathers,"This is a list of persons reported to have become father of a child at or after 75 years of age. These claims have not necessarily been verified. Medical considerations According to a 1969 study, there is a decrease in sperm concentration as men age. The study reported that 90% of seminiferous tubules in men in their 20s and 30s contained spermatids, whereas men in their 40s and 50s had spermatids in 50% of their seminiferous tubules. In the study, only 10% of seminiferous tubules from men aged > 80 years contained spermatids. 
In a random international sample of 11,548 men confirmed to be biological fathers by DNA paternity testing, the oldest father was found to be 66 years old at the birth of his child; the ratio of DNA-confirmed versus DNA-rejected paternity tests around that age is in agreement with the notion of general male infertility greater than age 65-66. List of claims See also List of oldest birth mothers List of people with the most children List of multiple births Pregnancy Abraham and his son Isaac Genealogies of Genesis including multiple accounts of super-aged fathers" https://en.wikipedia.org/wiki/Hardware%20architect,"(In the automation and engineering environments, the hardware engineer or architect encompasses the electronics engineering and electrical engineering fields, with subspecialities in analog, digital, or electromechanical systems.) The hardware systems architect or hardware architect is responsible for: Interfacing with a systems architect or client stakeholders. It is extraordinarily rare nowadays for sufficiently large and/or complex hardware systems that require a hardware architect not to require substantial software and a systems architect. The hardware architect will therefore normally interface with a systems architect, rather than directly with user(s), sponsor(s), or other client stakeholders. However, in the absence of a systems architect, the hardware systems architect must be prepared to interface directly with the client stakeholders in order to determine their (evolving) needs to be realized in hardware. The hardware architect may also need to interface directly with a software architect or engineer(s), or with other mechanical or electrical engineers. Generating the highest level of hardware requirements, based on the user's needs and other constraints such as cost and schedule. Ensuring that this set of high level requirements is consistent, complete, correct, and operationally defined. Performing cost–benefit analyses to determine the best methods or approaches for meeting the hardware requirements; making maximum use of commercial off-the-shelf or already developed components. Developing partitioning algorithms (and other processes) to allocate all present and foreseeable (hardware) requirements into discrete hardware partitions such that a minimum of communications is needed among partitions, and between the user and the system. Partitioning large hardware systems into (successive layers of) subsystems and components each of which can be handled by a single hardware engineer or team of engineers. Ensuring that maximally robust hardware architec" https://en.wikipedia.org/wiki/Single%20instruction%2C%20multiple%20threads,"Single instruction, multiple threads (SIMT) is an execution model used in parallel computing where single instruction, multiple data (SIMD) is combined with multithreading. It is different from SPMD in that all instructions in all ""threads"" are executed in lock-step. The SIMT execution model has been implemented on several GPUs and is relevant for general-purpose computing on graphics processing units (GPGPU), e.g. some supercomputers combine CPUs with GPUs. The processors, say a number of them, seem to execute many more than tasks. This is achieved by each processor having multiple ""threads"" (or ""work-items"" or ""Sequence of SIMD Lane operations""), which execute in lock-step, and are analogous to SIMD lanes. 
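A toy Python model of the lock-step idea: one shared instruction stream is applied to every "thread", each of which differs only in its data and in an active/inactive mask bit. This is a conceptual sketch only, not how any real GPU is programmed:

# Conceptual sketch of SIMT-style lock-step execution: a single instruction
# sequence is applied to every "thread"; threads differ only in their data and
# in whether their mask bit leaves them active. Purely illustrative.
def run_lockstep(program, thread_data):
    masks = [True] * len(thread_data)
    for instruction in program:                      # one shared instruction stream
        for lane, active in enumerate(masks):
            if active:                               # inactive lanes idle this step
                thread_data[lane], masks[lane] = instruction(thread_data[lane])
    return thread_data

# Each "instruction" returns (new_value, still_active); here: double every value,
# then deactivate lanes whose value exceeds 10, then add 1 to the remaining lanes.
program = [
    lambda x: (x * 2, True),
    lambda x: (x, x <= 10),
    lambda x: (x + 1, True),
]
print(run_lockstep(program, [1, 3, 7, 9]))   # [3, 7, 14, 18]: lanes over 10 skip the +1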
The simplest way to understand SIMT is to imagine a multi-core system, where each core has its own register file, its own ALUs (both SIMD and Scalar) and its own data cache, but that unlike a standard multi-core system which has multiple independent instruction caches and decoders, as well as multiple independent Program Counter registers, the instructions are synchronously broadcast to all SIMT cores from a single unit with a single instruction cache and a single instruction decoder which reads instructions using a single Program Counter. The key difference between SIMT and SIMD lanes is that each of the SIMT cores may have a completely different Stack Pointer (and thus perform computations on completely different data sets), whereas SIMD lanes are simply part of an ALU that knows nothing about memory per se. History SIMT was introduced by Nvidia in the Tesla GPU microarchitecture with the G80 chip. ATI Technologies, now AMD, released a competing product slightly later on May 14, 2007, the TeraScale 1-based ""R600"" GPU chip. Description As access time of all the widespread RAM types (e.g. DDR SDRAM, GDDR SDRAM, XDR DRAM, etc.) is still relatively high, engineers came up with the idea to hide the latency that inevitably comes with each memory access. St" https://en.wikipedia.org/wiki/Structural%20synthesis%20of%20programs,"Structural synthesis of programs (SSP) is a special form of (automatic) program synthesis that is based on propositional calculus. More precisely, it uses intuitionistic logic for describing the structure of a program in such a detail that the program can be automatically composed from pieces like subroutines or even computer commands. It is assumed that these pieces have been implemented correctly, hence no correctness verification of these pieces is needed. SSP is well suited for automatic composition of services for service-oriented architectures and for synthesis of large simulation programs. History Automatic program synthesis began in the artificial intelligence field, with software intended for automatic problem solving. The first program synthesizer was developed by Cordell Green in 1969. At about the same time, mathematicians including R. Constable, Z. Manna, and R. Waldinger explained the possible use of formal logic for automatic program synthesis. Practically applicable program synthesizers appeared considerably later. The idea of structural synthesis of programs was introduced at a conference on algorithms in modern mathematics and computer science organized by Andrey Ershov and Donald Knuth in 1979. The idea originated from G. Pólya’s well-known book on problem solving. The method for devising a plan for solving a problem in SSP was presented as a formal system. The inference rules of the system were restructured and justified in logic by G. Mints and E. Tyugu in 1982. A programming tool PRIZ that uses SSP was developed in the 1980s. A recent Integrated development environment that supports SSP is CoCoViLa — a model-based software development platform for implementing domain specific languages and developing large Java programs. The logic of SSP Structural synthesis of programs is a method for composing programs from already implemented components (e.g. from computer commands or software object methods) that can be considered as functions." https://en.wikipedia.org/wiki/Proof%20%28play%29,"Proof is a 2000 play by the American playwright David Auburn. 
Proof was developed at George Street Playhouse in New Brunswick, New Jersey, during the 1999 Next Stage Series of new plays. The play premiered Off-Broadway in May 2000 and transferred to Broadway in October 2000. The play won the 2001 Pulitzer Prize for Drama and the Tony Award for Best Play. Plot The play focuses on Catherine, the daughter of Robert, a recently deceased mathematical genius in his fifties and professor at the University of Chicago, and her struggle with mathematical genius and mental illness. Catherine had cared for her father through a lengthy mental illness. Upon Robert's death, his ex-graduate student Hal discovers a paradigm-shifting proof about prime numbers in Robert's office. The title refers both to that proof and to the play's central question: Can Catherine prove the proof's authorship? Along with demonstrating the proof's authenticity, Catherine also finds herself in a relationship with Hal. Throughout, the play explores Catherine's fear of following in her father's footsteps, both mathematically and mentally and her desperate attempts to stay in control. Act I The play opens with Catherine sitting in the backyard of her large, old house. Robert, her father, reveals a bottle of champagne to help celebrate her 25th birthday. Catherine complains that she hasn't done any worthwhile work in the field of mathematics, at least not to the same level as her father, a well-known math genius. He reassures her that she can still do good work as long as she stops sleeping until noon and wasting time reading magazines. Catherine confesses she is worried about inheriting Robert's inclination towards mental instability. He begins to comfort her but then alludes to a ""bad sign"" when he points out that he did, in fact, die a week ago. Robert disappears as Catherine dozes off. She awakens when Hal, one of Robert's students, exits the house. He has been studying the hundreds of notebooks Robe" https://en.wikipedia.org/wiki/List%20of%20mathematic%20operators,"In mathematics, an operator or transform is a function from one space of functions to another. Operators occur commonly in engineering, physics and mathematics. Many are integral operators and differential operators. In the following L is an operator which takes a function to another function . Here, and are some unspecified function spaces, such as Hardy space, Lp space, Sobolev space, or, more vaguely, the space of holomorphic functions. See also List of transforms List of Fourier-related transforms Transfer operator Fredholm operator Borel transform Glossary of mathematical symbols Operators Operators Operators" https://en.wikipedia.org/wiki/Taxon%20in%20disguise,"In bacteriology, a taxon in disguise is a species, genus or higher unit of biological classification whose evolutionary history reveals has evolved from another unit of a similar or lower rank, making the parent unit paraphyletic. That happens when rapid evolution makes a new species appear so radically different from the ancestral group that it is not (initially) recognised as belonging to the parent phylogenetic group, which is left as an evolutionary grade. While the term is from bacteriology, parallel examples are found throughout the tree of life. For example, four-footed animals have evolved from piscine ancestors but since they are not generally considered fish, they can be said to be ""fish in disguise"". In many cases, the paraphyly can be resolved by reclassifying the taxon in question under the parent group. 
However, in bacteriology, renaming groups may have serious consequences by causing confusion over the identity of pathogens, so it is generally avoided for some groups. Examples Shigella The bacterial genus Shigella is the cause of bacillary dysentery, a potentially-severe infection that kills over a million people every year. The genus (S. dysenteriae, S. flexneri, S. boydii, S. sonnei) has evolved from the common intestinal bacterium Escherichia coli, which renders that species paraphyletic. E. coli itself can also cause serious dysentery, but differences in genetic makeup between E. coli and Shigella cause different medical conditions and symptoms. Escherichia coli is a badly-classified species since some strains share only 20% of their genome. It is so diverse that it should be given a higher taxonomic ranking. However, the medical conditions associated with E. coli itself and with Shigella mean that the current classification is kept unchanged to avoid confusion in medical contexts. Shigella will thus remain ""E. coli in disguise"". B. cereus-group Similarly, the Bacillus species of the B. cereus-group (B. anthracis, B. cereus, B. thuringiensis" https://en.wikipedia.org/wiki/Behavior-based%20robotics,"Behavior-based robotics (BBR) or behavioral robotics is an approach in robotics that focuses on robots that are able to exhibit complex-appearing behaviors despite little internal variable state to model their immediate environment, mostly gradually correcting their actions via sensory-motor links. Principles Behavior-based robotics sets itself apart from traditional artificial intelligence by using biological systems as a model. Classic artificial intelligence typically uses a set of steps to solve problems; it follows a path based on internal representations of events, in contrast to the behavior-based approach. Rather than use preset calculations to tackle a situation, behavior-based robotics relies on adaptability. This advancement has allowed behavior-based robotics to become commonplace in researching and data gathering. Most behavior-based systems are also reactive, which means they need no programming of what a chair looks like, or what kind of surface the robot is moving on. Instead, all the information is gleaned from the input of the robot's sensors. The robot uses that information to gradually correct its actions according to changes in the immediate environment. Behavior-based robots (BBR) usually show more biological-appearing actions than their computing-intensive counterparts, which are very deliberate in their actions. A BBR often makes mistakes, repeats actions, and appears confused, but can also show the anthropomorphic quality of tenacity. Comparisons between BBRs and insects are frequent because of these actions. BBRs are sometimes considered examples of weak artificial intelligence, although some have claimed they are models of all intelligence. Features Most behavior-based robots are programmed with a basic set of features to start them off. They are given a behavioral repertoire to work with, dictating what behaviors to use and when; obstacle avoidance and battery charging can provide a foundation to help the robots learn and succeed. Rather than buil" https://en.wikipedia.org/wiki/Optical%20interconnect,"In integrated circuits, optical interconnects refer to any system of transmitting signals from one part of an integrated circuit to another using light. 
Optical interconnects have been the topic of study due to the high latency and power consumption incurred by conventional metal interconnects in transmitting electrical signals over long distances, such as in interconnects classed as global interconnects. The International Technology Roadmap for Semiconductors (ITRS) has highlighted interconnect scaling as a problem for the semiconductor industry. In electrical interconnects, nonlinear signals (e.g. digital signals) are conventionally transmitted by copper wires, and these electrical wires all have resistance and capacitance which severely limit the rise time of signals when the dimensions of the wires are scaled down. Optical solutions are used to transmit signals over long distances, substituting for electrical interconnection between dies within the integrated circuit (IC) package. In order to control the optical signals inside the small IC package properly, microelectromechanical system (MEMS) technology can be used to integrate the optical components (i.e. optical waveguides, optical fibers, lenses, mirrors, optical actuators, optical sensors etc.) and the electronic parts together effectively. Problems of the current interconnect in the package Conventional physical metal wires possess both resistance and capacitance, limiting the rise time of signals. Bits of information will overlap with each other when the frequency of the signal is increased to a certain level. Benefits of using optical interconnection Optical interconnections can provide benefits over conventional metal wires which include: More predictable timing Reduction of power and area for clock distribution Distance independence of performance of optical interconnects No frequency-dependent Cross-talk Architectural advantages Reducing power dissipation in interconnects Voltage isolation Density of inte" https://en.wikipedia.org/wiki/NPL%20network,"The NPL network, or NPL Data Communications Network, was a local area computer network operated by a team from the National Physical Laboratory in London that pioneered the concept of packet switching. Based on designs first conceived by Donald Davies in 1965, development work began in 1968. Elements of the first version of the network, the Mark I, became operational during 1969, then fully operational in January 1970, and the Mark II version operated from 1973 until 1986. The NPL network and the ARPANET in the United States were the first two computer networks that implemented packet switching, and the NPL network was the first to use high-speed links. It, along with the ARPANET project, laid down the technical foundations of the modern Internet. Origins In 1965, Donald Davies, who was later appointed to head of the NPL Division of Computer Science, proposed a commercial national data network based on packet switching in Proposal for the Development of a National Communications Service for On-line Data Processing. After the proposal was not taken up nationally, during 1966 he headed a team which produced a design for a local network to serve the needs of NPL and prove the feasibility of packet switching. The design was the first to describe the concept of an ""Interface computer"", today known as a router. The next year, a written version of the proposal entitled NPL Data Network was presented by Roger Scantlebury at the Symposium on Operating Systems Principles. 
It described how computers (nodes) used to transmit signals (packets) would be connected by electrical links to re-transmit the signals between and to the nodes, and interface computers would be used to link node networks to so-called time-sharing computers and other users. The interface computers would transmit multiplex signals between networks, and nodes would switch transmissions while connected to electrical circuitry functioning at a rate of processing amounting to mega-bits. In Scantlebury's " https://en.wikipedia.org/wiki/Digital%20signal%20processor,"A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products. The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real-time. Also, dedicated DSPs usually have better power efficiency, thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data or instructions at the same time. Overview Digital signal processing (DSP) algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals (perhaps from audio or video sensors) are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred (or batch) processing is not viable. Most general-purpose microprocessors and operating systems can execute DSP algorithms successfully, but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints. A specialized DSP, however, will tend to provide a lower-cost solution, with better performance, lower latency, and no requirements for specialised cooling or large ba" https://en.wikipedia.org/wiki/Authenticated%20Identity%20Body,"Authenticated Identity Body or AIB is a method allowing parties in a network to share authenticated identity thereby increasing the integrity of their SIP communications. AIBs extend other authentication methods like S/MIME to provide a more specific mechanism to introduce integrity to SIP transmissions. Parties transmitting AIBs cryptographically sign a subset of SIP message headers, and such signatures assert the message originator's identity. To meet requirements of reference integrity (for example in defending against replay attacks) additional SIP message headers such as 'Date' and 'Contact' may be optionally included in the AIB. AIB is described and discussed in RFC 3893: ""For reasons of end-to-end privacy, it may also be desirable to encrypt AIBs [...]. 
While encryption of AIBs entails that only the holder of a specific key can decrypt the body, that single key could be distributed throughout a network of hosts that exist under common policies. The security of the AIB is therefore predicated on the secure distribution of the key. However, for some networks (in which there are federations of trusted hosts under a common policy), the widespread distribution of a decryption key could be appropriate. Some telephone networks, for example, might require this model. When an AIB is encrypted, the AIB should be encrypted before it is signed..."" See also Computer networks Cryptographic software VoIP protocols VoIP software" https://en.wikipedia.org/wiki/List%20of%20variational%20topics,"This is a list of variational topics from mathematics and physics. See calculus of variations for a general introduction. Action (physics) Averaged Lagrangian Brachistochrone curve Calculus of variations Catenoid Cycloid Dirichlet principle Euler–Lagrange equation cf. Action (physics) Fermat's principle Functional (mathematics) Functional derivative Functional integral Geodesic Isoperimetry Lagrangian Lagrangian mechanics Legendre transformation Luke's variational principle Minimal surface Morse theory Noether's theorem Path integral formulation Plateau's problem Prime geodesic Principle of least action Soap bubble Soap film Tautochrone curve Variations" https://en.wikipedia.org/wiki/Paprika%20oleoresin,"Paprika oleoresin (also known as paprika extract and oleoresin paprika) is an oil-soluble extract from the fruits of Capsicum annuum or Capsicum frutescens, and is primarily used as a colouring and/or flavouring in food products. It is composed of vegetable oil (often in the range of 97% to 98%), capsaicin, the main flavouring compound giving pungency in higher concentrations, and capsanthin and capsorubin, the main colouring compounds (among other carotenoids). It is much milder than capsicum oleoresin, often containing no capsaicin at all. Extraction is performed by percolation with a variety of solvents, primarily hexane, which are removed prior to use. Vegetable oil is then added to ensure a uniform color saturation. Uses Foods colored with paprika oleoresin include cheese, orange juice, spice mixtures, sauces, sweets, ketchup, soups, fish fingers, chips, pastries, fries, dressings, seasonings, jellies, bacon, ham, ribs, and among other foods even cod fillets. In poultry feed, it is used to deepen the colour of egg yolks. In the United States, paprika oleoresin is listed as a color additive “exempt from certification”. In Europe, paprika oleoresin (extract), and the compounds capsanthin and capsorubin are designated by E160c. Names and CAS nos" https://en.wikipedia.org/wiki/Programmable%20logic%20array,"A programmable logic array (PLA) is a kind of programmable logic device used to implement combinational logic circuits. The PLA has a set of programmable AND gate planes, which link to a set of programmable OR gate planes, which can then be conditionally complemented to produce an output. It has 2N AND gates for N input variables, and for M outputs from PLA, there should be M OR gates, each with programmable inputs from all of the AND gates. This layout allows for many logic functions to be synthesized in the sum of products canonical forms. 
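A small sketch of the two-plane idea: the AND plane defines product terms over the inputs (true, complemented, or absent), and the OR plane selects which product terms each output sums. The two example functions are arbitrary:

# Sketch of a PLA as two programmable planes: the AND plane lists product terms
# (1 = true input, 0 = complemented input, None = input not used), and the OR
# plane says which product terms each output ORs together. Examples are arbitrary.
def pla_eval(inputs, and_plane, or_plane):
    products = [
        all(inputs[i] == lit for i, lit in enumerate(term) if lit is not None)
        for term in and_plane
    ]
    return [any(products[t] for t in output_terms) for output_terms in or_plane]

and_plane = [                # product terms over inputs (a, b, c)
    (1, 1, None),            # a AND b
    (None, 0, 1),            # (NOT b) AND c
    (1, None, 1),            # a AND c
]
or_plane = [                 # each output ORs a subset of the product terms
    [0, 1],                  # f1 = a.b + b'.c
    [2],                     # f2 = a.c
]
print(pla_eval((1, 0, 1), and_plane, or_plane))   # [True, True]

Programming a PLA amounts to choosing these two tables, which mirrors the sum-of-products procedure described next.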
PLAs differ from programmable array logic devices (PALs and GALs) in that both the AND and OR gate planes are programmable.[PAL has programmable AND gates but fixed OR gates] History In 1970, Texas Instruments developed a mask-programmable IC based on the IBM read-only associative memory or ROAM. This device, the TMS2000, was programmed by altering the metal layer during the production of the IC. The TMS2000 had up to 17 inputs and 18 outputs with 8 JK flip-flops for memory. TI coined the term Programmable Logic Array for this device. Implementation procedure Preparation in SOP (sum of products) form. Obtain the minimum SOP form to reduce the number of product terms to a minimum. Decide the input connection of the AND matrix for generating the required product term. Then decide the input connections of OR matrix to generate the sum terms. Decide the connections of invert matrix. Program the PLA. PLA block diagram: Advantages over read-only memory The desired outputs for each combination of inputs could be programmed into a read-only memory, with the inputs being driven by the address bus and the outputs being read out as data. However, that would require a separate memory location for every possible combination of inputs, including combinations that are never supposed to occur, and also duplicating data for ""don't care"" conditions (for example, logic like ""if input A is 1, then, as far as output X is concerned, w" https://en.wikipedia.org/wiki/Poisson%20wavelet,"In mathematics, in functional analysis, several different wavelets are known by the name Poisson wavelet. In one context, the term ""Poisson wavelet"" is used to denote a family of wavelets labeled by the set of positive integers, the members of which are associated with the Poisson probability distribution. These wavelets were first defined and studied by Karlene A. Kosanovich, Allan R. Moser and Michael J. Piovoso in 1995–96. In another context, the term refers to a certain wavelet which involves a form of the Poisson integral kernel. In still another context, the terminology is used to describe a family of complex wavelets indexed by positive integers which are connected with the derivatives of the Poisson integral kernel. Wavelets associated with Poisson probability distribution Definition For each positive integer n the Poisson wavelet is defined by To see the relation between the Poisson wavelet and the Poisson distribution let X be a discrete random variable having the Poisson distribution with parameter (mean) t and, for each non-negative integer n, let Prob(X = n) = pn(t). Then we have The Poisson wavelet is now given by Basic properties is the backward difference of the values of the Poisson distribution: The ""waviness"" of the members of this wavelet family follows from The Fourier transform of is given The admissibility constant associated with is Poisson wavelet is not an orthogonal family of wavelets. Poisson wavelet transform The Poisson wavelet family can be used to construct the family of Poisson wavelet transforms of functions defined the time domain. Since the Poisson wavelets satisfy the admissibility condition also, functions in the time domain can be reconstructed from their Poisson wavelet transforms using the formula for inverse continuous-time wavelet transforms. 
If f(t) is a function in the time domain its n-th Poisson wavelet transform is given by In the reverse direction, given the n-th Poisson wavelet transform of" https://en.wikipedia.org/wiki/Safe%20operating%20area,"For power semiconductor devices (such as BJT, MOSFET, thyristor or IGBT), the safe operating area (SOA) is defined as the voltage and current conditions over which the device can be expected to operate without self-damage. SOA is usually presented in transistor datasheets as a graph with VCE (collector-emitter voltage) on the abscissa and ICE (collector-emitter current) on the ordinate; the safe 'area' referring to the area under the curve. The SOA specification combines the various limitations of the device — maximum voltage, current, power, junction temperature, secondary breakdown — into one curve, allowing simplified design of protection circuitry. Often, in addition to the continuous rating, separate SOA curves are also plotted for short duration pulse conditions (1 ms pulse, 10 ms pulse, etc.). The safe operating area curve is a graphical representation of the power handling capability of the device under various conditions. The SOA curve takes into account the wire bond current carrying capability, transistor junction temperature, internal power dissipation and secondary breakdown limitations. Limits of the safe operating area Where both current and voltage are plotted on logarithmic scales, the borders of the SOA are straight lines: IC = ICmax — current limit VCE = VCEmax — voltage limit IC VCE = Pmax — dissipation limit, thermal breakdown IC VCEα = const — this is the limit given by the secondary breakdown (bipolar junction transistors only) SOA specifications are useful to the design engineer working on power circuits such as amplifiers and power supplies as they allow quick assessment of the limits of device performance, the design of appropriate protection circuitry, or selection of a more capable device. SOA curves are also important in the design of foldback circuits. Secondary breakdown For a device that makes use of the secondary breakdown effect see Avalanche transistor Secondary breakdown is a failure mode in bipolar power transistors. " https://en.wikipedia.org/wiki/Data-driven%20control%20system,"Data-driven control systems are a broad family of control systems, in which the identification of the process model and/or the design of the controller are based entirely on experimental data collected from the plant. In many control applications, trying to write a mathematical model of the plant is considered a hard task, requiring efforts and time to the process and control engineers. This problem is overcome by data-driven methods, which fit a system model to the experimental data collected, choosing it in a specific models class. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model for a physical system, that includes only those dynamics of the system that are of interest for the control specifications. The direct data-driven methods allow to tune a controller, belonging to a given class, without the need of an identified model of the system. In this way, one can also simply weight process dynamics of interest inside the control cost function, and exclude those dynamics that are out of interest. 
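As a minimal illustration of "fitting a system model to the experimental data collected", the following Python sketch identifies a simple first-order discrete-time model by least squares; the model structure, the simulated plant and all numerical values are assumptions chosen only for illustration, not part of the article.

# Sketch of the "model identification" idea: fit a simple discrete-time
# model to measured input/output data by least squares.
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)                  # excitation signal
y = np.zeros(N)
for k in range(1, N):
    # assumed "true" plant: y[k] = 0.8*y[k-1] + 0.5*u[k-1] + small noise
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

# Regression y[k] ~ a*y[k-1] + b*u[k-1]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta)   # estimated parameters, close to [0.8, 0.5]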
Overview The standard approach to control systems design is organized in two-steps: Model identification aims at estimating a nominal model of the system , where is the unit-delay operator (for discrete-time transfer functions representation) and is the vector of parameters of identified on a set of data. Then, validation consists in constructing the uncertainty set that contains the true system at a certain probability level. Controller design aims at finding a controller achieving closed-loop stability and meeting the required performance with . Typical objectives of system identification are to have as close as possible to , and to have as small as possible. However, from an identification for control perspective, what really matters is the performance achieved by the controller, not the intrinsic quality of the model. One way to deal with unce" https://en.wikipedia.org/wiki/Assimilation%20%28biology%29,"' is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and physical breakdown (oral mastication and stomach churning).chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion bio assimilation, the bioavailability of many compounds is dictated by this second process since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver. Most foods are composed of largely indigestible components depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose; the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase; the enzyme needed to digest cellulose. However some animal and species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads.) This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase. Examples of biological assimilation Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells. Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae. Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble " https://en.wikipedia.org/wiki/Lorentz%20scalar,"In a relativistic theory of physics, a Lorentz scalar is an expression, formed from items of the theory, which evaluates to a scalar, invariant under any Lorentz transformation. A Lorentz scalar may be generated from e.g., the scalar product of vectors, or from contracting tensors of the theory. While the components of vectors and tensors are in general altered under Lorentz transformations, Lorentz scalars remain unchanged. 
A Lorentz scalar is not always immediately seen to be an invariant scalar in the mathematical sense, but the resulting scalar value is invariant under any basis transformation applied to the vector space, on which the considered theory is based. A simple Lorentz scalar in Minkowski spacetime is the spacetime distance (""length"" of their difference) of two fixed events in spacetime. While the ""position""-4-vectors of the events change between different inertial frames, their spacetime distance remains invariant under the corresponding Lorentz transformation. Other examples of Lorentz scalars are the ""length"" of 4-velocities (see below), or the Ricci curvature in a point in spacetime from General relativity, which is a contraction of the Riemann curvature tensor there. Simple scalars in special relativity The length of a position vector In special relativity the location of a particle in 4-dimensional spacetime is given by where is the position in 3-dimensional space of the particle, is the velocity in 3-dimensional space and is the speed of light. The ""length"" of the vector is a Lorentz scalar and is given by where is the proper time as measured by a clock in the rest frame of the particle and the Minkowski metric is given by This is a time-like metric. Often the alternate signature of the Minkowski metric is used in which the signs of the ones are reversed. This is a space-like metric. In the Minkowski metric the space-like interval is defined as We use the space-like Minkowski metric in the rest of this article. The length of a" https://en.wikipedia.org/wiki/Modern%20Arabic%20mathematical%20notation,"Modern Arabic mathematical notation is a mathematical notation based on the Arabic script, used especially at pre-university levels of education. Its form is mostly derived from Western notation, but has some notable features that set it apart from its Western counterpart. The most remarkable of those features is the fact that it is written from right to left following the normal direction of the Arabic script. Other differences include the replacement of the Greek and Latin alphabet letters for symbols with Arabic letters and the use of Arabic names for functions and relations. Features It is written from right to left following the normal direction of the Arabic script. Other differences include the replacement of the Latin alphabet letters for symbols with Arabic letters and the use of Arabic names for functions and relations. The notation exhibits one of the very few remaining vestiges of non-dotted Arabic scripts, as dots over and under letters (i'jam) are usually omitted. Letter cursivity (connectedness) of Arabic is also taken advantage of, in a few cases, to define variables using more than one letter. The most widespread example of this kind of usage is the canonical symbol for the radius of a circle (), which is written using the two letters nūn and qāf. When variable names are juxtaposed (as when expressing multiplication) they are written non-cursively. Variations Notation differs slightly from region to another. In tertiary education, most regions use the Western notation. The notation mainly differs in numeral system used, and in mathematical symbol used. Numeral systems There are three numeral systems used in right to left mathematical notation. ""Western Arabic numerals"" (sometimes called European) are used in western Arabic regions (e.g. Morocco) ""Eastern Arabic numerals"" are used in middle and eastern Arabic regions (e.g. 
Egypt and Syria) ""Eastern Arabic-Indic numerals"" are used in Persian and Urdu speaking regions (e.g. Iran, Pakistan, India) " https://en.wikipedia.org/wiki/Functional%20testing%20%28manufacturing%29,"In manufacturing, functional testing (FCT) is performed during the last phase of the production line. This is often referred to as a final quality control test, which is done to ensure that specifications are carried out by FCTs. The process of FCTs is entailed by the emulation or simulation of the environment in which a product is expected to operate. This is done so to check, and correct any issues with functionality. The environment involved with FCTs consists of any device that communicates with an DUT, the power supply of said DUT, and any loads needed to make the DUT function correctly. Functional tests are performed in an automatic fashion by production line operators using test software. In order for this to be completed, the software will communicate with any external programmable instruments such as I/O boards, digital multimeters, and communication ports. In conjunction with the test fixture, the software that interfaces with the DUT is what makes it possible for a FCT to be performed. Typical vendors Agilent Technologies Acculogic Keysight Circuit Check National Instruments Teradyne Flex (company) 6TL engineering See also Acceptance testing" https://en.wikipedia.org/wiki/Gold%E2%80%93aluminium%20intermetallic,"A gold–aluminium intermetallic is an intermetallic compound of gold and aluminium that occurs at contacts between the two metals. These intermetallics have different properties from the individual metals, which can cause problems in wire bonding in microelectronics. The main compounds formed are Au5Al2 (white plague) and AuAl2 (purple plague), which both form at high temperatures. White plague is the name of the compound Au5Al2 as well as the problem it causes. It has low electrical conductivity, so its formation at the joint leads to an increase of electrical resistance which can lead to total failure. Purple plague (sometimes known as purple death or Roberts-Austen's purple gold) is a brittle, bright-purple compound, AuAl2, or about 78.5% Au and 21.5% Al by mass. AuAl2 is the most stable thermally of the Au–Al intermetallic compounds, with a melting point of 1060°C (see phase diagram), similar to that of pure gold. The process of the growth of the intermetallic layers causes reduction in volume, and hence creates cavities in the metal near the interface between gold and aluminium. Other gold–aluminium intermetallics can cause problems as well. Below 624°C, purple plague is replaced by Au2Al, a tan-colored substance. It is a poor conductor and can cause electrical failure of the joint that can lead to mechanical failure. At lower temperatures, about 400–450°C, an interdiffusion process takes place at the junction. This leads to formation of layers of several intermetallic compounds with different compositions, from gold-rich to aluminium-rich, with different growth rates. Cavities form as the denser, faster-growing layers consume the slower-growing ones. This process, known as Kirkendall voiding, leads to both increased electrical resistance and mechanical weakening of the wire bond. 
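As a quick arithmetic check of the purple-plague composition quoted above (AuAl2, about 78.5% Au and 21.5% Al by mass), using standard atomic masses (an illustrative calculation, not from the article):

\[
\frac{m_{\mathrm{Au}}}{m_{\mathrm{AuAl_2}}} = \frac{196.97}{196.97 + 2(26.98)} \approx 0.785,
\qquad
\frac{2(26.98)}{196.97 + 2(26.98)} \approx 0.215 .
\]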
When the voids are collected along the diffusion front, a process aided by contaminants present in the lattice, it is known as Horsting voiding, a process similar to and often con" https://en.wikipedia.org/wiki/Scientific%20notation,"Scientific notation is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or standard form in the United Kingdom. This base ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators it is usually known as ""SCI"" display mode. In scientific notation, nonzero numbers are written in the form or m times ten raised to the power of n, where n is an integer, and the coefficient m is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer n is called the exponent and the real number m is called the significand or mantissa. The term ""mantissa"" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes m, as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand m is at least 1 but less than 10. Decimal floating point is a computer arithmetic system closely related to scientific notation. History Normalized notation Any real number can be written in the form in many ways: for example, 350 can be written as or or . In normalized scientific notation (called ""standard form"" in the United Kingdom), the exponent n is chosen so that the absolute value of m remains at least one but less than ten (1 ≤ |m| < 10). Thus 350 is written as . This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It i" https://en.wikipedia.org/wiki/Stepping%20level,"In integrated circuits, the stepping level or revision level is a version number that refers to the introduction or revision of one or more photolithographic photomasks within the set of photomasks that is used to pattern an integrated circuit. The term originated from the name of the equipment (""steppers"") that exposes the photoresist to light. Integrated circuits have two primary classes of mask sets: firstly, ""base"" layers that are used to build the structures, such as transistors, that comprise circuit logic and, secondly, ""metal"" layers that connect the circuit logic. Typically, when an integrated circuit manufacturer such as Intel or AMD produces a new stepping (i.e. a revision to the masks), it is because it has found bugs in the logic, has made improvements to the design that permit faster processing, has found a way to increase yield or improve the ""bin splits"" (i.e. create faster transistors and thus faster CPUs), has improved maneuverability to more easily identify marginal circuits, or has reduced the circuit testing time, which can in turn reduce the cost of testing. Many integrated circuits allow interrogation to reveal information about their features, including stepping level. 
For example, executing CPUID instruction with the EAX register set to '1' on x86 CPUs will result in values being placed in other registers that show the CPU's stepping level. Stepping identifiers commonly comprise a letter followed by a number, for example B2. Usually, the letter indicates the revision level of a CPU's base layers and the number indicates the revision level of the metal layers. A change of letter indicates a change to both the base layer mask revision and metal layers whereas a change in the number indicates a change in the metal layer mask revision only. This is analogous to the major/minor revision numbers in software versioning. Base layer revision changes are time consuming and more expensive for the manufacturer, but some fixes are difficult or imposs" https://en.wikipedia.org/wiki/Relative%20rate%20test,"The relative rate test is a genetic comparative test between two ingroups (somewhat closely related species) and an outgroup or “reference species” to compare mutation and evolutionary rates between the species. Each ingroup species is compared independently to the outgroup to determine how closely related the two species are without knowing the exact time of divergence from their closest common ancestor. If more change has occurred on one lineage relative to another lineage since their shared common ancestor, then the outgroup species will be more different from the faster-evolving lineage's species than it is from the slower-evolving lineage's species. This is because the faster-evolving lineage will, by definition, have accumulated more differences since the common ancestor than the slower-evolving lineage. This method can be applied to averaged data (i.e., groups of molecules), or individual molecules. It is possible for individual molecules to show evidence of approximately constant rates of change in different lineages even while the rates differ between different molecules. The relative rate test is a direct internal test of the molecular clock, for a given molecule and a given set of species, and shows that the molecular clock does not need to be (and should never be) assumed: It can be directly assessed from the data itself. Note that the logic can also be applied to any kind of data for which a distance measure can be defined (e.g., even morphological features). Uses The initial use of this method was to assess whether or not there was evidence for different rates of molecular change in different lineages for particular molecules. If there was no evidence of significantly different rates, this would be direct evidence of a molecular clock, and (only) then would allow for a phylogeny to be constructed based on relative branch points (absolute dates for branch points in the phylogeny would require further calibration with the best-attested fossil evidence" https://en.wikipedia.org/wiki/Almost%20everywhere,"In measure theory (a branch of mathematical analysis), a property holds almost everywhere if, in a technical sense, the set for which the property holds takes up nearly all possibilities. The notion of ""almost everywhere"" is a companion notion to the concept of measure zero, and is analogous to the notion of almost surely in probability theory. More specifically, a property holds almost everywhere if it holds for all elements in a set except a subset of measure zero, or equivalently, if the set of elements for which the property holds is conull. 
In cases where the measure is not complete, it is sufficient that the set be contained within a set of measure zero. When discussing sets of real numbers, the Lebesgue measure is usually assumed unless otherwise stated. The term almost everywhere is abbreviated a.e.; in older literature p.p. is used, to stand for the equivalent French language phrase presque partout. A set with full measure is one whose complement is of measure zero. In probability theory, the terms almost surely, almost certain and almost always refer to events with probability 1 not necessarily including all of the outcomes. These are exactly the sets of full measure in a probability space. Occasionally, instead of saying that a property holds almost everywhere, it is said that the property holds for almost all elements (though the term almost all can also have other meanings). Definition If is a measure space, a property is said to hold almost everywhere in if there exists a set with , and all have the property . Another common way of expressing the same thing is to say that ""almost every point satisfies "", or that ""for almost every , holds"". It is not required that the set has measure 0; it may not belong to . By the above definition, it is sufficient that be contained in some set that is measurable and has measure 0. Properties If property holds almost everywhere and implies property , then property holds almost everywhere. This " https://en.wikipedia.org/wiki/Steinhaus%E2%80%93Moser%20notation,"In mathematics, Steinhaus–Moser notation is a notation for expressing certain large numbers. It is an extension (devised by Leo Moser) of Hugo Steinhaus's polygon notation. Definitions a number in a triangle means nn. a number in a square is equivalent to ""the number inside triangles, which are all nested."" a number in a pentagon is equivalent with ""the number inside squares, which are all nested."" etc.: written in an ()-sided polygon is equivalent with ""the number inside nested -sided polygons"". In a series of nested polygons, they are associated inward. The number inside two triangles is equivalent to nn inside one triangle, which is equivalent to nn raised to the power of nn. Steinhaus defined only the triangle, the square, and the circle , which is equivalent to the pentagon defined above. Special values Steinhaus defined: mega is the number equivalent to 2 in a circle: megiston is the number equivalent to 10 in a circle: ⑩ Moser's number is the number represented by ""2 in a megagon"". Megagon is here the name of a polygon with ""mega"" sides (not to be confused with the polygon with one million sides). Alternative notations: use the functions square(x) and triangle(x) let be the number represented by the number in nested -sided polygons; then the rules are: and mega =  megiston =  moser = Mega A mega, ②, is already a very large number, since ② = square(square(2)) = square(triangle(triangle(2))) = square(triangle(22)) = square(triangle(4)) = square(44) = square(256) = triangle(triangle(triangle(...triangle(256)...))) [256 triangles] = triangle(triangle(triangle(...triangle(256256)...))) [255 triangles] ~ triangle(triangle(triangle(...triangle(3.2317 × 10616)...))) [255 triangles] ... Using the other notation: mega = M(2,1,5) = M(256,256,3) With the function we have mega = where the superscript denotes a functional power, not a numerical power. 
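The nesting rules above can be mimicked directly in code. The following Python sketch (the function names are mine, not standard notation) reproduces the small values quoted in the article; anything beyond square(2) = 256 grows far past what can be evaluated.

# Illustrative sketch of the Steinhaus-Moser nesting rules described above.
# Values explode almost immediately, so only tiny arguments are usable.

def triangle(n):
    return n ** n

def nest(f, n, times):
    # n inside `times` nested shapes: apply f repeatedly
    for _ in range(times):
        n = f(n)
    return n

def square(n):
    # n in a square = n inside n nested triangles
    return nest(triangle, n, n)

print(triangle(2))  # 4
print(square(2))    # triangle(triangle(2)) = triangle(4) = 256
# mega = 2 in a circle (pentagon) = square(square(2)) = square(256),
# i.e. 256 inside 256 nested triangles -- far too large to evaluate.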
We have (note the convention that powers are evaluated from right to left): M(25" https://en.wikipedia.org/wiki/UniPro,"UniPro (or Unified Protocol) is a high-speed interface technology for interconnecting integrated circuits in mobile and mobile-influenced electronics. The various versions of the UniPro protocol are created within the MIPI Alliance (Mobile Industry Processor Interface Alliance), an organization that defines specifications targeting mobile and mobile-influenced applications. The UniPro technology and associated physical layers aim to provide high-speed data communication (gigabits/second), low-power operation (low swing signaling, standby modes), low pin count (serial signaling, multiplexing), small silicon area (small packet sizes), data reliability (differential signaling, error recovery) and robustness (proven networking concepts, including congestion management). UniPro version 1.6 concentrates on enabling high-speed point to point communication between chips in mobile electronics. UniPro has provisions for supporting networks consisting of up to 128 UniPro devices (integrated circuit, modules, etc.). Network features are planned in future UniPro releases. In such a networked environment, pairs of UniPro devices are interconnected via so-called links while data packets are routed toward their destination by UniPro switches. These switches are analogous to the routers used in wired LAN based on gigabit Ethernet. But unlike a LAN, the UniPro technology was designed to connect chips within a mobile terminal, rather than to connect computers within a building. History and aims The initiative to develop the UniPro protocol came forth out of a pair of research projects at respectively Nokia Research Center and Philips Research. Both teams independently arrived at the conclusion that the complexity of mobile systems could be reduced by splitting the system design into well-defined functional modules interconnected by a network. The key assumptions were thus that the networking paradigm gave modules well-structured, layered interfaces and that it was time to improve " https://en.wikipedia.org/wiki/Jungle%20chip,"A jungle chip, or jungle IC, is an integrated circuit (IC or ""chip"") found in most analog televisions of the 1990s. It takes a composite video signal from the radio frequency receiver electronics and turns it into separate RGB outputs that can be sent to the cathode ray tube to produce a display. This task had previously required separate analog circuits. Advanced versions generally had a second set of inputs in RGB format that were used to overlay on-screen display imagery. These would be connected to a microcontroller that would handle operations like tuning, sleep mode and running the remote control. A separate input called ""blanking"" switched the jungle outputs between the two inputs on the fly. This was normally triggered at a fixed location on the screen, creating rectangular areas with the digital data overlaying the television signal. This was used for on-screen channel displays, closed captioning support, and similar duties. The internal RGB inputs have led to such televisions having a revival in the retrocomputing market. By running connectors from the RGB pins on the jungle chip to connectors added by the user, typically RCA jacks on the back of the television case, and then turning on the blanking switch permanently, the system is converted to an RGB monitor. 
Since early computers output signals with television timings, NTSC or PAL, using a jungle chip television avoids the need to provide separate timing signals. This contrasts with multisync monitors or similar designs that do not have any ""built-in"" timing and have separate inputs for these signals. Examples of jungle chips include the Motorola MC65585, Philips RDA6361 and Sony CXA1870." https://en.wikipedia.org/wiki/Flotation%20of%20flexible%20objects,"Flotation of flexible objects is a phenomenon in which the bending of a flexible material allows an object to displace a greater amount of fluid than if it were completely rigid. This ability to displace more fluid translates directly into an ability to support greater loads, giving the flexible structure an advantage over a similarly rigid one. Inspiration to study the effects of elasticity is taken from nature, where plants, such as black pepper, and animals living at the water surface have evolved to take advantage of the load-bearing benefits elasticity imparts. History In his work ""On Floating Bodies"", Archimedes famously stated his principle of buoyancy. While this basic idea carried enormous weight and has come to form the basis of understanding why objects float, it is best applied to objects with a characteristic length scale greater than the capillary length. What Archimedes had failed to predict was the influence of surface tension and its impact at small length scales. More recent works, such as that of Keller, have extended these principles by considering the role of surface tension forces on partially submerged bodies. Keller, for instance, demonstrated analytically that the weight of water displaced by a meniscus is equal to the vertical component of the surface tension force. Nonetheless, the role of flexibility and its impact on an object's load-bearing potential is one that did not receive attention until the mid-2000s and onward. In an initial study, Vella studied the load supported by a raft composed of thin, rigid strips. Specifically, he compared the case of floating individual strips to floating an aggregation of strips, wherein the aggregate structure causes portions of the meniscus (and hence, resulting surface tension force) to disappear. By extending his analysis to consider a similar system composed of thin strips of some finite bending stiffness, he found that this latter case was in fact able to support a greater load. A well known work in the area of surface t" https://en.wikipedia.org/wiki/Innumeracy%20%28book%29,"Innumeracy: Mathematical Illiteracy and its Consequences is a 1988 book by mathematician John Allen Paulos about innumeracy (deficiency of numeracy) as the mathematical equivalent of illiteracy: incompetence with numbers rather than words. Innumeracy is a problem with many otherwise educated and knowledgeable people. While many people would be ashamed to admit they are illiterate, there is very little shame in admitting innumeracy by saying things like ""I'm a people person, not a numbers person"", or ""I always hated math"", but Paulos challenges whether that widespread cultural excusing of innumeracy is truly acceptable. Paulos speaks mainly of the common misconceptions about, and inability to deal comfortably with, numbers, and the logic and meaning that they represent. He looks at real-world examples in stock scams, psychics, astrology, sports records, elections, sex discrimination, UFOs, insurance and law, lotteries, and drug testing. 
Paulos discusses innumeracy with quirky anecdotes, scenarios, and facts, encouraging readers in the end to look at their world in a more quantitative way. The book sheds light on the link between innumeracy and pseudoscience. For example, the fortune telling psychic's few correct and general observations are remembered over the many incorrect guesses. He also stresses the problem between the actual number of occurrences of various risks and popular perceptions of those risks happening. The problems of innumeracy come at a great cost to society. Topics include probability and coincidence, innumeracy in pseudoscience, statistics, and trade-offs in society. For example, the danger of getting killed in a car accident is much greater than terrorism and this danger should be reflected in how we allocate our limited resources. Background John Allen Paulos (born July 4, 1945) is an American professor of mathematics at Temple University in Pennsylvania. He is a writer and speaker on mathematics and the importance of mathematic" https://en.wikipedia.org/wiki/Reciprocal%20Fibonacci%20constant,"The reciprocal Fibonacci constant, or ψ, is defined as the sum of the reciprocals of the Fibonacci numbers: The ratio of successive terms in this sum tends to the reciprocal of the golden ratio. Since this is less than 1, the ratio test shows that the sum converges. The value of ψ is known to be approximately . Gosper describes an algorithm for fast numerical approximation of its value. The reciprocal Fibonacci series itself provides O(k) digits of accuracy for k terms of expansion, while Gosper's accelerated series provides O(k 2) digits. ψ is known to be irrational; this property was conjectured by Paul Erdős, Ronald Graham, and Leonard Carlitz, and proved in 1989 by Richard André-Jeannin. The continued fraction representation of the constant is: . See also List of sums of reciprocals" https://en.wikipedia.org/wiki/List%20of%20mathematical%20identities,"This article lists mathematical identities, that is, identically true relations holding in mathematics. 
Bézout's identity (despite its usual name, it is not, properly speaking, an identity) Binomial inverse theorem Binomial identity Brahmagupta–Fibonacci two-square identity Candido's identity Cassini and Catalan identities Degen's eight-square identity Difference of two squares Euler's four-square identity Euler's identity Fibonacci's identity see Brahmagupta–Fibonacci identity or Cassini and Catalan identities Heine's identity Hermite's identity Lagrange's identity Lagrange's trigonometric identities MacWilliams identity Matrix determinant lemma Newton's identity Parseval's identity Pfister's sixteen-square identity Sherman–Morrison formula Sophie Germain identity Sun's curious identity Sylvester's determinant identity Vandermonde's identity Woodbury matrix identity Identities for classes of functions Exterior calculus identities Fibonacci identities: Combinatorial Fibonacci identities and Other Fibonacci identities Hypergeometric function identities List of integrals of logarithmic functions List of topics related to List of trigonometric identities Inverse trigonometric functions Logarithmic identities Summation identities Vector calculus identities See also External links A Collection of Algebraic Identities Matrix Identities Identities" https://en.wikipedia.org/wiki/Language-based%20security,"In computer science, language-based security (LBS) is a set of techniques that may be used to strengthen the security of applications on a high level by using the properties of programming languages. LBS is considered to enforce computer security on an application-level, making it possible to prevent vulnerabilities which traditional operating system security is unable to handle. Software applications are typically specified and implemented in certain programming languages, and in order to protect against attacks, flaws and bugs an application's source code might be vulnerable to, there is a need for application-level security; security evaluating the applications behavior with respect to the programming language. This area is generally known as language-based security. Motivation The use of large software systems, such as SCADA, is taking place all around the world and computer systems constitute the core of many infrastructures. The society relies greatly on infrastructure such as water, energy, communication and transportation, which again all rely on fully functionally working computer systems. There are several well known examples of when critical systems fail due to bugs or errors in software, such as when shortage of computer memory caused LAX computers to crash and hundreds of flights to be delayed (April 30, 2014). Traditionally, the mechanisms used to control the correct behavior of software are implemented at the operating system level. The operating system handles several possible security violations such as memory access violations, stack overflow violations, access control violations, and many others. This is a crucial part of security in computer systems, however by securing the behavior of software on a more specific level, even stronger security can be achieved. Since a lot of properties and behavior of the software is lost in compilation, it is significantly more difficult to detect vulnerabilities in machine code. By evaluating the source code" https://en.wikipedia.org/wiki/QuRiNet,"The Quail Ridge Wireless Mesh Network project is an effort to provide a wireless communications infrastructure to the Quail Ridge Reserve, a wildlife reserve in California in the United States. 
The network is intended to benefit on-site ecological research and provide a wireless mesh network tested for development and analysis. The project is a collaboration between the University of California Natural Reserve System and the Networks Lab at the Department of Computer Science, UC Davis. Project The large-scale wireless mesh network would consist of various sensor networks gathering temperature, visual, and acoustic data at certain locations. This information would then be stored at the field station or relayed further over Ethernet. The backbone nodes would also serve as access points enabling wireless access at their locations. The Quail Ridge Reserve would also be used for further research into wireless mesh networks. External links qurinet.cs.ucdavis.edu spirit.cs.ucdavis.edu nrs.ucdavis.edu/quail.html nrs.ucop.edu Computer networking" https://en.wikipedia.org/wiki/Level%20%28logarithmic%20quantity%29,"In science and engineering, a power level and a field level (also called a root-power level) are logarithmic magnitudes of certain quantities referenced to a standard reference value of the same type. A power level is a logarithmic quantity used to measure power, power density or sometimes energy, with commonly used unit decibel (dB). A field level (or root-power level) is a logarithmic quantity used to measure quantities of which the square is typically proportional to power (for instance, the square of voltage is proportional to power by the inverse of the conductor's resistance), etc., with commonly used units neper (Np) or decibel (dB). The type of level and choice of units indicate the scaling of the logarithm of the ratio between the quantity and its reference value, though a logarithm may be considered to be a dimensionless quantity. The reference values for each type of quantity are often specified by international standards. Power and field levels are used in electronic engineering, telecommunications, acoustics and related disciplines. Power levels are used for signal power, noise power, sound power, sound exposure, etc. Field levels are used for voltage, current, sound pressure. Power level Level of a power quantity, denoted LP, is defined by where P is the power quantity; P0 is the reference value of P. Field (or root-power) level The level of a root-power quantity (also known as a field quantity), denoted LF, is defined by where F is the root-power quantity, proportional to the square root of power quantity; F0 is the reference value of F. If the power quantity P is proportional to F2, and if the reference value of the power quantity, P0, is in the same proportion to F02, the levels LF and LP are equal. The neper, bel, and decibel (one tenth of a bel) are units of level that are often applied to such quantities as power, intensity, or gain. The neper, bel, and decibel are related by ; . Standards Level and its units are define" https://en.wikipedia.org/wiki/List%20of%20mesons,"This list is of all known and predicted scalar, pseudoscalar and vector mesons. See list of particles for a more detailed list of particles found in particle physics. This article contains a list of mesons, unstable subatomic particles composed of one quark and one antiquark. They are part of the hadron particle family—particles made of quarks. The other members of the hadron family are the baryons—subatomic particles composed of three quarks. The main difference between mesons and baryons is that mesons have integer spin (thus are bosons) while baryons are fermions (half-integer spin). 
Because mesons are bosons, the Pauli exclusion principle does not apply to them. Because of this, they can act as force mediating particles on short distances, and thus play a part in processes such as the nuclear interaction. Since mesons are composed of quarks, they participate in both the weak and strong interactions. Mesons with net electric charge also participate in the electromagnetic interaction. They are classified according to their quark content, total angular momentum, parity, and various other properties such as C-parity and G-parity. While no meson is stable, those of lower mass are nonetheless more stable than the most massive mesons, and are easier to observe and study in particle accelerators or in cosmic ray experiments. They are also typically less massive than baryons, meaning that they are more easily produced in experiments, and will exhibit higher-energy phenomena sooner than baryons would. For example, the charm quark was first seen in the J/Psi meson () in 1974, and the bottom quark in the upsilon meson () in 1977. The top quark (the last and heaviest quark to be discovered to date) was first observed at Fermilab in 1995. Each meson has a corresponding antiparticle (antimeson) where quarks are replaced by their corresponding antiquarks and vice versa. For example, a positive pion () is made of one up quark and one down antiquark; and its corresponding anti" https://en.wikipedia.org/wiki/Higher-order%20sinusoidal%20input%20describing%20function," Definition The higher-order sinusoidal input describing functions (HOSIDF) were first introduced by dr. ir. P.W.J.M. Nuij. The HOSIDFs are an extension of the sinusoidal input describing function which describe the response (gain and phase) of a system at harmonics of the base frequency of a sinusoidal input signal. The HOSIDFs bear an intuitive resemblance to the classical frequency response function and define the periodic output of a stable, causal, time invariant nonlinear system to a sinusoidal input signal: This output is denoted by and consists of harmonics of the input frequency: Defining the single sided spectra of the input and output as and , such that yields the definition of the k-th order HOSIDF: Advantages and applications The application and analysis of the HOSIDFs is advantageous both when a nonlinear model is already identified and when no model is known yet. In the latter case the HOSIDFs require little model assumptions and can easily be identified while requiring no advanced mathematical tools. Moreover, even when a model is already identified, the analysis of the HOSIDFs often yields significant advantages over the use of the identified nonlinear model. First of all, the HOSIDFs are intuitive in their identification and interpretation while other nonlinear model structures often yield limited direct information about the behavior of the system in practice. Furthermore, the HOSIDFs provide a natural extension of the widely used sinusoidal describing functions in case nonlinearities cannot be neglected. In practice the HOSIDFs have two distinct applications: Due to their ease of identification, HOSIDFs provide a tool to provide on-site testing during system design. Finally, the application of HOSIDFs to (nonlinear) controller design for nonlinear systems is shown to yield significant advantages over conventional time domain based tuning. 
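A rough numerical sketch of the identification idea described above: excite a nonlinear system with a single sinusoid and read off gain and phase at harmonics of the base frequency from the measured spectra. The cubic nonlinearity, the normalisation and all parameters below are illustrative assumptions, not taken from the article or from Nuij's definitions.

# Rough sketch of measuring harmonic responses of a nonlinear system to a
# single sinusoid, in the spirit of the HOSIDFs. The system and the
# normalisation Y(k*f0)/U(f0)**k are assumptions for illustration only.
import numpy as np

fs, f0, amp, cycles = 2000.0, 10.0, 1.0, 50
t = np.arange(int(fs * cycles / f0)) / fs
u = amp * np.sin(2 * np.pi * f0 * t)

y = u + 0.3 * u**3          # example static nonlinear system (assumed)

U = np.fft.rfft(u) / len(u)  # single-sided spectra (up to scaling)
Y = np.fft.rfft(y) / len(y)

def bin_of(f):
    # index of the FFT bin at frequency f (integer number of cycles recorded)
    return int(round(f / (fs / len(u))))

U1 = U[bin_of(f0)]           # input component at the base frequency
for k in (1, 2, 3):
    Hk = Y[bin_of(k * f0)] / U1**k   # k-th harmonic response (assumed scaling)
    print(k, abs(Hk), np.angle(Hk))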
Electrical engineering Control theory Signal processing" https://en.wikipedia.org/wiki/Glossary%20of%20industrial%20automation,"This glossary of industrial automation is a list of definitions of terms and illustrations related specifically to the field of industrial automation. For a more general view on electric engineering, see Glossary of electrical and electronics engineering. For terms related to engineering in general, see Glossary of engineering. A See also Glossary of engineering Glossary of power electronics Glossary of civil engineering Glossary of mechanical engineering Glossary of structural engineering Notes" https://en.wikipedia.org/wiki/List%20of%20common%20physics%20notations,"This is a list of common physical constants and variables, and their notations. Note that bold text indicates that the quantity is a vector. Latin characters Greek characters Other characters See also List of letters used in mathematics and science Glossary of mathematical symbols List of mathematical uses of Latin letters Greek letters used in mathematics, science, and engineering Physical constant Physical quantity International System of Units ISO 31" https://en.wikipedia.org/wiki/Cisco%20Certified%20Entry%20Networking%20Technician,"The Cisco Certified Entry Networking Technician (CCENT) certification was the first stage of Cisco's certification system. The certification was retired on 24 February 2020. The CCENT certification was an interim step to Associate level or directly with CCNA and CCDA certifications. While the CCENT covered basic networking knowledge; it did not get involved with the more intricate technical aspects of the Cisco routing and switching and network design. The certification validated the skills essential for entry-level network support positions. CCENT qualified individuals have the knowledge and skill to install, manage, maintain and troubleshoot a small enterprise branch network, including network security. The CCENT curriculum covered networking fundamentals, WAN technologies, basic security, routing and switching fundamentals, and configuring simple networks. The applicable training was the Cisco ICND1 (""Interconnecting Cisco Network Devices, Part 1"") and the exam was (""100-105"" ICND1), costing $165 retail. The certification was valid for 3 years. The CCENT qualifying exam, ICND1 was retired on 24 February 2020. Existing CCENT holders will continue to have active and valid CCENT certification 3 years from issue date. See also CCNA Cisco CCDA certification" https://en.wikipedia.org/wiki/Biological%20imaging,"Biological imaging may refer to any imaging technique used in biology. 
Typical examples include: Bioluminescence imaging, a technique for studying laboratory animals using luminescent protein Calcium imaging, determining the calcium status of a tissue using fluorescent light Diffuse optical imaging, using near-infrared light to generate images of the body Diffusion-weighted imaging, a type of MRI that uses water diffusion Fluorescence lifetime imaging, using the decay rate of a fluorescent sample Gallium imaging, a nuclear medicine method for the detection of infections and cancers Imaging agent, a chemical designed to allow clinicians to determine whether a mass is benign or malignant Imaging studies, which includes many medical imaging techniques Magnetic resonance imaging (MRI), a non-invasive method to render images of living tissues Magneto-acousto-electrical tomography (MAET), is an imaging modality to image the electrical conductivity of biological tissues Medical imaging, creating images of the human body or parts of it, to diagnose or examine disease Microscopy, creating images of objects or features too small to be detectable by the naked human eye Molecular imaging, used to study molecular pathways inside organisms Non-contact thermography, is the field of thermography that derives diagnostic indications from infrared images of the human body. Nuclear medicine, uses administered radioactive substances to create images of internal organs and their function. Optical imaging, using light as an investigational tool for biological research and medical diagnosis Optoacoustic imaging, using the photothermal effect, for the accuracy of spectroscopy with the depth resolution of ultrasound Photoacoustic Imaging, a technique to detect vascular disease and cancer using non-ionizing laser pulses Ultrasound imaging, using very high frequency sound to visualize muscles and internal organs" https://en.wikipedia.org/wiki/Feller%27s%20coin-tossing%20constants,"Feller's coin-tossing constants are a set of numerical constants which describe asymptotic probabilities that in n independent tosses of a fair coin, no run of k consecutive heads (or, equally, tails) appears. William Feller showed that if this probability is written as p(n,k) then where αk is the smallest positive real root of and Values of the constants For the constants are related to the golden ratio, , and Fibonacci numbers; the constants are and . The exact probability p(n,2) can be calculated either by using Fibonacci numbers, p(n,2) =  or by solving a direct recurrence relation leading to the same result. For higher values of , the constants are related to generalizations of Fibonacci numbers such as the tribonacci and tetranacci numbers. The corresponding exact probabilities can be calculated as p(n,k) = . Example If we toss a fair coin ten times then the exact probability that no pair of heads come up in succession (i.e. n = 10 and k = 2) is p(10,2) =  = 0.140625. The approximation gives 1.44721356...×1.23606797...−11 = 0.1406263..." https://en.wikipedia.org/wiki/Starch%20production,"Starch production is an isolation of starch from plant sources. It takes place in starch plants. Starch industry is a part of food processing which is using starch as a starting material for production of starch derivatives, hydrolysates, dextrins. At first, the raw material for the preparation of the starch was wheat. 
Currently main starch sources are: maize (in America, China and Europe) – 70%, potatoes (in Europe) – 12%, wheat - 8% (in Europe and Australia), tapioca - 9% (South East Asia and South America), rice, sorghum and other - 1%. Potato starch production The production of potato starch comprises the steps such as delivery and unloading potatoes, cleaning, rasping of tubers, potato juice separation, starch extraction, starch milk refination, dewatering of refined starch milk and starch drying. The potato starch production supply chain varies significantly by region. For example, potato starch in Europe is produced from potatoes grown specifically for this purpose. However, in the US, potatoes are not grown for starch production and manufacturers must source raw material from food processor waste streams. The characteristics of these waste streams can vary significantly and require further processing by the US potato starch manufacturer to ensure the end-product functionality and specifications are acceptable. Delivery and unloading potatoes Potatoes are delivered to the starch plants via road or rail transport. Unloading of potatoes could be done in two ways: dry - using elevators and tippers, wet - using strong jet of water. Cleaning Coarsely cleaning of potatoes takes place during the transport of potatoes to the scrubber by channel. In addition, before the scrubber, straw and stones separators are installed. The main cleaning is conducted in scrubber (different kinds of high specialized machines are used). The remaining stones, sludge and light wastes are removed at this step. Water used for washing is then purified and recycled back into th" https://en.wikipedia.org/wiki/List%20of%20manifolds,"This is a list of particular manifolds, by Wikipedia page. See also list of geometric topology topics. For categorical listings see :Category:Manifolds and its subcategories. Generic families of manifolds Euclidean space, Rn n-sphere, Sn n-torus, Tn Real projective space, RPn Complex projective space, CPn Quaternionic projective space, HPn Flag manifold Grassmann manifold Stiefel manifold Lie groups provide several interesting families. See Table of Lie groups for examples. See also: List of simple Lie groups and List of Lie group topics. Manifolds of a specific dimension 1-manifolds Circle, S1 Long line Real line, R Real projective line, RP1 ≅ S1 2-manifolds Cylinder, S1 × R Klein bottle, RP2 # RP2 Klein quartic (a genus 3 surface) Möbius strip Real projective plane, RP2 Sphere, S2 Surface of genus g Torus Double torus 3-manifolds 3-sphere, S3 3-torus, T3 Poincaré homology sphere SO(3) ≅ RP3 Solid Klein bottle Solid torus Whitehead manifold Meyerhoff manifold Weeks manifold For more examples see 3-manifold. 4-manifolds Complex projective plane Del Pezzo surface E8 manifold Enriques surface Exotic R4 Hirzebruch surface K3 surface For more examples see 4-manifold. 
Special types of manifolds Manifolds related to spheres Brieskorn manifold Exotic sphere Homology sphere Homotopy sphere Lens space Spherical 3-manifold Special classes of Riemannian manifolds Einstein manifold Ricci-flat manifold G2 manifold Kähler manifold Calabi–Yau manifold Hyperkähler manifold Quaternionic Kähler manifold Riemannian symmetric space Spin(7) manifold Categories of manifolds Manifolds definable by a particular choice of atlas Affine manifold Analytic manifold Complex manifold Differentiable (smooth) manifold Piecewise linear manifold Lipschitz manifold Topological manifold Manifolds with additional structure Almost complex manifold Almost symplectic manifold Calibrated manifold Complex manifold Contac" https://en.wikipedia.org/wiki/Transport%20theorem,"The transport theorem (or transport equation, rate of change transport theorem or basic kinematic equation) is a vector equation that relates the time derivative of a Euclidean vector as evaluated in a non-rotating coordinate system to its time derivative in a rotating reference frame. It has important applications in classical mechanics and analytical dynamics and diverse fields of engineering. A Euclidean vector represents a certain magnitude and direction in space that is independent of the coordinate system in which it is measured. However, when taking a time derivative of such a vector one actually takes the difference between two vectors measured at two different times t and t+dt. In a rotating coordinate system, the coordinate axes can have different directions at these two times, such that even a constant vector can have a non-zero time derivative. As a consequence, the time derivative of a vector measured in a rotating coordinate system can be different from the time derivative of the same vector in a non-rotating reference system. For example, the velocity vector of an airplane as evaluated using a coordinate system that is fixed to the earth (a rotating reference system) is different from its velocity as evaluated using a coordinate system that is fixed in space. The transport theorem provides a way to relate time derivatives of vectors between a rotating and non-rotating coordinate system, it is derived and explained in more detail in rotating reference frame and can be written as: Here f is the vector of which the time derivative is evaluated in both the non-rotating, and rotating coordinate system. The subscript r designates its time derivative in the rotating coordinate system and the vector Ω is the angular velocity of the rotating coordinate system. The Transport Theorem is particularly useful for relating velocities and acceleration vectors between rotating and non-rotating coordinate systems. Reference states: ""Despite of its importance in cla" https://en.wikipedia.org/wiki/Fundamental%20theorem%20of%20topos%20theory,"In mathematics, The fundamental theorem of topos theory states that the slice of a topos over any one of its objects is itself a topos. Moreover, if there is a morphism in then there is a functor which preserves exponentials and the subobject classifier. The pullback functor For any morphism f in there is an associated ""pullback functor"" which is key in the proof of the theorem. For any other morphism g in which shares the same codomain as f, their product is the diagonal of their pullback square, and the morphism which goes from the domain of to the domain of f is opposite to g in the pullback square, so it is the pullback of g along f, which can be denoted as . 
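For readability, the pullback square described in the paragraph above can be drawn as follows (a sketch assuming f: X → Z and g: Y → Z share the codomain Z, and writing f*g for the pullback of g along f, a symbol that is missing from the extracted text):

\[
\begin{array}{ccc}
X \times_Z Y & \longrightarrow & Y \\
\downarrow {\scriptstyle f^{*}g} & & \downarrow {\scriptstyle g} \\
X & \xrightarrow{\;\;f\;\;} & Z
\end{array}
\]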
Note that a topos is isomorphic to the slice over its own terminal object, i.e. , so for any object A in there is a morphism and thereby a pullback functor , which is why any slice is also a topos. For a given slice let denote an object of it, where X is an object of the base category. Then is a functor which maps: . Now apply to . This yields so this is how the pullback functor maps objects of to . Furthermore, note that any element C of the base topos is isomorphic to , therefore if then and so that is indeed a functor from the base topos to its slice . Logical interpretation Consider a pair of ground formulas and whose extensions and (where the underscore here denotes the null context) are objects of the base topos. Then implies if and only if there is a monic from to . If these are the case then, by theorem, the formula is true in the slice , because the terminal object of the slice factors through its extension . In logical terms, this could be expressed as so that slicing by the extension of would correspond to assuming as a hypothesis. Then the theorem would say that making a logical assumption does not change the rules of topos logic. See also Timeline of category theory and related mathematics Deduction Theorem" https://en.wikipedia.org/wiki/Food%20browning,"Browning is the process of food turning brown due to the chemical reactions that take place within. The process of browning is one of the chemical reactions that take place in food chemistry and represents an interesting research topic regarding health, nutrition, and food technology. Though there are many different ways food chemically changes over time, browning in particular falls into two main categories: enzymatic versus non-enzymatic browning processes. Browning has many important implications on the food industry relating to nutrition, technology, and economic cost. Researchers are especially interested in studying the control (inhibition) of browning and the different methods that can be employed to maximize this inhibition and ultimately prolong the shelf life of food. Enzymatic browning Enzymatic browning is one of the most important reactions that takes place in most fruits and vegetables as well as in seafood. These processes affect the taste, color, and value of such foods. Generally, it is a chemical reaction involving polyphenol oxidase (PPO), catechol oxidase, and other enzymes that create melanins and benzoquinone from natural phenols. Enzymatic browning (also called oxidation of foods) requires exposure to oxygen. It begins with the oxidation of phenols by polyphenol oxidase into quinones, whose strong electrophilic state causes high susceptibility to a nucleophilic attack from other proteins. These quinones are then polymerized in a series of reactions, eventually resulting in the formation of brown pigments (melanosis) on the surface of the food. The rate of enzymatic browning is reflected by the amount of active polyphenol oxidases present in the food. Hence, most research into methods of preventing enzymatic browning has been directed towards inhibiting polyphenol oxidase activity. However, not all browning of food produces negative effects. Examples of beneficial enzymatic browning: Developing color and flavor in coffee, cocoa beans, a" https://en.wikipedia.org/wiki/Cache%20hierarchy,"Cache hierarchy, or multi-level caches, refers to a memory architecture that uses a hierarchy of memory stores based on varying access speeds to cache data. 
Highly requested data is cached in high-speed access memory stores, allowing swifter access by central processing unit (CPU) cores. Cache hierarchy is a form and part of memory hierarchy and can be considered a form of tiered storage. This design was intended to allow CPU cores to process faster despite the memory latency of main memory access. Accessing main memory can act as a bottleneck for CPU core performance as the CPU waits for data, while making all of main memory high-speed may be prohibitively expensive. High-speed caches are a compromise allowing high-speed access to the data most-used by the CPU, permitting a faster CPU clock. Background In the history of computer and electronic chip development, there was a period when increases in CPU speed outpaced the improvements in memory access speed. The gap between the speed of CPUs and memory meant that the CPU would often be idle. CPUs were increasingly capable of running and executing larger amounts of instructions in a given time, but the time needed to access data from main memory prevented programs from fully benefiting from this capability. This issue motivated the creation of memory models with higher access rates in order to realize the potential of faster processors. This resulted in the concept of cache memory, first proposed by Maurice Wilkes, a British computer scientist at the University of Cambridge in 1965. He called such memory models ""slave memory"". Between roughly 1970 and 1990, papers and articles by Anant Agarwal, Alan Jay Smith, Mark D. Hill, Thomas R. Puzak, and others discussed better cache memory designs. The first cache memory models were implemented at the time, but even as researchers were investigating and proposing better designs, the need for faster memory models continued. This need resulted from the fact that although ear" https://en.wikipedia.org/wiki/Cache%20control%20instruction,"In computing, a cache control instruction is a hint embedded in the instruction stream of a processor intended to improve the performance of hardware caches, using foreknowledge of the memory access pattern supplied by the programmer or compiler. They may reduce cache pollution, reduce bandwidth requirement, bypass latencies, by providing better control over the working set. Most cache control instructions do not affect the semantics of a program, although some can. Examples Several such instructions, with variants, are supported by several processor instruction set architectures, such as ARM, MIPS, PowerPC, and x86. Prefetch Also termed data cache block touch, the effect is to request loading the cache line associated with a given address. This is performed by the PREFETCH instruction in the x86 instruction set. Some variants bypass higher levels of the cache hierarchy, which is useful in a 'streaming' context for data that is traversed once, rather than held in the working set. The prefetch should occur sufficiently far ahead in time to mitigate the latency of memory access, for example in a loop traversing memory linearly. The GNU Compiler Collection intrinsic function __builtin_prefetch can be used to invoke this in the programming languages C or C++. Instruction prefetch A variant of prefetch for the instruction cache. Data cache block allocate zero This hint is used to prepare cache lines before overwriting the contents completely. In this example, the CPU needn't load anything from main memory. 
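As a back-of-the-envelope illustration of the compromise described in the cache hierarchy entry above, the following Python sketch computes an average memory access time for a hypothetical two-level hierarchy; the hit rates and latencies are assumed values for illustration only, not figures from the source:

def amat(l1_hit_time, l1_hit_rate, l2_hit_time, l2_hit_rate, mem_time):
    """Average memory access time (cycles) for an L1 -> L2 -> DRAM lookup."""
    l1_miss = 1.0 - l1_hit_rate
    l2_miss = 1.0 - l2_hit_rate
    return l1_hit_time + l1_miss * (l2_hit_time + l2_miss * mem_time)

# Assumed example values (cycles); real machines differ.
print(amat(l1_hit_time=4, l1_hit_rate=0.95,
           l2_hit_time=12, l2_hit_rate=0.90, mem_time=200))
# ~5.6 cycles on average, versus 200 cycles if every access went to DRAM.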
The semantic effect is equivalent to an aligned memset of a cache-line sized block to zero, but the operation is effectively free. Data cache block invalidate This hint is used to discard cache lines, without committing their contents to main memory. Care is needed since incorrect results are possible. Unlike other cache hints, the semantics of the program are significantly modified. This is used in conjunction with allocate zero for managing temporary data. " https://en.wikipedia.org/wiki/Sampling%20%28medicine%29,"In medicine, sampling is gathering of matter from the body to aid in the process of a medical diagnosis and/or evaluation of an indication for treatment, further medical tests or other procedures. In this sense, the sample is the gathered matter, and the sampling tool or sampler is the person or material to collect the sample. Sampling is a prerequisite for many medical tests, but generally not for medical history, physical examination and radiologic tests. By sampling technique Obtaining excretions or materials that leave the body anyway, such as urine, stool, sputum, or vomitus, by direct collection as they exit. A sample of saliva can also be collected from the mouth. Excision (cutting out), a surgical method for the removal of solid or soft tissue samples. Puncture (also called centesis) followed by aspiration is the main method used for sampling of many types of tissues and body fluids. Examples are thoracocentesis to sample pleural fluid, and amniocentesis to sample amniotic fluid. The main method of centesis, in turn, is fine needle aspiration, but there are also somewhat differently designed needles, such as for bone marrow aspiration. Puncture without aspiration may suffice in, for example, capillary blood sampling. Scraping or swiping. In a Pap test, cells are scraped off a uterine cervix with a special spatula and brush or a special broom device that is inserted through a vagina without having to puncture any tissue. Epithelial cells for DNA testing can be obtained by swiping the inside of a cheek in a mouth with a swab. Biopsy or cytopathology In terms of sampling technique, a biopsy generally refers to a preparation where the normal tissue structure is preserved, availing for examination of both individual cells and their organization for the study of histology, while a sample for cytopathology is prepared primarily for the examination of individual cells, not necessarily preserving the tissue structure. Examples of biopsy procedures are bone ma" https://en.wikipedia.org/wiki/Formula,"In science, a formula is a concise way of expressing information symbolically, as in a mathematical formula or a chemical formula. The informal use of the term formula in science refers to the general construct of a relationship between given quantities. The plural of formula can be either formulas (from the most common English plural noun form) or, under the influence of scientific Latin, formulae (from the original Latin). In mathematics In mathematics, a formula generally refers to an equation relating one mathematical expression to another, with the most important ones being mathematical theorems. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion. 
However, having done this once in terms of some parameter (the radius for example), mathematicians have produced a formula to describe the volume of a sphere in terms of its radius: Having obtained this result, the volume of any sphere can be computed as long as its radius is known. Here, notice that the volume V and the radius r are expressed as single letters instead of words or phrases. This convention, while less important in a relatively simple formula, means that mathematicians can more quickly manipulate formulas which are larger and more complex. Mathematical formulas are often algebraic, analytical or in closed form. In a general context, formulas are often a manifestation of mathematical model to real world phenomena, and as such can be used to provide solution (or approximated solution) to real world problems, with some being more general than others. For example, the formula is an expression of Newton's second law, and is applicable to a wide range of physical situations. Other formulas, such as the use of the equation of a sine curve to model the movement of the tides in a bay, may be created to solve a particular problem. In all cases, however, formulas form the basis for calculations. Expr" https://en.wikipedia.org/wiki/EEG%20analysis,"EEG analysis is exploiting mathematical signal analysis methods and computer technology to extract information from electroencephalography (EEG) signals. The targets of EEG analysis are to help researchers gain a better understanding of the brain; assist physicians in diagnosis and treatment choices; and to boost brain-computer interface (BCI) technology. There are many ways to roughly categorize EEG analysis methods. If a mathematical model is exploited to fit the sampled EEG signals, the method can be categorized as parametric, otherwise, it is a non-parametric method. Traditionally, most EEG analysis methods fall into four categories: time domain, frequency domain, time-frequency domain, and nonlinear methods. There are also later methods including deep neural networks (DNNs). Methods Frequency domain methods Frequency domain analysis, also known as spectral analysis, is the most conventional yet one of the most powerful and standard methods for EEG analysis. It gives insight into information contained in the frequency domain of EEG waveforms by adopting statistical and Fourier Transform methods. Among all the spectral methods, power spectral analysis is the most commonly used, since the power spectrum reflects the 'frequency content' of the signal or the distribution of signal power over frequency. Time domain methods There are two important methods for time domain EEG analysis: Linear Prediction and Component Analysis. Generally, Linear Prediction gives the estimated value equal to a linear combination of the past output value with the present and past input value. And Component Analysis is an unsupervised method in which the data set is mapped to a feature set. Notably, the parameters in time domain methods are entirely based on time, but they can also be extracted from statistical moments of the power spectrum. As a result, time domain method builds a bridge between physical time interpretation and conventional spectral analysis. Besides, time domain met" https://en.wikipedia.org/wiki/Model-driven%20security,"Model-driven security (MDS) means applying model-driven approaches (and especially the concepts behind model-driven software development) to security. 
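The formula entry above alludes to the sphere-volume formula without reproducing it; the formula presumably intended is the familiar V = \frac{4}{3}\pi r^{3}, so a sphere of radius 2, for instance, has volume 32\pi/3 \approx 33.5.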
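To make the power spectral analysis mentioned in the EEG analysis entry above concrete, here is a minimal NumPy sketch; the 250 Hz sampling rate and the 10 Hz "alpha-like" component are invented for illustration:

import numpy as np

fs = 250.0                          # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)        # 10 seconds of samples
rng = np.random.default_rng(0)
# Synthetic "EEG": a 10 Hz oscillation buried in noise.
x = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=1.0, size=t.size)

# Power spectrum via the FFT (periodogram): |X(f)|^2 over frequency.
freqs = np.fft.rfftfreq(x.size, d=1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2 / x.size

print(freqs[np.argmax(power[1:]) + 1])   # peak frequency, ~10 Hz (DC bin skipped)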
Development of the concept The general concept of Model-driven security in its earliest forms has been around since the late 1990s (mostly in university research), and was first commercialized around 2002. There is also a body of later scientific research in this area, which continues to this day. A more specific definition of Model-driven security specifically applies model-driven approaches to automatically generate technical security implementations from security requirements models. In particular, ""Model driven security (MDS) is the tool supported process of modelling security requirements at a high level of abstraction, and using other information sources available about the system (produced by other stakeholders). These inputs, which are expressed in Domain Specific Languages (DSL), are then transformed into enforceable security rules with as little human intervention as possible. MDS explicitly also includes the run-time security management (e.g. entitlements/authorisations), i.e. run-time enforcement of the policy on the protected IT systems, dynamic policy updates and the monitoring of policy violations."" Model-driven security is also well-suited for automated auditing, reporting, documenting, and analysis (e.g. for compliance and accreditation), because the relationships between models and technical security implementations are traceably defined through the model-transformations. Opinions of industry analysts Several industry analyst sources state that MDS ""will have a significant impact as information security infrastructure is required to become increasingly real-time, automated and adaptive to changes in the organisation and its environment"". Many information technology architectures today are built to support adaptive changes (e.g. Service Oriented Architectures (SOA) and so-called Platform-as-a-" https://en.wikipedia.org/wiki/Security%20domain,"A security domain is the determining factor in the classification of an enclave of servers/computers. A network with a different security domain is kept separate from other networks. For example, NIPRNet, SIPRNet, JWICS, and NSANet are all kept separate. A security domain is considered to be an application or collection of applications that all trust a common security token for authentication, authorization or session management. Generally speaking, a security token is issued to a user after the user has actively authenticated with a user ID and password to the security domain. Examples of a security domain include: All the web applications that trust a session cookie issued by a Web Access Management product All the Windows applications and services that trust a Kerberos ticket issued by Active Directory In an identity federation that spans two different organizations that share a business partner, customer or business process outsourcing relation – a partner domain would be another security domain with which users and applications (from the local security domain) interact. Computer networking" https://en.wikipedia.org/wiki/Masreliez%27s%20theorem,"Masreliez theorem describes a recursive algorithm within the technology of extended Kalman filter, named after the Swedish-American physicist John Masreliez, who is its author. The algorithm estimates the state of a dynamic system with the help of often incomplete measurements marred by distortion. Masreliez's theorem produces estimates that are quite good approximations to the exact conditional mean in non-Gaussian additive outlier (AO) situations. 
Some evidence for this is provided by Monte Carlo simulations. The key approximation property used to construct these filters is that the state prediction density is approximately Gaussian. Masreliez discovered in 1975 that this approximation yields intuitively appealing non-Gaussian filter recursions with data-dependent covariance (unlike the Gaussian case); the derivation also provides one of the simplest ways of establishing the standard Kalman filter recursions. Some theoretical justification for the use of the Masreliez approximation is provided by the ""continuity of state prediction densities"" theorem in Martin (1979). See also Control engineering Hidden Markov model Bayes' theorem Robust optimization Probability theory Nyquist–Shannon sampling theorem" https://en.wikipedia.org/wiki/Quasistatic%20approximation,"The term quasistatic approximation is used in different domains with different meanings. In its most common sense, the quasistatic approximation refers to equations that keep a static form (do not involve time derivatives) even if some quantities are allowed to vary slowly with time. In electromagnetism it refers to mathematical models that can be used to describe devices that do not produce significant amounts of electromagnetic waves, for instance the capacitor and the coil in electrical networks. Overview The quasistatic approximation can be understood through the idea that the sources in the problem change sufficiently slowly that the system can be taken to be in equilibrium at all times. This approximation can then be applied to areas such as classical electromagnetism, fluid mechanics, magnetohydrodynamics, thermodynamics, and more generally systems described by hyperbolic partial differential equations involving both spatial and time derivatives. In simple cases, the quasistatic approximation is allowed when the typical spatial scale divided by the typical temporal scale is much smaller than the characteristic velocity with which information is propagated. The problem gets more complicated when several length and time scales are involved. In the strict sense of the term, the quasistatic case corresponds to a situation where all time derivatives can be neglected. However, some equations can be considered quasistatic while others are not, leaving the overall system dynamic; there is no general consensus in such cases. Fluid dynamics In fluid dynamics, only quasi-hydrostatics (where no time-derivative term is present) is considered a quasi-static approximation. Flows, as well as acoustic wave propagation, are usually considered dynamic. Thermodynamics In thermodynamics, a distinction between quasistatic regimes and dynamic ones is usually made in terms of equilibrium thermodynamics versus non-equilibrium thermodynamics. As in electromagnetism" https://en.wikipedia.org/wiki/Frame%20%28linear%20algebra%29,"In linear algebra, a frame of an inner product space is a generalization of a basis of a vector space to sets that may be linearly dependent. In the terminology of signal processing, a frame provides a redundant, stable way of representing a signal. Frames are used in error detection and correction, in the design and analysis of filter banks, and more generally in applied mathematics, computer science, and engineering.
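For orientation on the Masreliez's theorem entry above, the sketch below shows the standard Gaussian Kalman recursion that Masreliez-type filters build on; it is not Masreliez's robust non-Gaussian update, and the scalar model and noise levels are assumed for illustration:

import numpy as np

# Standard scalar Kalman recursion (the Gaussian baseline, not Masreliez's
# robust update). Assumed model: x_k = a*x_{k-1} + w_k,  y_k = x_k + v_k.
def kalman_scalar(ys, a=0.95, q=0.1, r=1.0, x0=0.0, p0=1.0):
    x, p = x0, p0
    out = []
    for y in ys:
        x_pred, p_pred = a * x, a * a * p + q        # predict
        k = p_pred / (p_pred + r)                    # Kalman gain
        x = x_pred + k * (y - x_pred)                # update (conditional mean)
        p = (1.0 - k) * p_pred
        out.append(x)
    return out

rng = np.random.default_rng(1)
truth = np.zeros(50)
for i in range(1, 50):
    truth[i] = 0.95 * truth[i - 1] + rng.normal(scale=0.1 ** 0.5)
obs = truth + rng.normal(scale=1.0, size=50)
print(kalman_scalar(obs)[-1], truth[-1])             # estimate tracks the state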
Definition and motivation Motivating example: computing a basis from a linearly dependent set Suppose we have a set of vectors in the vector space V and we want to express an arbitrary element as a linear combination of the vectors , that is, we want to find coefficients such that If the set does not span , then such coefficients do not exist for every such . If spans and also is linearly independent, this set forms a basis of , and the coefficients are uniquely determined by . If, however, spans but is not linearly independent, the question of how to determine the coefficients becomes less apparent, in particular if is of infinite dimension. Given that spans and is linearly dependent, one strategy is to remove vectors from the set until it becomes linearly independent and forms a basis. There are some problems with this plan: Removing arbitrary vectors from the set may cause it to be unable to span before it becomes linearly independent. Even if it is possible to devise a specific way to remove vectors from the set until it becomes a basis, this approach may become unfeasible in practice if the set is large or infinite. In some applications, it may be an advantage to use more vectors than necessary to represent . This means that we want to find the coefficients without removing elements in . The coefficients will no longer be uniquely determined by . Therefore, the vector can be represented as a linear combination of in more than one way. Formal definition Let V be an inner product space and be a set of vectors in . Th" https://en.wikipedia.org/wiki/Mathematics%20and%20art,"Mathematics and art are related in a variety of ways. Mathematics has itself been described as an art motivated by beauty. Mathematics can be discerned in arts such as music, dance, painting, architecture, sculpture, and textiles. This article focuses, however, on mathematics in the visual arts. Mathematics and art have a long historical relationship. Artists have used mathematics since the 4th century BC when the Greek sculptor Polykleitos wrote his Canon, prescribing proportions conjectured to have been based on the ratio 1: for the ideal male nude. Persistent popular claims have been made for the use of the golden ratio in ancient art and architecture, without reliable evidence. In the Italian Renaissance, Luca Pacioli wrote the influential treatise De divina proportione (1509), illustrated with woodcuts by Leonardo da Vinci, on the use of the golden ratio in art. Another Italian painter, Piero della Francesca, developed Euclid's ideas on perspective in treatises such as De Prospectiva Pingendi, and in his paintings. The engraver Albrecht Dürer made many references to mathematics in his work Melencolia I. In modern times, the graphic artist M. C. Escher made intensive use of tessellation and hyperbolic geometry, with the help of the mathematician H. S. M. Coxeter, while the De Stijl movement led by Theo van Doesburg and Piet Mondrian explicitly embraced geometrical forms. Mathematics has inspired textile arts such as quilting, knitting, cross-stitch, crochet, embroidery, weaving, Turkish and other carpet-making, as well as kilim. In Islamic art, symmetries are evident in forms as varied as Persian girih and Moroccan zellige tilework, Mughal jali pierced stone screens, and widespread muqarnas vaulting. Mathematics has directly influenced art with conceptual tools such as linear perspective, the analysis of symmetry, and mathematical objects such as polyhedra and the Möbius strip. 
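To make the motivation in the frame (linear algebra) entry above concrete, the NumPy sketch below uses three linearly dependent unit vectors in the plane (an invented "Mercedes-Benz"-style example) and reconstructs a vector through the frame operator and the canonical dual frame:

import numpy as np

# Three unit vectors at 120-degree angles: they span R^2 but are linearly dependent.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # rows are frame vectors f_i

S = F.T @ F                       # frame operator S = sum_i f_i f_i^T (here 1.5 * I)
F_dual = F @ np.linalg.inv(S)     # canonical dual frame vectors S^{-1} f_i (rows)

x = np.array([2.0, -1.0])
coeffs = F_dual @ x               # analysis coefficients <x, S^{-1} f_i>
x_rec = F.T @ coeffs              # synthesis: sum_i coeffs_i * f_i

print(np.allclose(x, x_rec))      # True: x is recovered despite the redundancy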
Magnus Wenninger creates colourful stellated polyhedra, originally as models for te" https://en.wikipedia.org/wiki/Plant%20litter,"Plant litter (also leaf litter, tree litter, soil litter, litterfall or duff) is dead plant material (such as leaves, bark, needles, twigs, and cladodes) that have fallen to the ground. This detritus or dead organic material and its constituent nutrients are added to the top layer of soil, commonly known as the litter layer or O horizon (""O"" for ""organic""). Litter is an important factor in ecosystem dynamics, as it is indicative of ecological productivity and may be useful in predicting regional nutrient cycling and soil fertility. Characteristics and variability Litterfall is characterized as fresh, undecomposed, and easily recognizable (by species and type) plant debris. This can be anything from leaves, cones, needles, twigs, bark, seeds/nuts, logs, or reproductive organs (e.g. the stamen of flowering plants). Items larger than 2 cm diameter are referred to as coarse litter, while anything smaller is referred to as fine litter or litter. The type of litterfall is most directly affected by ecosystem type. For example, leaf tissues account for about 70 percent of litterfall in forests, but woody litter tends to increase with forest age. In grasslands, there is very little aboveground perennial tissue so the annual litterfall is very low and quite nearly equal to the net primary production. In soil science, soil litter is classified in three layers, which form on the surface of the O Horizon. These are the L, F, and H layers: The litter layer is quite variable in its thickness, decomposition rate and nutrient content and is affected in part by seasonality, plant species, climate, soil fertility, elevation, and latitude. The most extreme variability of litterfall is seen as a function of seasonality; each individual species of plant has seasonal losses of certain parts of its body, which can be determined by the collection and classification of plant litterfall throughout the year, and in turn affects the thickness of the litter layer. In tropical environments, " https://en.wikipedia.org/wiki/Graphics%20processing%20unit,"A graphics processing unit (GPU) is a specialized electronic circuit initially designed to accelerate computer graphics and image processing (either on a video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles). After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining. History 1970s Arcade system boards have used specialized graphics circuits since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. A specialized barrel shifter circuit helped the CPU animate the framebuffer graphics for various 1970s arcade video games from Midway and Taito, such as Gun Fight (1975), Sea Wolf (1976), and Space Invaders (1978). The Namco Galaxian arcade system in 1979 used specialized graphics hardware that supported RGB color, multi-colored sprites, and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games, by game companies such as Namco, Centuri, Gremlin, Irem, Konami, Midway, Nichibutsu, Sega, and Taito. 
The Atari 2600 in 1977 used a video shifter called the Television Interface Adaptor. Atari 8-bit computers (1979) had ANTIC, a video processor which interpreted instructions describing a ""display list""—the way the scan lines map to specific bitmapped or character modes and where the memory is stored (so there did not need to be a contiguous frame buffer). 6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU. 1980s The NEC µPD7220 was the first implementation of a personal computer graphics display processor as a single large" https://en.wikipedia.org/wiki/Logic%20block,"In computing, a logic block or configurable logic block (CLB) is a fundamental building block of field-programmable gate array (FPGA) technology. Logic blocks can be configured by the engineer to provide reconfigurable logic gates. Logic blocks are the most common FPGA architecture, and are usually laid out within a logic block array. Logic blocks require I/O pads (to interface with external signals), and routing channels (to interconnect logic blocks). Programmable logic blocks were invented by David W. Page and LuVerne R. Peterson, and defined within their 1985 patents. Applications An application circuit must be mapped into an FPGA with adequate resources. While the number of logic blocks and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic. For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs. FPGAs are also widely used for systems validation including pre-silicon validation, post-silicon validation, and firmware development. This allows chip companies to validate their design before the chip is produced in the factory, reducing the time-to-market. Architecture In general, a logic block consists of a few logic cells (each cell is called an adaptive logic module (ALM), a logic element (LE), slice, etc.). A typical cell consists of a 4-input LUT, a full adder (FA), and a D-type flip-flop (DFF), as shown to the right. The LUTs are in this figure split into two 3-input LUTs. In normal mode those are combined into a 4-input LUT th" https://en.wikipedia.org/wiki/Z-value%20%28temperature%29,"""F0"" is defined as the number of equivalent minutes of steam sterilization at temperature 121.1 °C (250 °F) delivered to a container or unit of product calculated using a z-value of 10 °C. The term F-value or ""FTref/z"" is defined as the equivalent number of minutes to a certain reference temperature (Tref) for a certain control microorganism with an established Z-value. Z-value is a term used in microbial thermal death time calculations. It is the number of degrees the temperature has to be increased to achieve a tenfold (i.e. 1 log10) reduction in the D-value. The D-value of an organism is the time required in a given medium, at a given temperature, for a ten-fold reduction in the number of organisms. 
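As a software analogy for the logic cell described in the logic block entry above, the sketch below models a 4-input LUT as a 16-entry truth table; the configuration value is an invented example, and the carry logic and flip-flop of a real cell are omitted:

def lut4(config_bits, a, b, c, d):
    """Evaluate a 4-input LUT: config_bits is a 16-bit truth table,
    indexed by the input combination (d c b a)."""
    index = (d << 3) | (c << 2) | (b << 1) | a
    return (config_bits >> index) & 1

# Assumed configuration: program the LUT to implement a AND b AND c AND d,
# i.e. only the all-ones input row outputs 1.
AND4 = 1 << 0b1111
print(lut4(AND4, 1, 1, 1, 1), lut4(AND4, 1, 0, 1, 1))   # 1 0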
It is useful when examining the effectiveness of thermal inactivations under different conditions, for example in food cooking and preservation. The z-value is a measure of the change of the D-value with varying temperature, and is a simplified version of an Arrhenius equation and it is equivalent to z=2.303 RT Tref/E. The z-value of an organism in a particular medium is the temperature change required for the D-value to change by a factor of ten, or put another way, the temperature required for the thermal destruction curve to move one log cycle. It is the reciprocal of the slope resulting from the plot of the logarithm of the D-value versus the temperature at which the D-value was obtained. While the D-value gives the time needed at a certain temperature to kill 90% of the organisms, the z-value relates the resistance of an organism to differing temperatures. The z-value allows calculation of the equivalency of two thermal processes, if the D-value and the z-value are known. Example: if it takes an increase of 10 °C (18 °F) to move the curve one log, then our z-value is 10. Given a D-value of 4.5 minutes at 150 °C, the D-value can be calculated for 160 °C by reducing the time by 1 log. The new D-value for 160 °C given the z-value is 0.45 minutes. This means" https://en.wikipedia.org/wiki/Rhythm%20of%20Structure,"Rhythm of Structure is a multimedia interdisciplinary project founded in 2003. It features a series of exhibitions, performances, and academic projects that explore the interconnecting structures and process of mathematics and art, and language, as way to advance a movement of mathematical expression across the arts, across creative collaborative communities celebrating the rhythm and patterns of both ideas of the mind and the physical reality of nature. Introduction Rhythm of Structure, as an expanding series of art exhibitions, performances, videos/films and publications created and curated by multimedia mathematical artist and writer John Sims, explores and celebrates the intersecting structures of mathematics, art, community, and nature. Sims also created Recoloration Proclamation featuring the installation, The Proper Way to Hang a Confederate Flag (2004). From his catalog essay from the Rhythm of Structure: Mathematics, Art and Poetic exhibition, Sims sets the curatorial theme where he writes: ""Mathematics, as a parameter of human consciousness in an indispensable conceptual technology, essential is seeing beyond the retinal and knowing beyond the intuitive. The language and process of mathematics, as elements of, foundation for art, inform an analytic expressive condition that inspires a visual reckoning for a convergence: from the illustrative to the metaphysical to the poetic. And in the dialectic of visual art call and text performative response, there is an inter-dimensional conversation where the twisting structures of language, vision and human ways give birth to the spiritual lattice of a social geometry, a community constructivism -- a place of connections, where emotional calculations meet spirited abstraction."" First premiering at the Fire Patrol No.5 Gallery in 2003, with the show Rhythm of Structure: MathArt in Harlem. 
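The worked example in the z-value entry above (D = 4.5 minutes at 150 °C, z = 10 °C, hence D = 0.45 minutes at 160 °C) follows from the relation D(T2) = D(T1) · 10^((T1 − T2)/z); a one-line check in Python:

def d_value_at(d1_minutes, t1_c, t2_c, z_c):
    """Convert a D-value from temperature t1 to t2 using the z-value."""
    return d1_minutes * 10 ** ((t1_c - t2_c) / z_c)

print(d_value_at(4.5, 150, 160, 10))   # 0.45 minutes (up to floating-point rounding)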
This interdisciplinary project has featured numerous exhibitions around the country collaborating with many notable artists, wr" https://en.wikipedia.org/wiki/Networked%20Robotics%20Corporation,"Networked Robotics Corporation is an American scientific automation company that designs and manufactures electronic devices that monitor scientific instruments, scientific processes, and environmental conditions via the internet. Networked Robotics is an Illinois company but is now based largely out of Pleasanton, California, the company is focused on the collection and integration of scientific data from FDA-regulated sources such as freezers, incubators, liquid nitrogen cryopreservation freezers, rooms, shakers, clean rooms, and scales. Monitored parameters include temperature, gas concentrations, liquid levels, voltages, pressure, rotation, humidity, weight, and many others. Scientific instruments speak different data languages. The company integrates data collection by using unique network hardware that speaks the unique digital and electronic languages of scientific instruments and sensors from different vendors and converting those individual languages to a common one on the network. Networked Robotics produces their own line of digital sensors for scientific data sources where digital outputs are not available. The company can be considered to be an Internet of Things provider. Networked Robotics technology is used in the biotechnologies industry—including stem cell automation, medical industry, academia, food industry in efforts to enhance U.S. Food and Drug Administration (FDA) regulatory compliance, quality, and loss prevention for their operations. The company sells a series of proprietary hardware products for network data collection. The NTMS4 networking hardware is their flagship product which serves as a data collection and ""automation hub"". The company's data collection and monitoring software, the Tempurity™ System, is free to customers. The company also provides regulatory services for companies that are performing regulated, especially FDA-regulated scientific research. History Networked Robotics was founded in 2004 at the Northwestern " https://en.wikipedia.org/wiki/Monokub,"Monokub () is a computer motherboard based on the Russian Elbrus 2000 computer architecture, which form the basis for the Monoblock PC office workstation. The motherboard has a miniITX formfactor and contains a single Elbrus-2C+ microprocessor with a clock frequency of 500 MHz. The memory controller provides a dual-channel memory mode. The board has two DDR2-800 memory slots, which enables up to 16 GB of RAM memory (using ECC modules). It also supports expansion boards using PCI Express x16 bus. In addition there is an on-board Gigabit Ethernet interface, 4 USB 2.0, RS-232 interface, DVI connector and audio input/output ports." https://en.wikipedia.org/wiki/Quipu,"Quipu (also spelled khipu) are recording devices fashioned from strings historically used by a number of cultures in the region of Andean South America. A quipu usually consisted of cotton or camelid fiber strings. The Inca people used them for collecting data and keeping records, monitoring tax obligations, collecting census records, calendrical information, and for military organization. The cords stored numeric and other values encoded as knots, often in a base ten positional system. A quipu could have only a few or thousands of cords. 
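The quipu entry above notes that cords often encoded numbers in a base-ten positional system; purely as an illustration of positional decoding (the knot counts and the most-significant-first ordering are an assumed convention, not a transcription of any real quipu):

def decode_cord(knot_counts_most_significant_first):
    """Interpret knot counts per position on a cord as a base-10 number."""
    value = 0
    for digit in knot_counts_most_significant_first:
        value = value * 10 + digit
    return value

print(decode_cord([3, 0, 7]))   # 307: three knots, an empty position, seven knots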
The configuration of the quipus has been ""compared to string mops."" Archaeological evidence has also shown the use of finely carved wood as a supplemental, and perhaps sturdier, base to which the color-coded cords would be attached. A relatively small number have survived. Objects that can be identified unambiguously as quipus first appear in the archaeological record in the first millennium AD (though debated quipus are much earlier). They subsequently played a key part in the administration of the Kingdom of Cusco and later the Inca Empire, flourishing across the Andes from c. 1100 to 1532 AD. As the region was subsumed under the Spanish Empire, quipus were mostly replaced by European writing and numeral systems, and most quipu were identified as idolatrous and destroyed, but some Spaniards promoted the adaptation of the quipu recording system to the needs of the colonial administration, and some priests advocated the use of quipus for ecclesiastical purposes. In several modern villages, quipus have continued to be important items for the local community. It is unclear how many intact quipus still exist and where, as many have been stored away in mausoleums. Knotted strings unrelated to quipu have been used to record information by the ancient Chinese, Tibetans and Japanese. Quipu is the Spanish spelling and the most common spelling in English. Khipu (pronounced , plural: khipukuna) is the word for ""knot"" in Cusco Quechua. I" https://en.wikipedia.org/wiki/The%20Chemical%20Basis%20of%20Morphogenesis,"""The Chemical Basis of Morphogenesis"" is an article that the English mathematician Alan Turing wrote in 1952. It describes how patterns in nature, such as stripes and spirals, can arise naturally from a homogeneous, uniform state. The theory, which can be called a reaction–diffusion theory of morphogenesis, has become a basic model in theoretical biology. Such patterns have come to be known as Turing patterns. For example, it has been postulated that the protein VEGFC can form Turing patterns to govern the formation of lymphatic vessels in the zebrafish embryo. Reaction–diffusion systems Reaction–diffusion systems have attracted much interest as a prototype model for pattern formation. Patterns such as fronts, spirals, targets, hexagons, stripes and dissipative solitons are found in various types of reaction-diffusion systems in spite of large discrepancies e.g. in the local reaction terms. Such patterns have been dubbed ""Turing patterns"". Reaction–diffusion processes form one class of explanation for the embryonic development of animal coats and skin pigmentation. Another reason for the interest in reaction-diffusion systems is that although they represent nonlinear partial differential equations, there are often possibilities for an analytical treatment. See also Evolutionary developmental biology Turing pattern Symmetry breaking" https://en.wikipedia.org/wiki/Computer-on-module,"A computer-on-module (COM) is a type of single-board computer (SBC), a subtype of an embedded computer system. An extension of the concept of system on chip (SoC) and system in package (SiP), COM lies between a full-up computer and a microcontroller in nature. It is very similar to a system on module (SOM). Design COMs are complete embedded computers built on a single circuit board. The design is centered on a microprocessor with RAM, input/output controllers and all other features needed to be a functional computer on the one board. 
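To illustrate how the reaction–diffusion mechanism described in "The Chemical Basis of Morphogenesis" entry above turns a near-uniform state into a pattern, here is a minimal one-dimensional Gray–Scott simulation; Gray–Scott is a commonly used reaction–diffusion model, not Turing's original system, and all parameter values are conventional illustrative choices:

import numpy as np

# Two chemicals u and v react and diffuse; a small perturbation of a
# uniform state can grow into a stationary spatial (Turing-style) pattern.
n, steps = 200, 10000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060       # illustrative parameters
u = np.ones(n)
v = np.zeros(n)
u[90:110], v[90:110] = 0.50, 0.25             # local perturbation

def laplacian(a):
    return np.roll(a, 1) + np.roll(a, -1) - 2 * a   # periodic boundary, dx = 1

for _ in range(steps):
    uvv = u * v * v
    u += Du * laplacian(u) - uvv + F * (1 - u)      # dt = 1
    v += Dv * laplacian(v) + uvv - (F + k) * v

# A spread between max and min of v indicates a spatial pattern has formed
# under these assumed parameters.
print(float(v.min()), float(v.max()))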
However, unlike a single-board computer, the COM usually lacks the standard connectors for any input/output peripherals to be attached directly to the board. The module usually needs to be mounted on a carrier board (or ""baseboard"") which breaks the bus out to standard peripheral connectors. Some COMs also include peripheral connectors. Some can be used without a carrier. A COM solution offers a dense package computer system for use in small or specialized applications requiring low power consumption or small physical size as is needed in embedded systems. As a COM is very compact and highly integrated, even complex CPUs, including multi-core technology, can be realized on a COM. Some devices also incorporate field-programmable gate array (FPGA) components. FPGA-based functions can be added as IP cores to the COM itself or to the carrier card. Using FPGA IP cores adds to the modularity of a COM concept because I/O functions can be adapted to special needs without extensive rewiring on the printed circuit board. A ""computer-on-module"" is also called a ""system-on-module"" (SOM). History The terms ""Computer-on-Module"" and ""COM"" were coined by VDC Research Group, Inc. (formerly Venture Development Corporation) to describe this class of embedded computer boards. Dr. Gordon Kruberg, founder and CEO of Gumstix, is credited for creating the first COM, predating the next recognizable COM entries by almost 18 months. Gumstix ARM Linux" https://en.wikipedia.org/wiki/Open%20security,"Open security is the use of open source philosophies and methodologies to approach computer security and other information security challenges. Traditional application security is based on the premise that any application or service (whether it is malware or desirable) relies on security through obscurity. Open source approaches have created technology such as Linux (and to some extent, the Android operating system). Additionally, open source approaches applied to documents have inspired wikis and their largest example, Wikipedia. Open security suggests that security breaches and vulnerabilities can be better prevented or ameliorated when users facing these problems collaborate using open source philosophies. This approach requires that users be legally allowed to collaborate, so relevant software would need to be released under a license that is widely accepted to be open source; examples include the Massachusetts Institute of Technology (MIT) license, the Apache 2.0 license, the GNU Lesser General Public License (LGPL), and the GNU General Public License (GPL). Relevant documents would need to be under a generally accepted ""open content"" license; these include Creative Commons Attribution (CC-BY) and Attribution Share Alike (CC-BY-SA) licenses, but not Creative Commons ""non-commercial"" licenses or ""no-derivative"" licenses. On the developer side, legitimate software and service providers can have independent verification and testing of their source code. On the information technology side, companies can aggregate common threats, patterns, and security solutions to a variety of security issues. 
See also Kerckhoffs's Principle OASIS (organization) (Organization for the Advancement of Structured Information Standards) OWASP (Open Web Application Security Project) Open government Homeland Open Security Technology Open source Open source software Open-source hardware" https://en.wikipedia.org/wiki/Imbibition,"Imbibition is a special type of diffusion that takes place when liquid is absorbed by solid colloids, causing an increase in volume. Water surface potential movement takes place along a concentration gradient; some dry materials absorb water. A gradient between the absorbent and the liquid is essential for imbibition. For a substance to imbibe a liquid, there must first be some attraction between them. Imbibition occurs when a wetting fluid displaces a non-wetting fluid, the opposite of drainage in which a non-wetting phase displaces the wetting fluid. The two processes are governed by different mechanisms. Imbibition is also a type of diffusion since water movement is along the concentration gradient. Seeds and other such materials have almost no water, hence they absorb water easily. A water potential gradient between the absorbent and the imbibed liquid is essential for imbibition. Examples One example of imbibition in nature is the absorption of water by hydrophilic colloids. Matrix potential contributes significantly to water in such substances. Dry seeds germinate in part by imbibition. Imbibition can also control circadian rhythms in Arabidopsis thaliana and (probably) other plants. The Amott test employs imbibition. Proteins have high imbibition capacities, so proteinaceous pea seeds swell more than starchy wheat seeds. Imbibition of water increases imbibant volume, which results in imbibitional pressure (IP). The magnitude of such pressure can be demonstrated by the splitting of rocks by inserting dry wooden stalks in their crevices and soaking them in water, a technique used by early Egyptians to cleave stone blocks. Skin grafts (split thickness and full thickness) receive oxygenation and nutrition via imbibition, maintaining cellular viability until the processes of inosculation and revascularisation have re-established a new blood supply within these tissues. Germination Examples include the absorption of water by seeds and dry wood. If there is no pre" https://en.wikipedia.org/wiki/Carrier%20frequency%20offset,"Carrier frequency offset (CFO) is one of many non-ideal conditions that may affect baseband receiver design. In designing a baseband receiver, one must consider not only the degradation caused by a non-ideal channel and noise, but also the impairments introduced by the RF and analog parts. Those non-idealities include sampling clock offset, IQ imbalance, power amplifier nonlinearity, phase noise and carrier frequency offset. Carrier frequency offset often occurs when the local oscillator signal for down-conversion in the receiver does not synchronize with the carrier signal contained in the received signal. This phenomenon can be attributed to two important factors: frequency mismatch in the transmitter and the receiver oscillators; and the Doppler effect as the transmitter or the receiver is moving. When this occurs, the received signal will be shifted in frequency. For an OFDM system, the orthogonality among subcarriers is maintained only if the receiver uses a local oscillation signal that is synchronous with the carrier signal contained in the received signal. Otherwise, mismatch in carrier frequency can result in inter-carrier interference (ICI).
The oscillators in the transmitter and the receiver can never be oscillating at identical frequency. Hence, carrier frequency offset always exists even if there is no Doppler effect. In a standard-compliant communication system, such as the IEEE 802.11 WLAN the oscillator precision tolerance is specified to be less than ±20 ppm, so that CFO is in the range from - 40 ppm to +40 ppm. Example If the TX oscillator runs at a frequency that is 20 ppm above the nominal frequency and if the RX oscillator is running at 20 ppm below, then the received baseband signal will have a CFO of 40 ppm. With a carrier frequency of 5.2 GHz in this standard, the CFO is up to ±208 kHz. In addition, if the transmitter or the receiver is moving, the Doppler effect adds some hundreds of hertz in frequency spreading. Compared to th" https://en.wikipedia.org/wiki/Index%20of%20fractal-related%20articles,"This is a list of fractal topics, by Wikipedia page, See also list of dynamical systems and differential equations topics. 1/f noise Apollonian gasket Attractor Box-counting dimension Cantor distribution Cantor dust Cantor function Cantor set Cantor space Chaos theory Coastline Constructal theory Dimension Dimension theory Dragon curve Fatou set Fractal Fractal antenna Fractal art Fractal compression Fractal flame Fractal landscape Fractal transform Fractint Graftal Iterated function system Horseshoe map How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension Julia set Koch snowflake L-system Lebesgue covering dimension Lévy C curve Lévy flight List of fractals by Hausdorff dimension Lorenz attractor Lyapunov fractal Mandelbrot set Menger sponge Minkowski–Bouligand dimension Multifractal analysis Olbers' paradox Perlin noise Power law Rectifiable curve Scale-free network Self-similarity Sierpinski carpet Sierpiński curve Sierpinski triangle Space-filling curve T-square (fractal) Topological dimension Fractals" https://en.wikipedia.org/wiki/Timeline%20of%20discovery%20of%20Solar%20System%20planets%20and%20their%20moons,"The timeline of discovery of Solar System planets and their natural satellites charts the progress of the discovery of new bodies over history. Each object is listed in chronological order of its discovery (multiple dates occur when the moments of imaging, observation, and publication differ), identified through its various designations (including temporary and permanent schemes), and the discoverer(s) listed. Historically the naming of moons did not always match the times of their discovery. Traditionally, the discoverer enjoys the privilege of naming the new object; however, some neglected to do so (E. E. Barnard stated he would ""defer any suggestions as to a name"" [for Amalthea] ""until a later paper"" but never got around to picking one from the numerous suggestions he received) or actively declined (S. B. Nicholson stated ""Many have asked what the new satellites [Lysithea and Carme] are to be named. They will be known only by the numbers X and XI, written in Roman numerals, and usually prefixed by the letter J to identify them with Jupiter.""). The issue arose nearly as soon as planetary satellites were discovered: Galileo referred to the four main satellites of Jupiter using numbers while the names suggested by his rival Simon Marius gradually gained universal acceptance. The International Astronomical Union (IAU) eventually started officially approving names in the late 1970s. 
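The numbers in the carrier frequency offset example above (a worst case of 40 ppm on a 5.2 GHz carrier) can be verified with a short calculation:

def cfo_hz(carrier_hz, ppm):
    """Carrier frequency offset in Hz for a given parts-per-million error."""
    return carrier_hz * ppm * 1e-6

# TX +20 ppm and RX -20 ppm combine to a worst-case 40 ppm offset.
print(cfo_hz(5.2e9, 40))   # 208000.0 Hz, i.e. the +/-208 kHz quoted above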
With the explosion of discoveries in the 21st century, new moons have once again started to be left unnamed even after their numbering, beginning with Jupiter LI and Jupiter LII in 2010. Key info In the following tables, planetary satellites are indicated in bold type (e.g. Moon) while planets and dwarf planets, which directly circle the Sun, are in italic type (e.g. Earth). The Sun itself is indicated in roman type. The tables are sorted by publication/announcement date. Dates are annotated with the following symbols: i: for date of first imaging (photography, etc.); o: for date of fir" https://en.wikipedia.org/wiki/Mathematical%20Models%20%28Fischer%29,"Mathematical Models: From the Collections of Universities and Museums – Photograph Volume and Commentary is a book on the physical models of concepts in mathematics that were constructed in the 19th century and early 20th century and kept as instructional aids at universities. It credits Gerd Fischer as editor, but its photographs of models are also by Fischer. It was originally published by Vieweg+Teubner Verlag for their bicentennial in 1986, both in German (titled Mathematische Modelle. Aus den Sammlungen von Universitäten und Museen. Mit 132 Fotografien. Bildband und Kommentarband) and (separately) in English translation, in each case as a two-volume set with one volume of photographs and a second volume of mathematical commentary. Springer Spektrum reprinted it in a second edition in 2017, as a single dual-language volume. Topics The work consists of 132 full-page photographs of mathematical models, divided into seven categories, and seven chapters of mathematical commentary written by experts in the topic area of each category. These categories are: Wire and thread models, of hypercubes of various dimensions, and of hyperboloids, cylinders, and related ruled surfaces, described as ""elementary analytic geometry"" and explained by Fischer himself. Plaster and wood models of cubic and quartic algebraic surfaces, including Cayley's ruled cubic surface, the Clebsch surface, Fresnel's wave surface, the Kummer surface, and the Roman surface, with commentary by W. Barth and H. Knörrer. Wire and plaster models illustrating the differential geometry and curvature of curves and surfaces, including surfaces of revolution, Dupin cyclides, helicoids, and minimal surfaces including the Enneper surface, with commentary by M. P. do Carmo, G. Fischer, U. Pinkall, H. and Reckziegel. Surfaces of constant width including the surface of rotation of the Reuleaux triangle and the Meissner bodies, described by J. Böhm. Uniform star polyhedra, described by E. Quaisser. Models of the " https://en.wikipedia.org/wiki/Favard%20constant,"In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order r is defined as This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein. Particular values Uses This constant is used in solutions of several extremal problems, for example Favard's constant is the sharp constant in Jackson's inequality for trigonometric polynomials the sharp constants in the Landau–Kolmogorov inequality are expressed via Favard's constants Norms of periodic perfect splines." https://en.wikipedia.org/wiki/Umbilic%20torus,"The umbilic torus or umbilic bracelet is a single-edged 3-dimensional shape. The lone edge goes three times around the ring before returning to the starting point. The shape also has a single external face. 
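The Favard constant entry above introduces a definition whose formula did not survive extraction; the standard definition, presumably the one intended, is K_r = \frac{4}{\pi}\sum_{k=0}^{\infty}\left[\frac{(-1)^{k}}{2k+1}\right]^{r+1}, which gives, for example, K_0 = 1 and K_1 = \pi/2.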
A cross section of the surface forms a deltoid. The umbilic torus occurs in the mathematical subject of singularity theory, in particular in the classification of umbilical points which are determined by real cubic forms . The equivalence classes of such cubics form a three-dimensional real projective space and the subset of parabolic forms define a surface – the umbilic torus. Christopher Zeeman named this set the umbilic bracelet in 1976. The torus is defined by the following set of parametric equations. In sculpture John Robinson created a sculpture Eternity based on the shape in 1989, this had a triangular cross-section rather than a deltoid of a true Umbilic bracelet. This appeared on the cover of Geometric Differentiation by Ian R. Porteous. Helaman Ferguson has created a 27-inch (69 centimeters) bronze sculpture, Umbilic Torus, and it is his most widely known piece of art. In 2010, it was announced that Jim Simons had commissioned an Umbilic Torus sculpture to be constructed outside the Math and Physics buildings at Stony Brook University, in proximity to the Simons Center for Geometry and Physics. The torus is made out of cast bronze, and is mounted on a stainless steel column. The total weight of the sculpture is 65 tonnes, and has a height of . The torus has a diameter of , the same diameter as the granite base. Various mathematical formulas defining the torus are inscribed on the base. Installation was completed in September, 2012. In literature In the short story What Dead Men Tell by Theodore Sturgeon, the main action takes place in a seemingly endless corridor with the cross section of an equilateral triangle. At the end the protagonist speculates that the corridor is actually a triangular shape twisted back on itself like a Möbius strip but" https://en.wikipedia.org/wiki/Chip%20creep,"Chip creep refers to the problem of an integrated circuit (chip) working its way out of its socket over time. This was mainly an issue in early PCs. Chip creep occurs due to thermal expansion, which is expansion and contraction as the system heats up and cools down. It can also occur due to vibration. While chip creep was most common with older memory modules, it was also a problem with CPUs and other main chips that were inserted into sockets. An example is the Apple III, where its CPU would be dislodged and the user would need to reseat the chips. To fix chip creep, users of older systems would often have to remove the case cover and push the loose chip back into the socket. Today's computer systems are not as affected by chip creep, since chips are more securely held, either by various types of retainer clips or by being soldered into place, and since system cooling has improved." https://en.wikipedia.org/wiki/National%20Documentation%20Centre%20%28Greece%29,"The National Documentation Centre (EKT; ) is a Greek public organisation that promotes knowledge, research, innovation and digital transformation. It was established in 1980 with funding from the United Nations Development Programme with the aim to strengthen the collection and distribution of research-related material, and to ensure full accessibility to it. It has been designated as a National Scientific Infrastructure, a National Authority of the Hellenic Statistical System, and National Contact Point for European Research and Innovation Programmes. 
Since August 2019, it has been established as a discrete public-interest legal entity under private law, and is supervised by the Ministry of Digital Governance (Article 59 / Law 4623/2019). The management bodies of EKT are the Administrative Board and the Director who, since 2013, has been Dr. Evi Sachini. Goals EKT's institutional role is the collection, organisation, documentation, digital preservation and dissemination of scientific, research and cultural information, content and data produced in Greece. EKT’s specific objectives, as stated on its official website, focus, amongst others, on: Ensuring the dissemination of the country's scientific output. Meeting the needs of academia, policymakers and research and business communities for information and reliable data. Increasing the digital scientific and cultural content that is available in a user-friendly form and with legitimate rights of use for different target groups Promoting open access to publications and data in the academic and research communities. Collaboration with academic libraries for the standardization in organising and distributing metadata and digital scientific content. Collaboration and joint actions with libraries, archives, museums, scientific and cultural institutions which produce and manage content, focusing on the establishment of common interoperability standards and the availability of metadata and digital content. Provi" https://en.wikipedia.org/wiki/Neurochip,"A neurochip is an integrated circuit chip (such as a microprocessor) that is designed for interaction with neuronal cells. Formation It is made of silicon that is doped in such a way that it contains EOSFETs (electrolyte-oxide-semiconductor field-effect transistors) that can sense the electrical activity of the neurons (action potentials) in the above-standing physiological electrolyte solution. It also contains capacitors for the electrical stimulation of the neurons. The University of Calgary, Faculty of Medicine scientists led by Pakistani-born Canadian scientist Naweed Syed who proved it is possible to cultivate a network of brain cells that reconnect on a silicon chip—or the brain on a microchip—have developed new technology that monitors brain cell activity at a resolution never achieved before. Developed with the National Research Council Canada (NRC), the new silicon chips are also simpler to use, which will help future understanding of how brain cells work under normal conditions and permit drug discoveries for a variety of neurodegenerative diseases, such as Alzheimer's and Parkinson's. Naweed Syed's lab cultivated brain cells on a microchip. The new technology from the lab of Naweed Syed, in collaboration with the NRC, was published online in August 2010, in the journal, Biomedical Devices. It is the world's first neurochip. It is based on Syed's earlier experiments on neurochip technology dating back to 2003. ""This technical breakthrough means we can track subtle changes in brain activity at the level of ion channels and synaptic potentials, which are also the most suitable target sites for drug development in neurodegenerative diseases and neuropsychological disorders,"" says Syed, professor and head of the Department of Cell Biology and Anatomy, member of the Hotchkiss Brain Institute and advisor to the Vice President Research on Biomedical Engineering Initiative of the University of Chicago. 
The new neurochips are also automated, meaning that an" https://en.wikipedia.org/wiki/Non-Quasi%20Static%20model,"Non-Quasi Static model (NQS) is a transistor model used in analogue/mixed signal IC design. It becomes necessary to use an NQS model when the operational frequency of the device is in the range of its transit time. Normally, in a quasi-static (QS) model, voltage changes in the MOS transistor channel are assumed to be instantaneous. However, in an NQS model voltage changes relating to charge carriers are delayed." https://en.wikipedia.org/wiki/Trust%20boundary,"Trust boundary is a term used in computer science and security which describes a boundary where program data or execution changes its level of ""trust,"" or where two principals with different capabilities exchange data or commands. The term refers to any distinct boundary where within a system all sub-systems (including data) have equal trust. An example of an execution trust boundary would be where an application attains an increased privilege level (such as root). A data trust boundary is a point where data comes from an untrusted source--for example, user input or a network socket. A ""trust boundary violation"" refers to a vulnerability where computer software trusts data that has not been validated before crossing a boundary." https://en.wikipedia.org/wiki/Range%20state,"Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species, taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters, any nation with vessels flying their flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. Countries in which a species occurs only as a vagrant or ‘accidental’ visitor outside of its normal range or migration route are not usually considered range states. Because governmental conservation policy is often formulated on a national scale, and because in most countries, both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy. An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals (CMS, or the “Bonn Convention”). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states. External links Bonn Convention (CMS) — Text of Convention Agreement Bonn Convention (CMS): List of Range States for Critically Endangered Migratory Species" https://en.wikipedia.org/wiki/List%20of%20abstract%20algebra%20topics,"Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. 
The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulae and algebraic expressions involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings. Basic language Algebraic structures are defined primarily as sets with operations. Algebraic structure Subobjects: subgroup, subring, subalgebra, submodule etc. Binary operation Closure of an operation Associative property Distributive property Commutative property Unary operator Additive inverse, multiplicative inverse, inverse element Identity element Cancellation property Finitary operation Arity Structure preserving maps called homomorphisms are vital in the study of algebraic objects. Homomorphisms Kernels and cokernels Image and coimage Epimorphisms and monomorphisms Isomorphisms Isomorphism theorems There are several basic ways to combine algebraic objects of the same type to produce a third object of the same type. These constructions are used throughout algebra. Direct sum Direct limit Direct product Inverse limit Quotient objects: quotient group, quotient ring, quotient module etc. Tensor product Advanced concepts: Category theory Category of groups Category of abelian groups Category of rings Category of modules (over a fixed ring) Morita equivalence, Morita duality Category of vector spaces Homological algebra Filtration (algebra) Exact sequence Functor Zorn's lemma Semigroups and monoids Semigroup Subsemigroup Free semigroup Green's relations Inverse semigroup (or inversion semigroup, cf. ) Krohn–Rhodes theory Semigroup algebra Transformation semigroup Monoid Aperiodic monoid Free monoid Monoid (category theory) Monoid factorisation Syntacti" https://en.wikipedia.org/wiki/High%20throughput%20biology,"High throughput biology (or high throughput cell biology) is the use of automation equipment with classical cell biology techniques to address biological questions that are otherwise unattainable using conventional methods. It may incorporate techniques from optics, chemistry, biology or image analysis to permit rapid, highly parallel research into how cells function, interact with each other and how pathogens exploit them in disease. High throughput cell biology has many definitions, but is most commonly defined by the search for active compounds in natural materials like in medicinal plants. This is also known as high throughput screening (HTS) and is how most drug discoveries are made today, many cancer drugs, antibiotics, or viral antagonists have been discovered using HTS. The process of HTS also tests substances for potentially harmful chemicals that could be potential human health risks. HTS generally involves hundreds of samples of cells with the model disease and hundreds of different compounds being tested from a specific source. Most often a computer is used to determine when a compound of interest has a desired or interesting effect on the cell samples. Using this method has contributed to the discovery of the drug Sorafenib (Nexavar). Sorafenib is used as medication to treat multiple types of cancers, including renal cell carcinoma (RCC, cancer in the kidneys), hepatocellular carcinoma (liver cancer), and thyroid cancer. It helps stop cancer cells from reproducing by blocking the abnormal proteins present. In 1994, high throughput screening for this particular drug was completed. 
It was initially discovered by Bayer Pharmaceuticals in 2001. By using a RAF kinase biochemical assay, 200,000 compounds were screened from medicinal chemistry directed synthesis or combinatorial libraries to identify active molecules against activeRAF kinase. Following three trials of testing, it was found to have anti-angiogenic effects on the cancers, which stops the proc" https://en.wikipedia.org/wiki/Geothrix%20fermentans,"Geothrix fermentans is a rod-shaped, anaerobic bacterium. It is about 0.1 µm in diameter and ranges from 2-3 µm in length. Cell arrangement occurs singly and in chains. Geothrix fermentans can normally be found in aquatic sediments such as in aquifers. As an anaerobic chemoorganotroph, this organism is best known for its ability to use electron acceptors Fe(III), as well as other high potential metals. It also uses a wide range of substrates as electron donors. Research on metal reduction by G. fermentans has contributed to understanding more about the geochemical cycling of metals in the environment. Taxonomy history Geothrix fermentans was isolated from metal-contaminated waters of an aquifer in 1999 by John D. Coates from Southern Illinois University and by others from the University of Massachusetts. The novel strain was originally named ""Strain H-5T "". After classifying metabolism and confirming the presence and number of c-type cytochromes, Coates et al. proposed that the novel organism belongs to the newly recognized (1991) Halophoga-Acidobacterium phylum. Coates et al. also proposed a new name for the organism: ""Geothrix""- Greek for hair-like cell that comes from the Earth and ""fermentans""- Latin for ""fermenting."" Phylogeny Approaches based on 16s rRNA gene sequence comparison have allowed for detailed analyses of the affiliations of many bacterial groups. The phylogenetic affiliation of Geothrix fermentans as well as other soil bacteria such as Acidobacterium capsulatum and Holophoga foetida had not been established at the time of their initial isolation. More recent analysis 16s rRNA sequence data showed moderate similarity between these three genera supporting the likelihood that they may have differentiated from a common ancestor. Biology Geothrix fermentans is a rod-shaped strict anaerobe that can be found in aquatic soils in the Fe(III) reduction zone. As a strict anaerobe G. fermentans cannot grow in the presence of atmospheric oxygen that may be" https://en.wikipedia.org/wiki/Matriphagy,"Matriphagy is the consumption of the mother by her offspring. The behavior generally takes place within the first few weeks of life and has been documented in some species of insects, nematode worms, pseudoscorpions, and other arachnids as well as in caecilian amphibians. The specifics of how matriphagy occurs varies among different species. However, the process is best described in the Desert spider, Stegodyphus lineatus, where the mother harbors nutritional resources for her young through food consumption. The mother can regurgitate small portions of food for her growing offspring, but between 1–2 weeks after hatching the progeny capitalize on this food source by eating her alive. Typically, offspring only feed on their biological mother as opposed to other females in the population. In other arachnid species, matriphagy occurs after the ingestion of nutritional eggs known as trophic eggs (e.g. Black lace-weaver Amaurobius ferox, Crab spider Australomisidia ergandros). 
It involves different techniques for killing the mother, such as transfer of poison via biting and sucking to cause a quick death (e.g. Black lace-weaver) or continuous sucking of the hemolymph, resulting in a more gradual death (e.g. Crab spider). The behavior is less well described but follows a similar pattern in species such as the Hump earwig, pseudoscorpions, and caecilians. Spiders that engage in matriphagy produce offspring with higher weights, shorter and earlier moulting time, larger body mass at dispersal, and higher survival rates than clutches deprived of matriphagy. In some species, matriphagous offspring were also more successful at capturing large prey items and had a higher survival rate at dispersal. These benefits to offspring outweigh the cost of survival to the mothers and help ensure that her genetic traits are passed to the next generation, thus perpetuating the behavior. Overall, matriphagy is an extreme form of parental care but is highly related to extended care in the F" https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20electromagnetism,"The study of electromagnetism in higher education, as a fundamental part of both physics and engineering, is typically accompanied by textbooks devoted to the subject. The American Physical Society and the American Association of Physics Teachers recommend a full year of graduate study in electromagnetism for all physics graduate students. A joint task force by those organizations in 2006 found that in 76 of the 80 US physics departments surveyed, a course using John David Jackson's Classical Electrodynamics was required for all first year graduate students. For undergraduates, there are several widely used textbooks, including David Griffiths' Introduction to Electrodynamics and Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. Also at an undergraduate level, Richard Feynman's classic The Feynman Lectures on Physics is available online to read for free. Undergraduate There are several widely used undergraduate textbooks in electromagnetism, including David Griffiths' Introduction to Electrodynamics as well as Electricity and Magnetism by Edward Mills Purcell and D. J. Morin. The Feynman Lectures on Physics also include a volume on electromagnetism that is available to read online for free, through the California Institute of Technology. In addition, there are popular physics textbooks that include electricity and magnetism among the material they cover, such as David Halliday and Robert Resnick's Fundamentals of Physics. Graduate A 2006 report by a joint taskforce between the American Physical Society and the American Association of Physics Teachers found that 76 of the 80 physics departments surveyed require a first-year graduate course in John David Jackson's Classical Electrodynamics. This made Jackson's book the most popular textbook in any field of graduate-level physics, with Herbert Goldstein's Classical Mechanics as the second most popular with adoption at 48 universities. In a 2015 review of Andrew Zangwill's Modern Electrodynamics in" https://en.wikipedia.org/wiki/Iverson%20bracket,"In mathematics, the Iverson bracket, named after Kenneth E. Iverson, is a notation that generalises the Kronecker delta, which is the Iverson bracket of the statement . It maps any statement to a function of the free variables in that statement. This function is defined to take the value 1 for the values of the variables for which the statement is true, and takes the value 0 otherwise. 
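A minimal sketch in Python of the bracket just defined (the helper name iverson and the example sum are illustrative assumptions, not from the article): treating a statement's truth value as 0 or 1 lets a sum restricted by a property be rewritten as an unrestricted sum.

```python
def iverson(statement: bool) -> int:
    """Iverson bracket: 1 if the statement is true, 0 otherwise."""
    return 1 if statement else 0

# Example: sum of the squares of the even integers in 0..9.
# Restricted form: sum over only those k satisfying the property "k is even".
restricted = sum(k**2 for k in range(10) if k % 2 == 0)

# Unrestricted form: sum over all k, weighting each term by [k is even].
unrestricted = sum(k**2 * iverson(k % 2 == 0) for k in range(10))

assert restricted == unrestricted == 120
```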
It is generally denoted by putting the statement inside square brackets: In other words, the Iverson bracket of a statement is the indicator function of the set of values for which the statement is true. The Iverson bracket allows using capital-sigma notation without restriction on the summation index. That is, for any property of the integer , one can rewrite the restricted sum in the unrestricted form . With this convention, does not need to be defined for the values of for which the Iverson bracket equals ; that is, a summand must evaluate to 0 regardless of whether is defined. The notation was originally introduced by Kenneth E. Iverson in his programming language APL, though restricted to single relational operators enclosed in parentheses, while the generalisation to arbitrary statements, notational restriction to square brackets, and applications to summation, was advocated by Donald Knuth to avoid ambiguity in parenthesized logical expressions. Properties There is a direct correspondence between arithmetic on Iverson brackets, logic, and set operations. For instance, let A and B be sets and any property of integers; then we have Examples The notation allows moving boundary conditions of summations (or integrals) as a separate factor into the summand, freeing up space around the summation operator, but more importantly allowing it to be manipulated algebraically. Double-counting rule We mechanically derive a well-known sum manipulation rule using Iverson brackets: Summation interchange The well-known rule is likewise easily derived: Counting For instance, the" https://en.wikipedia.org/wiki/Khinchin%27s%20constant,"In number theory, Aleksandr Yakovlevich Khinchin proved that for almost all real numbers x, coefficients ai of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x and is known as Khinchin's constant. That is, for it is almost always true that where is Khinchin's constant (with denoting the product over all sequence terms). Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose. Among the numbers whose continued fraction expansions apparently do have this property (based on numerical evidence) are π, the Euler-Mascheroni constant γ, Apéry's constant ζ(3), and Khinchin's constant itself. However, this is unproven. Among the numbers x whose continued fraction expansions are known not to have this property are rational numbers, roots of quadratic equations (including the golden ratio Φ and the square roots of integers), and the base of the natural logarithm e. Khinchin is sometimes spelled Khintchine (the French transliteration of Russian Хинчин) in older mathematical literature. Sketch of proof The proof presented here was arranged by Czesław Ryll-Nardzewski and is much simpler than Khinchin's original proof which did not use ergodic theory. Since the first coefficient a0 of the continued fraction of x plays no role in Khinchin's theorem and since the rational numbers have Lebesgue measure zero, we are reduced to the study of irrational numbers in the unit interval, i.e., those in . These numbers are in bijection with infinite continued fractions of the form [0; a1, a2, ...], which we simply write [a1, a2, ...], where a1, a2, ... are positive integers. Define a transformation T:I → I by The transformation T is called the Gauss–Kuzmin–Wirsing operator. 
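For reference, the transformation T mentioned above is the Gauss map, and the Gauss–Kuzmin measure used in the next step has a standard closed form; both are reconstructed here from the usual treatment rather than quoted from the article:

```latex
T(x) \;=\; \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor, \qquad x \in (0,1),
\qquad\qquad
\mu(E) \;=\; \frac{1}{\ln 2} \int_E \frac{\mathrm{d}x}{1+x}.
```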
For every Borel subset E of I, we also define the Gauss–Kuzmin measure of E Then μ is a probability measure on the σ-algebra of Borel subsets of I. The measure μ is equ" https://en.wikipedia.org/wiki/Mason%27s%20invariant,"In electronics, Mason's invariant, named after Samuel Jefferson Mason, is a measure of the quality of transistors. ""When trying to solve a seemingly difficult problem, Sam said to concentrate on the easier ones first; the rest, including the hardest ones, will follow,"" recalled Andrew Viterbi, co-founder and former vice-president of Qualcomm. He had been a thesis advisee under Samuel Mason at MIT, and this was one lesson he especially remembered from his professor. A few years earlier, Mason had heeded his own advice when he defined a unilateral power gain for a linear two-port device, or U. After concentrating on easier problems with power gain in feedback amplifiers, a figure of merit for all three-terminal devices followed that is still used today as Mason's Invariant. Origin In 1953, transistors were only five years old, and they were the only successful solid-state three-terminal active device. They were beginning to be used for RF applications, and they were limited to VHF frequencies and below. Mason wanted to find a figure of merit to compare transistors, and this led him to discover that the unilateral power gain of a linear two-port device was an invariant figure of merit. In his paper Power Gain in Feedback Amplifiers published in 1953, Mason stated in his introduction, ""A vacuum tube, very often represented as a simple transconductance driving a passive impedance, may lead to relatively simple amplifier designs in which the input impedance (and hence the power gain) is effectively infinite, the voltage gain is the quantity of interest, and the input circuit is isolated from the load. The transistor, however, usually cannot be characterized so easily."" He wanted to find a metric to characterize and measure the quality of transistors since up until then, no such measure existed. His discovery turned out to have applications beyond transistors. Derivation of U Mason first defined the device being studied with the three constraints listed below. T" https://en.wikipedia.org/wiki/Viewpoints%3A%20Mathematical%20Perspective%20and%20Fractal%20Geometry%20in%20Art,"Viewpoints: Mathematical Perspective and Fractal Geometry in Art is a textbook on mathematics and art. It was written by mathematicians Marc Frantz and Annalisa Crannell, and published in 2011 by the Princeton University Press (). The Basic Library List Committee of the Mathematical Association of America has recommended it for inclusion in undergraduate mathematics libraries. Topics The first seven chapters of the book concern perspectivity, while its final two concern fractals and their geometry. Topics covered within the chapters on perspectivity include coordinate systems for the plane and for Euclidean space, similarity, angles, and orthocenters, one-point and multi-point perspective, and anamorphic art. In the fractal chapters, the topics include self-similarity, exponentiation, and logarithms, and fractal dimension. Beyond this mathematical material, the book also describes methods for artists to depict scenes in perspective, and for viewers of art to understand the perspectives in the artworks they see, for instance by finding the optimal point from which to view an artwork. 
The chapters are ordered by difficulty, and begin with experiments that the students can perform on their own to motivate the material in each chapter. The book is heavily illustrated by artworks and photography (such as the landscapes of Ansel Adams) and includes a series of essays or interviews by contemporary artists on the mathematical content of their artworks. An appendix contains suggestions aimed at teachers of this material. Audience and reception Viewpoints is intended as a textbook for mathematics classes aimed at undergraduate liberal arts students, as a way to show these students how geometry can be used in their everyday life. However, it could even be used for high school art students, and reviewer Paul Kelley writes that ""it will be of value to anyone interested in an elementary introduction to the mathematics and practice of perspective drawing"". It differs from many " https://en.wikipedia.org/wiki/List%20of%20undecidable%20problems,"In computability theory, an undecidable problem is a type of computational problem that requires a yes/no answer, but where there cannot possibly be any computer program that always gives the correct answer; that is, any possible program would sometimes give the wrong answer or run forever without giving any answer. More formally, an undecidable problem is a problem whose language is not a recursive set; see the article Decidable language. There are uncountably many undecidable problems, so the list below is necessarily incomplete. Though undecidable languages are not recursive languages, they may be subsets of Turing recognizable languages: i.e., such undecidable languages may be recursively enumerable. Many, if not most, undecidable problems in mathematics can be posed as word problems: determining when two distinct strings of symbols (encoding some mathematical concept or object) represent the same object or not. For undecidability in axiomatic mathematics, see List of statements undecidable in ZFC. Problems in logic Hilbert's Entscheidungsproblem. Type inference and type checking for the second-order lambda calculus (or equivalent). Determining whether a first-order sentence in the logic of graphs can be realized by a finite undirected graph. Trakhtenbrot's theorem - Finite satisfiability is undecidable. Satisfiability of first order Horn clauses. Problems about abstract machines The halting problem (determining whether a Turing machine halts on a given input) and the mortality problem (determining whether it halts for every starting configuration). Determining whether a Turing machine is a busy beaver champion (i.e., is the longest-running among halting Turing machines with the same number of states and symbols). Rice's theorem states that for all nontrivial properties of partial functions, it is undecidable whether a given machine computes a partial function with that property. The halting problem for a Minsky machine: a finite-state automaton w" https://en.wikipedia.org/wiki/Upsampling,"In digital signal processing, upsampling, expansion, and interpolation are terms associated with the process of resampling in a multi-rate digital signal processing system. Upsampling can be synonymous with expansion, or it can describe an entire process of expansion and filtering (interpolation). When upsampling is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (or density, as in the case of a photograph). 
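A minimal NumPy sketch of integer-factor upsampling as described in the section that follows (zero insertion, then lowpass interpolation filtering); the test signal, filter length, and window are illustrative assumptions:

```python
import numpy as np

def upsample(x: np.ndarray, L: int, taps: int = 33) -> np.ndarray:
    """Increase the sample rate of x by an integer factor L.

    Step 1 (expansion): insert L - 1 zeros between consecutive samples.
    Step 2 (interpolation): lowpass-filter the zero-stuffed sequence so the
    zeros are replaced by interpolated values.
    """
    # Expansion: zero-stuffing.
    expanded = np.zeros(len(x) * L)
    expanded[::L] = x

    # Interpolation filter: windowed-sinc lowpass with cutoff pi/L; its
    # passband gain of roughly L compensates for the inserted zeros.
    n = np.arange(taps) - (taps - 1) / 2
    h = np.sinc(n / L) * np.hamming(taps)

    return np.convolve(expanded, h, mode="same")

# Example: upsample a short sine wave by a factor of 4 (64 -> 256 samples).
t = np.arange(64)
y = upsample(np.sin(2 * np.pi * t / 16), L=4)
print(len(y))  # 256
```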
For example, if compact disc audio at 44,100 samples/second is upsampled by a factor of 5/4, the resulting sample-rate is 55,125. Upsampling by an integer factor Rate increase by an integer factor L can be explained as a 2-step process, with an equivalent implementation that is more efficient: Expansion: Create a sequence, comprising the original samples, separated by L − 1 zeros.  A notation for this operation is:  Interpolation: Smooth out the discontinuities with a lowpass filter, which replaces the zeros. In this application, the filter is called an interpolation filter, and its design is discussed below. When the interpolation filter is an FIR type, its efficiency can be improved, because the zeros contribute nothing to its dot product calculations. It is an easy matter to omit them from both the data stream and the calculations. The calculation performed by a multirate interpolating FIR filter for each output sample is a dot product: where the h[•] sequence is the impulse response of the interpolation filter, and K is the largest value of k for which h[j + kL] is non-zero. In the case L = 2, h[•] can be designed as a half-band filter, where almost half of the coefficients are zero and need not be included in the dot products. Impulse response coefficients taken at intervals of L form a subsequence, and there are L such subsequences (called phases) multiplexed together. Each of L phases of the impulse respons" https://en.wikipedia.org/wiki/Food%20Weekly%20News,"Food Weekly News is a weekly food science and agricultural newspaper reporting on the latest developments in research in food production. It is published by Vertical News, an imprint of NewsRx, LLC. External links Articles on HighBeam Research Food science Newspapers published in Atlanta Agricultural magazines Weekly newspapers published in the United States" https://en.wikipedia.org/wiki/Random%20Fibonacci%20sequence,"In mathematics, the random Fibonacci sequence is a stochastic analogue of the Fibonacci sequence defined by the recurrence relation , where the signs + or − are chosen at random with equal probability , independently for different . By a theorem of Harry Kesten and Hillel Furstenberg, random recurrent sequences of this kind grow at a certain exponential rate, but it is difficult to compute the rate explicitly. In 1999, Divakar Viswanath showed that the growth rate of the random Fibonacci sequence is equal to 1.1319882487943... , a mathematical constant that was later named Viswanath's constant. Description A random Fibonacci sequence is an integer random sequence given by the numbers for natural numbers , where and the subsequent terms are chosen randomly according to the random recurrence relation An instance of the random Fibonacci sequence starts with 1,1 and the value of the each subsequent term is determined by a fair coin toss: given two consecutive elements of the sequence, the next element is either their sum or their difference with probability 1/2, independently of all the choices made previously. If in the random Fibonacci sequence the plus sign is chosen at each step, the corresponding instance is the Fibonacci sequence (Fn), If the signs alternate in minus-plus-plus-minus-plus-plus-... pattern, the result is the sequence However, such patterns occur with vanishing probability in a random experiment. 
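A small Monte Carlo sketch in Python of the recurrence just described (illustrative, not from the article): it runs one instance of the random Fibonacci sequence and estimates its exponential growth rate, which should drift toward Viswanath's constant 1.1319882... as the number of steps grows.

```python
import math
import random

def random_fibonacci_growth(n: int = 1_000_000, seed: int = 0) -> float:
    """Estimate |t_n|**(1/n) for t_n = t_{n-1} +/- t_{n-2}, each sign chosen
    independently and uniformly at random, starting from t_1 = t_2 = 1."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0
    log_scale = 0.0  # logarithm accumulated from periodic rescaling
    for _ in range(n):
        a, b = b, (b + a) if rng.random() < 0.5 else (b - a)
        m = max(abs(a), abs(b))
        if m > 1e100:  # rescale to keep the floats from overflowing
            a, b = a / m, b / m
            log_scale += math.log(m)
    return math.exp((log_scale + math.log(max(abs(a), abs(b)))) / n)

print(random_fibonacci_growth())  # roughly 1.13, approaching 1.1319882...
```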
In a typical run, the terms will not follow a predictable pattern: Similarly to the deterministic case, the random Fibonacci sequence may be profitably described via matrices: where the signs are chosen independently for different n with equal probabilities for + or −. Thus where (Mk) is a sequence of independent identically distributed random matrices taking values A or B with probability 1/2: Growth rate Johannes Kepler discovered that as n increases, the ratio of the successive terms of the Fibonacci sequence (Fn) approaches the golden ratio wh" https://en.wikipedia.org/wiki/Jacobian,"In mathematics, a Jacobian, named for Carl Gustav Jacob Jacobi, may refer to: Jacobian matrix and determinant Jacobian elliptic functions Jacobian variety Intermediate Jacobian Mathematical terminology" https://en.wikipedia.org/wiki/Comparison%20of%20instruction%20set%20architectures,"An instruction set architecture (ISA) is an abstract model of a computer, also referred to as computer architecture. A realization of an ISA is called an implementation. An ISA permits multiple implementations that may vary in performance, physical size, and monetary cost (among other things); because the ISA serves as the interface between software and hardware. Software that has been written for an ISA can run on different implementations of the same ISA. This has enabled binary compatibility between different generations of computers to be easily achieved, and the development of computer families. Both of these developments have helped to lower the cost of computers and to increase their applicability. For these reasons, the ISA is one of the most important abstractions in computing today. An ISA defines everything a machine language programmer needs to know in order to program a computer. What an ISA defines differs between ISAs; in general, ISAs define the supported data types, what state there is (such as the main memory and registers) and their semantics (such as the memory consistency and addressing modes), the instruction set (the set of machine instructions that comprises a computer's machine language), and the input/output model. Base In the early decades of computing, there were computers that used binary, decimal and even ternary. Contemporary computers are almost exclusively binary. Bits Computer architectures are often described as n-bit architectures. In the 20th century, n is often 8, 16, or 32, and in the 21st century, n is often 16, 32 or 64, but other sizes have been used (including 6, 12, 18, 24, 30, 36, 39, 48, 60, 128). This is actually a simplification as computer architecture often has a few more or less ""natural"" data sizes in the instruction set, but the hardware implementation of these may be very different. Many instruction set architectures have instructions that, on some implementations of that instruction set architecture, operat" https://en.wikipedia.org/wiki/Open%20JTAG,"The Open JTAG project is an open source project released under GNU License. It is a complete hardware and software JTAG reference design, based on a simple hardware composed by a FTDI FT245 USB front-end and an Altera EPM570 MAX II CPLD. The capabilities of this hardware configuration make the Open JTAG device able to output TCK signals at 24 MHz using macro-instructions sent from the host end. The scope is to give the community a JTAG device not based on the PC parallel port: Open JTAG uses the USB channel to communicate with the internal CPLD, sending macro-instructions as fast as possible. 
The complete project (Beta version) is available at OpenCores.org and the Open JTAG project official site." https://en.wikipedia.org/wiki/List%20of%20environmental%20sampling%20techniques,"Environmental sampling techniques are used in biology, ecology and conservation as part of scientific studies to learn about the flora and fauna of a particular area and establish a habitat's biodiversity, the abundance of species and the conditions in which these species live amongst other information. Where species are caught, researchers often then take the trapped organisms for further study in a lab or are documented by a researcher in the field before the animal is released. This information can then be used to better understand the environment, its ecology, the behaviour of species and how organisms interact with one another and their environment. Here is a list of some sampling techniques and equipment used in environmental sampling: Quadrats - used for plants and slow moving animals Techniques for Birds and/or Flying Invertebrates and/or Bats Malaise Trap Flight Interception Trap Harp Trap Robinson Trap Butterfly Net Mist Net Techniques for Terrestrial Animals Transect Tullgren Funnel - used for soil-living arthropods Pitfall Trap - used for small terrestrial animals like insects and amphibians Netting techniques for terrestrial animals Beating Net - used for insects dwelling in trees and shrubs Sweep Netting - used for insects in grasses Aspirator/Pooter - used for insects Camera Trap - used for larger animals Sherman Trap - used for small mammals See also Insect Collecting Wildlife Biology Sampling Sources Scientific method Survey methodology Scientific observation Biological techniques and tools" https://en.wikipedia.org/wiki/Network%20simulation,"In computer network research, network simulation is a technique whereby a software program replicates the behavior of a real network. This is achieved by calculating the interactions between the different network entities such as routers, switches, nodes, access points, links, etc. Most simulators use discrete event simulation in which the modeling of systems in which state variables change at discrete points in time. The behavior of the network and the various applications and services it supports can then be observed in a test lab; various attributes of the environment can also be modified in a controlled manner to assess how the network/protocols would behave under different conditions. Network simulator A network simulator is a software program that can predict the performance of a computer network or a wireless communication network. Since communication networks have become too complex for traditional analytical methods to provide an accurate understanding of system behavior, network simulators are used. In simulators, the computer network is modeled with devices, links, applications, etc., and the network performance is reported. Simulators come with support for the most popular technologies and networks in use today such as 5G, Internet of Things (IoT), Wireless LANs, mobile ad hoc networks, wireless sensor networks, vehicular ad hoc networks, cognitive radio networks, LTE etc. Simulations Most of the commercial simulators are GUI driven, while some network simulators are CLI driven. The network model/configuration describes the network (nodes, routers, switches, links) and the events (data transmissions, packet error, etc.). Output results would include network-level metrics, link metrics, device metrics etc. 
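A toy discrete-event sketch in Python (purely illustrative; the event types, node names, and trace format are assumptions, not taken from any particular simulator): pending events sit in a time-ordered queue, each processed event may schedule follow-up events, and everything is logged to a trace, which connects to the trace files discussed next.

```python
import heapq

def simulate(initial_events, link_delay=1.5):
    """Minimal discrete-event loop: pop the earliest event, process it,
    possibly schedule a follow-up event, and log every event to a trace."""
    queue = list(initial_events)   # entries are (time, kind, source, destination)
    heapq.heapify(queue)
    trace = []
    while queue:
        time, kind, src, dst = heapq.heappop(queue)
        trace.append(f"{time:8.3f}  {kind:4s}  {src} -> {dst}")
        if kind == "send":         # a send schedules a receive after the link delay
            heapq.heappush(queue, (time + link_delay, "recv", src, dst))
    return trace

# Two packets injected at t = 0.0 and t = 0.4 on a single link A -> B.
for line in simulate([(0.0, "send", "A", "B"), (0.4, "send", "A", "B")]):
    print(line)
```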
Further drill-down, in the form of simulation trace files, is also available. Trace files log every packet and every event that occurred in the simulation and are used for analysis. Most network simulators use discrete event simulation, in which a lis" https://en.wikipedia.org/wiki/Porter%27s%20constant,"In mathematics, Porter's constant C arises in the study of the efficiency of the Euclidean algorithm. It is named after J. W. Porter of University College, Cardiff. Euclid's algorithm finds the greatest common divisor of two positive integers and . Hans Heilbronn proved that the average number of iterations of Euclid's algorithm, for fixed and averaged over all choices of relatively prime integers , is Porter showed that the error term in this estimate is a constant, plus a polynomially-small correction, and Donald Knuth evaluated this constant to high accuracy. It is: where is the Euler–Mascheroni constant is the Riemann zeta function is the Glaisher–Kinkelin constant See also Lochs' theorem Lévy's constant" https://en.wikipedia.org/wiki/Biomolecular%20engineering,"Biomolecular engineering is the application of engineering principles and practices to the purposeful manipulation of molecules of biological origin. Biomolecular engineers integrate knowledge of biological processes with the core knowledge of chemical engineering in order to focus on molecular level solutions to issues and problems in the life sciences related to the environment, agriculture, energy, industry, food production, biotechnology and medicine. Biomolecular engineers purposefully manipulate carbohydrates, proteins, nucleic acids and lipids within the framework of the relation between their structure (see: nucleic acid structure, carbohydrate chemistry, protein structure), function (see: protein function) and properties and in relation to applicability to such areas as environmental remediation, crop and livestock production, biofuel cells and biomolecular diagnostics. The thermodynamics and kinetics of molecular recognition in enzymes, antibodies, DNA hybridization, bio-conjugation/bio-immobilization and bioseparations are studied. Attention is also given to the rudiments of engineered biomolecules in cell signaling, cell growth kinetics, biochemical pathway engineering and bioreactor engineering. Timeline History During World War II, the need for large quantities of penicillin of acceptable quality brought together chemical engineers and microbiologists to focus on penicillin production. This created the right conditions to start a chain of reactions that led to the creation of the field of biomolecular engineering. Biomolecular engineering was first defined in 1992 by the U.S. National Institutes of Health as ""research at the interface of chemical engineering and biology with an emphasis at the molecular level"". Although first defined as research, biomolecular engineering has since become an academic discipline and a field of engineering practice. Herceptin, a humanized Mab for breast cancer treatment, became the first drug designed by a biomolecula" https://en.wikipedia.org/wiki/Datasource,"DataSource is a name given to the connection set up to a database from a server. The name is commonly used when creating a query to the database. The data source name (DSN) need not be the same as the filename for the database. For example, a database file named friends.mdb could be set up with a DSN of school. Then DSN school would be used to refer to the database when performing a query. 
Sun's version of DataSource A factory for connections to the physical data source that this DataSource object represents. An alternative to the DriverManager facility, a DataSource object is the preferred means of getting a connection. An object that implements the DataSource interface will typically be registered with a naming service based on the Java Naming and Directory Interface (JNDI) API. The DataSource interface is implemented by a driver vendor. There are three types of implementations: Basic implementation — produces a standard Connection object Connection pooling implementation — produces a Connection object that will automatically participate in connection pooling. This implementation works with a middle-tier connection pooling manager. Distributed transaction implementation — produces a Connection object that may be used for distributed transactions and almost always participates in connection pooling. This implementation works with a middle-tier transaction manager and almost always with a connection pooling manager. A DataSource object has properties that can be modified when necessary. For example, if the data source is moved to a different server, the property for the server can be changed. The benefit is that because the data source's properties can be changed, any code accessing that data source does not need to be changed. A driver that is accessed via a DataSource object does not register itself with the DriverManager. Rather, a DataSource object is retrieved through a lookup operation and then used to create a Connection object. With a basic implement" https://en.wikipedia.org/wiki/Mechanical%20calculator,"A mechanical calculator, or calculating machine, is a mechanical device used to perform the basic operations of arithmetic automatically, or (historically) a simulation such as an analog computer or a slide rule. Most mechanical calculators were comparable in size to small desktop computers and have been rendered obsolete by the advent of the electronic calculator and the digital computer. Surviving notes from Wilhelm Schickard in 1623 reveal that he designed and had built the earliest of the modern attempts at mechanizing calculation. His machine was composed of two sets of technologies: first an abacus made of Napier's bones, to simplify multiplications and divisions first described six years earlier in 1617, and for the mechanical part, it had a dialed pedometer to perform additions and subtractions. A study of the surviving notes shows a machine that would have jammed after a few entries on the same dial, and that it could be damaged if a carry had to be propagated over a few digits (like adding 1 to 999). Schickard abandoned his project in 1624 and never mentioned it again until his death 11 years later in 1635. Two decades after Schickard's supposedly failed attempt, in 1642, Blaise Pascal decisively solved these particular problems with his invention of the mechanical calculator. Co-opted into his father's labour as tax collector in Rouen, Pascal designed the calculator to help in the large amount of tedious arithmetic required; it was called Pascal's Calculator or Pascaline. In 1672, Gottfried Leibniz started designing an entirely new machine called the Stepped Reckoner. It used a stepped drum, built by and named after him, the Leibniz wheel, was the first two-motion calculator, the first to use cursors (creating a memory of the first operand) and the first to have a movable carriage. Leibniz built two Stepped Reckoners, one in 1694 and one in 1706. 
The Leibniz wheel was used in many calculating machines for 200 years, and into the 1970s with the Curta h" https://en.wikipedia.org/wiki/POKEY,"POKEY, an acronym for Pot Keyboard Integrated Circuit, is a digital I/O chip designed by Doug Neubauer at Atari, Inc. for the Atari 8-bit family of home computers. It was first released with the Atari 400 and Atari 800 in 1979 and is included in all later models and the Atari 5200 console. POKEY combines functions for reading paddle controllers (potentiometers) and computer keyboards as well as sound generation and a source for pseudorandom numbers. It produces four voices of distinctive square wave audio, either as clear tones or modified with distortion settings. Neubauer also developed the Atari 8-bit killer application Star Raiders which makes use of POKEY features. POKEY chips are used for audio in many arcade video games of the 1980s including Centipede, Missile Command, Asteroids Deluxe, and Gauntlet. Some of Atari's arcade systems use multi-core versions with 2 or 4 POKEYs in a single package for more audio channels. The Atari 7800 console allows a game cartridge to contain a POKEY, providing better sound than the system's audio chip. Only two licensed games make use of this: the ports of Ballblazer and Commando. The LSI chip has 40 pins and is identified as C012294. The USPTO granted U.S. Patent 4,314,236 to Atari on February 2, 1982 for an ""Apparatus for producing a plurality of audio sound effects"". The inventors listed are Steven T. Mayer and Ronald E. Milner. No longer manufactured, POKEY is emulated in software by arcade and Atari 8-bit emulators and also via the Atari SAP music format and associated player. Features Audio 4 semi-independent audio channels Channels may be configured as one of: Four 8-bit channels Two 16-bit channels One 16-bit channel and two 8-bit channels Per-channel volume, frequency, and waveform (square wave with variable duty cycle or pseudorandom noise) 15 kHz or 64 kHz frequency divider. Two channels may be driven at the CPU clock frequency. High-pass filter Keyboard scan (up to 64 keys) + 2 modifier bits (Shift" https://en.wikipedia.org/wiki/Look-and-say%20sequence,"In mathematics, the look-and-say sequence is the sequence of integers beginning as follows: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... . To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example: 1 is read off as ""one 1"" or 11. 11 is read off as ""two 1s"" or 21. 21 is read off as ""one 2, one 1"" or 1211. 1211 is read off as ""one 1, one 2, two 1s"" or 111221. 111221 is read off as ""three 1s, two 2s, one 1"" or 312211. The look-and-say sequence was analyzed by John Conway after he was introduced to it by one of his students at a party. The idea of the look-and-say sequence is similar to that of run-length encoding. If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows: d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, … Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence . (for d = 2, see ) Basic properties Growth The sequence grows indefinitely. In fact, any variant defined by starting with a different integer seed number will (eventually) also grow indefinitely, except for the degenerate sequence: 22, 22, 22, 22, ... 
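A short Python sketch of the read-off rule described above (illustrative; itertools.groupby is simply one convenient way to count runs of equal digits):

```python
from itertools import groupby

def look_and_say(term: str) -> str:
    """Read off runs of identical digits: '1211' -> 'one 1, one 2, two 1s' -> '111221'."""
    return "".join(f"{len(list(run))}{digit}" for digit, run in groupby(term))

seq = ["1"]
for _ in range(7):
    seq.append(look_and_say(seq[-1]))
print(seq)  # ['1', '11', '21', '1211', '111221', '312211', '13112221', '1113213211']
```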
Digits presence limitation No digits other than 1, 2, and 3 appear in the sequence, unless the seed number contains such a digit or a run of more than three of the same digit. Cosmological decay Conway's cosmological theorem asserts that every sequence eventually splits (""decays"") into a sequence of ""atomic elements"", which are finite subsequences that never again interact with their neighbors. There are 92 elements containing the digits 1, 2, and 3 only, which John Conway named after the 92 naturally-occurring chemical elements up to uranium, calling the sequence audioactive. There are also two ""transuranic"" elements (Np and Pu) for each digit other t" https://en.wikipedia.org/wiki/Zero%20crossing,"A zero-crossing is a point where the sign of a mathematical function changes (e.g. from positive to negative), represented by an intercept of the axis (zero value) in the graph of the function. It is a commonly used term in electronics, mathematics, acoustics, and image processing. In electronics In alternating current, the zero-crossing is the instantaneous point at which there is no voltage present. In a sine wave or other simple waveform, this normally occurs twice during each cycle. A zero-crossing detector is a device for detecting the point where the voltage crosses zero in either direction. The zero-crossing is important for systems that send digital data over AC circuits, such as modems, X10 home automation control systems, and Digital Command Control type systems for Lionel and other AC model trains. Counting zero-crossings is also a method used in speech processing to estimate the fundamental frequency of speech. In a system where an amplifier with digitally controlled gain is applied to an input signal, artifacts in the non-zero output signal occur when the gain of the amplifier is abruptly switched between its discrete gain settings. At audio frequencies, such as in modern consumer electronics like digital audio players, these effects are clearly audible, resulting in a 'zipping' sound when rapidly ramping the gain or a soft 'click' when a single gain change is made. Such artifacts are disconcerting and undesirable. If changes are made only at zero-crossings of the input signal, then no matter how the amplifier gain setting changes, the output also remains at zero, thereby minimizing the change. (The instantaneous change in gain will still produce distortion, but it will not produce a click.) If electrical power is to be switched, no electrical interference is generated if switched at an instant when there is no current—a zero crossing. Early light dimmers and similar devices generated interference; later versions were designed to switch at the zero crossing. In " https://en.wikipedia.org/wiki/PLL%20multibit,"A PLL multibit or multibit PLL is a phase-locked loop (PLL) which achieves improved performance compared to a unibit PLL by using more bits. Unibit PLLs use only the most significant bit (MSB) of each counter's output bus to measure the phase, while multibit PLLs use more bits. PLLs are an essential component in telecommunications. Multibit PLLs achieve improved efficiency and performance: better utilization of the frequency spectrum, to serve more users at a higher quality of service (QoS), reduced RF transmit power, and reduced power consumption in cellular phones and other wireless devices. 
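A toy numerical illustration in Python of the difference stated above (my own sketch under simplifying assumptions, not a model of real PLL hardware): measuring phase from only the counters' most significant bits gives a coarse, bang-bang error, whereas comparing the full counter words gives an error proportional to the phase difference.

```python
def phase_error_unibit(ref_count: int, out_count: int, bits: int = 8) -> int:
    """Compare only the most significant bit of each counter; the result is a
    coarse error of -1, 0, or +1."""
    shift = bits - 1
    return (ref_count >> shift & 1) - (out_count >> shift & 1)

def phase_error_multibit(ref_count: int, out_count: int, bits: int = 8) -> int:
    """Compare the full counter words; the result is proportional to the
    phase difference, wrapped to one counter period."""
    modulus = 1 << bits
    diff = (ref_count - out_count) % modulus
    return diff - modulus if diff >= modulus // 2 else diff

print(phase_error_unibit(130, 126), phase_error_multibit(130, 126))  # 1 4
```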
Concepts A phase-locked loop is an electronic component or system comprising a closed loop for controlling the phase of an oscillator while comparing it with the phase of an input or reference signal. An indirect frequency synthesizer uses a PLL. In an all-digital PLL, a voltage-controlled oscillator (VCO) is controlled using a digital, rather than analog, control signal. The phase detector gives a signal proportional to the phase difference between two signals; in a PLL, one signal is the reference, and the other is the output of the controlled oscillator (or a divider driven by the oscillator). In a unibit phase-locked loop, the phase is measured using only one bit of the reference and output counters, the most significant bit (MSB). In a multibit phase-locked loop, the phase is measured using more than one bit of the reference and output counters, usually including the most significant bit. Unibit PLL In unibit PLLs, the output frequency is defined by the input frequency and the modulo count of the two counters. In each counter, only the most significant bit (MSB) is used. The other output lines of the counters are ignored; this is wasted information. PLL structure and performance A PLL includes a phase detector, filter and oscillator connected in a closed loop, so the oscillator frequency follows (equals) the input frequency. Although the average output frequency equ" https://en.wikipedia.org/wiki/Abstraction%20%28computer%20science%29,"In software engineering and computer science, abstraction is the process of generalizing concrete details, such as attributes, away from the study of objects and systems to focus attention on details of greater importance. Abstraction is a fundamental concept in computer science and software engineering, especially within the object-oriented programming paradigm. Examples of this include: the usage of abstract data types to separate usage from working representations of data within programs; the concept of functions or subroutines which represent a specific way of implementing control flow; the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages. Rationale Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create the enormous systems by concentrating on a few issues at a time. These architectures are made of specific choices of abstractions. Greenspun's Tenth Rule is an aphorism on how such an architecture is both inevitable and complex. A central form of abstraction in computing is language abstraction: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning. Computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from the machine language to the assembly language and the high-level language. Each stage can be used as a stepping stone for the next stage. The language abstraction continues for example in scripting languages and domain-specific programming languages. Within a programming language, some features let the programmer create new abstractions. 
These include subroutines, modules, polymorphism, and software componen" https://en.wikipedia.org/wiki/Forensic%20biology,"Forensic biology is the use of biological principles and techniques in the context of law enforcement investigations. Forensic biology mainly focuses on DNA sequencing of biological matter found at crime scenes. This assists investigators in identifying potential suspects or unidentified bodies. Forensic biology has many sub-branches, such as forensic anthropology, forensic entomology, forensic odontology, forensic pathology, and forensic toxicology. Disciplines History The first known records of forensic procedures still used today date back to the 7th century, when fingerprints were used as a means of identification. By the 7th century, forensic procedures were used, among other things, to establish the guilt of criminals. Nowadays, the practice of autopsies and forensic investigations has seen a significant surge in both public interest and technological advancement. One of the early pioneers in employing these methods, which would later evolve into the field of forensics, was Alphonse Bertillon, who is also known as the ""father of criminal identification"". In 1879, he introduced a scientific approach to personal identification by developing the science of anthropometry. This method involved a series of body measurements for distinguishing one human individual from another. Karl Landsteiner later made further significant discoveries in forensics. In 1901, he found that blood could be categorized into different groups: A, B, AB, and O, and thus blood typing was introduced to the world of crime-solving. This development led to further studies and eventually opened a whole new area of criminology within the fields of medicine and forensics. Dr Leone Lattes, a professor at the Institute of Forensic Medicine in Turin, Italy, also made significant contributions to forensics. In 1915, he discovered a method to determine the blood group of dried bloodstains, which marked a significant advancement from prior techn" https://en.wikipedia.org/wiki/The%20Equidistribution%20of%20Lattice%20Shapes%20of%20Rings%20of%20Integers%20of%20Cubic%2C%20Quartic%2C%20and%20Quintic%20Number%20Fields,"The Equidistribution of Lattice Shapes of Rings of Integers of Cubic, Quartic, and Quintic Number Fields: An Artist's Rendering is a mathematics book by Piper Harron (also known as Piper H), based on her Princeton University doctoral thesis of the same title. It has been described as ""feminist"", ""unique"", ""honest"", ""generous"", and ""refreshing"". Thesis and reception Harron was advised by Fields Medalist Manjul Bhargava, and her thesis deals with the properties of number fields, specifically the shape of their rings of integers. Harron and Bhargava showed that, viewed as a lattice in real vector space, the ring of integers of a random number field does not have any special symmetries. Rather than simply presenting the proof, Harron intended for the thesis and book to explain both the mathematics and the process (and struggle) that was required to reach this result. The writing is accessible and informal, and the book features sections targeting three different audiences: laypeople, people with general mathematical knowledge, and experts in number theory. Harron intentionally departs from the typical academic format as she is writing for a community of mathematicians who ""do not feel that they are encouraged to be themselves"". 
Unusually for a mathematics thesis, Harron intersperses her rigorous analysis and proofs with cartoons, poetry, pop-culture references, and humorous diagrams. Science writer Evelyn Lamb, in Scientific American, expresses admiration for Harron for explaining the process behind the mathematics in a way that is accessible to non-mathematicians, especially ""because as a woman of color, she could pay a higher price for doing it."" Mathematician Philp Ording calls her approach to communicating mathematical abstractions ""generous"". Her thesis went viral in late 2015, especially within the mathematical community, in part because of the prologue which begins by stating that ""respected research math is dominated by men of a certain attitude"". Harron had" https://en.wikipedia.org/wiki/List%20of%20American%20Physical%20Society%20prizes%20and%20awards,"The American Physical Society gives out a number of awards for research excellence and conduct; topics include outstanding leadership, computational physics, lasers, mathematics, and more. Prizes David Adler Lectureship Award in the Field of Materials Physics The David Adler Lectureship Award in the Field of Materials Physics is a prize that has been awarded annually by the American Physical Society since 1988. The recipient is chosen for being ""an outstanding contributor to the field of materials physics, who is noted for the quality of his/her research, review articles and lecturing."" The prize is named after physicist David Adler with contributions to the endowment by friends of David Adler and Energy Conversion Devices, Inc. The winner receives a $5,000 honorarium. Will Allis Prize for the Study of Ionized Gases Will Allis Prize for the Study of Ionized Gases is awarded biannually ""for outstanding contributions to understanding the physics of partially ionized plasmas and gases"" in honour of Will Allis. The $10000 prize was founded in 1989 by contributions from AT&T, General Electric, GTE, International Business Machines, and Xerox Corporations. Early Career Award for Soft Matter Research This award recognizes outstanding and sustained contributions by an early-career researcher to the soft matter field. LeRoy Apker Award The LeRoy Apker Award was established in 1978 to recognize outstanding achievements in physics by undergraduate students. Two awards are presented each year, one to a student from a Ph.D. granting institution, and one to a student from a non-Ph.D. granting institution. APS Medal for Exceptional Achievement in Research The APS Medal for Exceptional Achievement in Research was established in 2016 to recognize contributions of the highest level that advance our knowledge and understanding of the physical universe. The medal carries with it a prize of $50,000 and is the largest APS prize to recognize the achievement of researchers from acro" https://en.wikipedia.org/wiki/High%20Precision%20Event%20Timer,"The High Precision Event Timer (HPET) is a hardware timer available in modern x86-compatible personal computers. Compared to older types of timers available in the x86 architecture, HPET allows more efficient processing of highly timing-sensitive applications, such as multimedia playback and OS task switching. It was developed jointly by Intel and Microsoft and has been incorporated in PC chipsets since 2005. Formerly referred to by Intel as a Multimedia Timer, the term HPET was selected to avoid confusion with the software multimedia timers introduced in the MultiMedia Extensions to Windows 3.0. 
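A toy software model in Python of the comparator behaviour described in the Features section below (an assumption-laden sketch, not actual driver code or register layout): an interrupt fires when the low bits of the main counter match a comparator register, and periodic mode re-arms the comparator by adding the period.

```python
def hpet_comparator_matches(counter_values, comparator, period=None, width=32):
    """Return the main-counter values at which a toy HPET-style comparator
    would fire.  One-shot mode fires once; periodic mode (period given)
    advances the comparator by `period` after every match."""
    mask = (1 << width) - 1
    fired = []
    for value in counter_values:
        if (value & mask) == (comparator & mask):
            fired.append(value)
            if period is None:            # one-shot mode
                break
            comparator = (comparator + period) & mask
    return fired

print(hpet_comparator_matches(range(50), comparator=7, period=10))  # [7, 17, 27, 37, 47]
```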
Older operating systems that do not support a hardware HPET device can only use older timing facilities, such as the programmable interval timer (PIT) or the real-time clock (RTC). Windows XP, when fitted with the latest hardware abstraction layer (HAL), can also use the processor's Time Stamp Counter (TSC), or ACPI Power Management Timer (ACPI PMTIMER), together with the RTC to provide operating system features that would, in later Windows versions, be provided by the HPET hardware. Confusingly, such Windows XP systems quote ""HPET"" connectivity in the device driver manager even though the Intel HPET device is not being used. Features An HPET chip consists of a 64-bit up-counter (main counter) counting at a frequency of at least 10 MHz, and a set of (at least three, up to 256) comparators. These comparators are 32- or 64-bit-wide. The HPET is programmed via a memory mapped I/O window that is discoverable via ACPI. The HPET circuit in modern PCs is integrated into the southbridge chip. Each comparator can generate an interrupt when the least significant bits are equal to the corresponding bits of the 64-bit main counter value. The comparators can be put into one-shot mode or periodic mode, with at least one comparator supporting periodic mode and all of them supporting one-shot mode. In one-shot mode the comparator fires an interrupt once when the main counter reaches the" https://en.wikipedia.org/wiki/Supersymmetry,"Supersymmetry is a theoretical framework in physics that suggests the existence of a symmetry between particles with integer spin (bosons) and particles with half-integer spin (fermions). It proposes that for every known particle, there exists a partner particle with different spin properties. This symmetry has not been observed in nature. If confirmed, it could help explain certain phenomena, such as the nature of dark matter and the hierarchy problem in particle physics. A supersymmetric theory is a theory in which the equations for force and the equations for matter are identical. In theoretical and mathematical physics, any theory with this property has the principle of supersymmetry (SUSY). Dozens of supersymmetric theories exist. In theory, supersymmetry is a type of spacetime symmetry between two basic classes of particles: bosons, which have an integer-valued spin and follow Bose–Einstein statistics, and fermions, which have a half-integer-valued spin and follow Fermi–Dirac statistics. In supersymmetry, each particle from the class of fermions would have an associated particle in the class of bosons, and vice versa, known as a superpartner. The spin of a particle's superpartner is different by a half-integer. For example, if the electron exists in a supersymmetric theory, then there would be a particle called a selectron (superpartner electron), a bosonic partner of the electron. In the simplest supersymmetry theories, with perfectly ""unbroken"" supersymmetry, each pair of superpartners would share the same mass and internal quantum numbers besides spin. More complex supersymmetry theories have a spontaneously broken symmetry, allowing superpartners to differ in mass. Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. 
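As an illustration of the HPET counter arithmetic described above, the following Python sketch converts a desired one-shot delay into a comparator target value. It assumes the counter period is known in femtoseconds per tick (the value the HPET capabilities register is specified to report); the specific numbers are made up for the example.

def hpet_oneshot_target(main_counter_now, delay_us, period_fs):
    # 1 microsecond = 1e9 femtoseconds
    ticks = (delay_us * 1_000_000_000) // period_fs
    # the 64-bit main counter wraps around, so keep the target within 64 bits
    return (main_counter_now + ticks) & 0xFFFFFFFFFFFFFFFF

# example: a 10 MHz counter has a period of 100_000_000 fs per tick
print(hpet_oneshot_target(main_counter_now=0x1234, delay_us=1500, period_fs=100_000_000))  # 0x1234 + 15000 ticks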
Supersymmetry has also been appli" https://en.wikipedia.org/wiki/Proportionality%20%28mathematics%29,"In mathematics, two sequences of numbers, often experimental data, are proportional or directly proportional if their corresponding elements have a constant ratio. The ratio is called coefficient of proportionality (or proportionality constant) and its reciprocal is known as constant of normalization (or normalizing constant). Two sequences are inversely proportional if corresponding elements have a constant product, also called the coefficient of proportionality. This definition is commonly extended to related varying quantities, which are often called variables. This meaning of variable is not the common meaning of the term in mathematics (see variable (mathematics)); these two different concepts share the same name for historical reasons. Two functions and are proportional if their ratio is a constant function. If several pairs of variables share the same direct proportionality constant, the equation expressing the equality of these ratios is called a proportion, e.g., (for details see Ratio). Proportionality is closely related to linearity. Direct proportionality Given an independent variable x and a dependent variable y, y is directly proportional to x if there is a non-zero constant k such that: The relation is often denoted using the symbols ""∝"" (not to be confused with the Greek letter alpha) or ""~"": (or ) For the proportionality constant can be expressed as the ratio: It is also called the constant of variation or constant of proportionality. A direct proportionality can also be viewed as a linear equation in two variables with a y-intercept of and a slope of k. This corresponds to linear growth. Examples If an object travels at a constant speed, then the distance traveled is directly proportional to the time spent traveling, with the speed being the constant of proportionality. The circumference of a circle is directly proportional to its diameter, with the constant of proportionality equal to . On a map of a sufficiently small" https://en.wikipedia.org/wiki/List%20of%20continuity-related%20mathematical%20topics,"In mathematics, the terms continuity, continuous, and continuum are used in a variety of related ways. Continuity of functions and measures Continuous function Absolutely continuous function Absolute continuity of a measure with respect to another measure Continuous probability distribution: Sometimes this term is used to mean a probability distribution whose cumulative distribution function (c.d.f.) is (simply) continuous. Sometimes it has a less inclusive meaning: a distribution whose c.d.f. is absolutely continuous with respect to Lebesgue measure. This less inclusive sense is equivalent to the condition that every set whose Lebesgue measure is 0 has probability 0. Geometric continuity Parametric continuity Continuum Continuum (set theory), the real line or the corresponding cardinal number Linear continuum, any ordered set that shares certain properties of the real line Continuum (topology), a nonempty compact connected metric space (sometimes a Hausdorff space) Continuum hypothesis, a conjecture of Georg Cantor that there is no cardinal number between that of countably infinite sets and the cardinality of the set of all real numbers. The latter cardinality is equal to the cardinality of the set of all subsets of a countably infinite set. 
Cardinality of the continuum, a cardinal number that represents the size of the set of real numbers See also Continuous variable Mathematical analysis Mathematics-related lists" https://en.wikipedia.org/wiki/Index%20notation,"In mathematics and computer programming, index notation is used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, a vector, or a matrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing a computer program. In mathematics It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can be integers or variables. The array takes the form of tensors in general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases are vectors (1d arrays) and matrices (2d arrays). The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation of tensor operations). See the main article for further details. One-dimensional arrays (vectors) A vector treated as an array of numbers by writing as a row vector or column vector (whichever is used depends on convenience or context): Index notation allows indication of the elements of the array by simply writing ai, where the index i is known to run from 1 to n, because of n-dimensions. For example, given the vector: then some entries are . The notation can be applied to vectors in mathematics and physics. The following vector equation can also be written in terms of the elements of the vector (aka components), that is where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each have n elements, meaning i = 1,2,…n, then the equations are explicitly Hence, index notation serves as an efficient shorthand for representing the general structure to an equation, while applicable to individual components. Two-dimensional arrays More than one index is used to describe arrays of number" https://en.wikipedia.org/wiki/NCR%2053C9x,"The NCR 53C9x is a family of application-specific integrated circuits (ASIC) produced by the former NCR Corporation and others for implementing the SCSI (small computer standard interface) bus protocol in hardware and relieving the host system of the work required to sequence the SCSI bus. The 53C9x was a low-cost solution and was therefore widely adopted by OEMs in various motherboard and peripheral device designs. The original 53C90 lacked direct memory access (DMA) capability, an omission that was addressed in the 53C90A and subsequent versions. The 53C90(A) and later 53C94 supported the ANSI X3.l3I-I986 SCSI-1 protocol, implementing the eight bit parallel SCSI bus and eight bit host data bus transfers. The 53CF94 and 53CF96 added SCSI-2 support and implemented larger transfer sizes per SCSI transaction. Additionally, the 53CF96 could be interfaced to a single-ended bus or a high voltage differential (HVD) bus, the latter which supported long bus cables. All members of the 53C94/96 type support both eight and 16 bit host bus transfers via programmed input/output (PIO) and DMA. QLogic FAS216 and Emulex ESP100 chips are a drop-in replacement for the NCR 53C94. The 53C90A and 53C(F)94/96 were also produced under license by Advanced Micro Devices (AMD). 
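To make the index-notation convention above concrete, here is a small Python/NumPy sketch: a single vector equation such as c = a + b is shorthand for one scalar equation per index, c_i = a_i + b_i (the article's indices run from 1 to n, whereas Python arrays are 0-based).

import numpy as np

a = np.array([2.0, 0.5, -1.0])
b = np.array([1.0, 3.0, 4.0])

# componentwise form: one equation for each value of the index i
c = np.empty_like(a)
for i in range(len(a)):
    c[i] = a[i] + b[i]

# the whole-vector form expresses the same set of equations at once
assert np.allclose(c, a + b)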
A list of systems which included the 53C9x controller includes: 53C94 Sun Microsystems SPARCstations and the SPARCclassic DEC 3000 AXP DECstations and the PMAZ-A TURBOchannel card VAXstation model 60, 4000-m90 MIPS Magnum Power Macintosh G3; often used as a secondary SCSI controller with MESH (Macintosh Enhanced SCSI Hardware) as the primary MacroSystem's Evolution family for Amiga (FAS216) 53C96 Macintosh Quadra 650 Macintosh LC475/Quadra 605/Performa 475 Macintosh Quadra 900 and 950 See also NCR 5380" https://en.wikipedia.org/wiki/Network%20behavior%20anomaly%20detection,"Network behavior anomaly detection (NBAD) is a security technique that provides network security threat detection. It is a complementary technology to systems that detect security threats based on packet signatures. NBAD is the continuous monitoring of a network for unusual events or trends. NBAD is an integral part of network behavior analysis (NBA), which offers security in addition to that provided by traditional anti-threat applications such as firewalls, intrusion detection systems, antivirus software and spyware-detection software. Description Most security monitoring systems utilize a signature-based approach to detect threats. They generally monitor packets on the network and look for patterns in the packets which match their database of signatures representing pre-identified known security threats. NBAD-based systems are particularly helpful in detecting security threat vectors in two instances where signature-based systems cannot: (i) new zero-day attacks, and (ii) when the threat traffic is encrypted such as the command and control channel for certain Botnets. An NBAD program tracks critical network characteristics in real time and generates an alarm if a strange event or trend is detected that could indicate the presence of a threat. Large-scale examples of such characteristics include traffic volume, bandwidth use and protocol use. NBAD solutions can also monitor the behavior of individual network subscribers. In order for NBAD to be optimally effective, a baseline of normal network or user behavior must be established over a period of time. Once certain parameters have been defined as normal, any departure from one or more of them is flagged as anomalous. NBAD technology/techniques are applied in a number of network and security monitoring domains including: (i) Log analysis (ii) Packet inspection systems (iii) Flow monitoring systems and (iv) Route analytics. NBAD has also been described as outlier detection, novelty detection, deviation detecti" https://en.wikipedia.org/wiki/Biomagnetics,"Biomagnetics is a field of biotechnology. It has actively been researched since at least 2004. Although the majority of structures found in living organisms are diamagnetic, the magnetic field itself, as well as magnetic nanoparticles, microstructures and paramagnetic molecules can influence specific physiological functions of organisms under certain conditions. The effect of magnetic fields on biosystems is a topic of research that falls under the biomagnetic umbrella, as well as the construction of magnetic structures or systems that are either biocompatible, biodegradable or biomimetic. Magnetic nanoparticles and magnetic microparticles are known to interact with certain prokaryotes and certain eukaryotes. Magnetic nanoparticles under the influence of magnetic and electromagnetic fields were shown to modulate redox reactions for the inhibition or the promotion of animal tumor growth. 
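A minimal sketch of the baseline-then-flag idea behind NBAD, not any particular product's algorithm: learn the typical level and spread of one tracked characteristic (here, an invented bytes-per-minute series) during a quiet period, then flag values that depart from the baseline by more than a chosen number of standard deviations.

import statistics

def build_baseline(samples):
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    mean, stdev = baseline
    return stdev > 0 and abs(value - mean) > threshold * stdev

history = [980, 1020, 1005, 995, 1010, 990, 1001, 1015]   # bytes/min during a normal period
baseline = build_baseline(history)
print(is_anomalous(1003, baseline))    # False: within normal variation
print(is_anomalous(250000, baseline))  # True: large departure from the baseline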
The mechanism underlying nanomagnetic modulation involves the convergence of magnetochemical and magneto-mechanical reactions. History In 2014, biotechnicians at Monash University noticed that ""the efficiency of delivery of DNA vaccines is often relatively low compared to protein vaccines"" and on this basis suggested the use of superparamagnetic iron oxide nanoparticles (SPIONs) to deliver genetic materials via magnetofection because it increases the efficiency of drug delivery. As of 2021, interactions have been studied between low cost iron oxide nanoparticles (IONPs) and the main groups of biomolecules: proteins, lipids, nucleic acids and carbohydrates. There have been suggestions of magnetically-targeted drug delivery systems, in particular for the cationic peptide lasioglossin. Around May 2021 rumours abounded that certain mRNA biotech delivery systems were magnetically active. Prompted by state-owned broadcaster France24, :fr:Julien Bobroff who specialises in magnetism and teaches at the University of Paris-Saclay debunked the claims of Covid-19 conspiracy theorists using" https://en.wikipedia.org/wiki/Stochastic,"Stochastic (; ) refers to the property of being well-described by a random probability distribution. Although stochasticity and randomness are distinct in that the former refers to a modeling approach and the latter refers to phenomena themselves, these two terms are often used synonymously. Furthermore, in probability theory, the formal concept of a stochastic process is also referred to as a random process. Stochasticity is used in many different fields, including the natural sciences such as biology, chemistry, ecology, neuroscience, and physics, as well as technology and engineering fields such as image processing, signal processing, information theory, computer science, cryptography, and telecommunications. It is also used in finance, due to seemingly random changes in financial markets as well as in medicine, linguistics, music, media, colour theory, botany, manufacturing, and geomorphology. Etymology The word stochastic in English was originally used as an adjective with the definition ""pertaining to conjecturing"", and stemming from a Greek word meaning ""to aim at a mark, guess"", and the Oxford English Dictionary gives the year 1662 as its earliest occurrence. In his work on probability Ars Conjectandi, originally published in Latin in 1713, Jakob Bernoulli used the phrase ""Ars Conjectandi sive Stochastice"", which has been translated to ""the art of conjecturing or stochastics"". This phrase was used, with reference to Bernoulli, by Ladislaus Bortkiewicz, who in 1917 wrote in German the word Stochastik with a sense meaning random. The term stochastic process first appeared in English in a 1934 paper by Joseph Doob. For the term and a specific mathematical definition, Doob cited another 1934 paper, where the term stochastischer Prozeß was used in German by Aleksandr Khinchin, though the German term had been used earlier in 1931 by Andrey Kolmogorov. Mathematics In the early 1930s, Aleksandr Khinchin gave the first mathematical definition of a stochas" https://en.wikipedia.org/wiki/SAMV%20%28algorithm%29,"SAMV (iterative sparse asymptotic minimum variance) is a parameter-free superresolution algorithm for the linear inverse problem in spectral estimation, direction-of-arrival (DOA) estimation and tomographic reconstruction with applications in signal processing, medical imaging and remote sensing. 
The name was coined in 2013 to emphasize its basis on the asymptotically minimum variance (AMV) criterion. It is a powerful tool for the recovery of both the amplitude and frequency characteristics of multiple highly correlated sources in challenging environments (e.g., limited number of snapshots and low signal-to-noise ratio). Applications include synthetic-aperture radar, computed tomography scan, and magnetic resonance imaging (MRI). Definition The formulation of the SAMV algorithm is given as an inverse problem in the context of DOA estimation. Suppose an -element uniform linear array (ULA) receive narrow band signals emitted from sources located at locations , respectively. The sensors in the ULA accumulates snapshots over a specific time. The dimensional snapshot vectors are where is the steering matrix, contains the source waveforms, and is the noise term. Assume that , where is the Dirac delta and it equals to 1 only if and 0 otherwise. Also assume that and are independent, and that , where . Let be a vector containing the unknown signal powers and noise variance, . The covariance matrix of that contains all information about is This covariance matrix can be traditionally estimated by the sample covariance matrix where . After applying the vectorization operator to the matrix , the obtained vector is linearly related to the unknown parameter as , where , , , , and let where is the Kronecker product. SAMV algorithm To estimate the parameter from the statistic , we develop a series of iterative SAMV approaches based on the asymptotically minimum variance criterion. From, the covariance matrix of an arbitrary consistent estimator o" https://en.wikipedia.org/wiki/Minimum-Pairs%20Protocol,"The minimum-pairs (or MP) is an active measurement protocol to estimate in real-time the smaller of the forward and reverse one-way network delays (OWDs). It is designed to work in hostile environments, where a set of three network nodes can estimate an upper-bound OWD between themselves and a fourth untrusted node. All four nodes must cooperate, though honest cooperation from the fourth node is not required. The objective is to conduct such estimates without involving the untrusted nodes in clock synchronization, and in a manner more accurate than simply half the round-trip time (RTT). The MP protocol can be used in delay-sensitive applications (such as placing content delivery network replicas) or for secure Internet geolocation. Methodology The MP protocol requires the three trusted network nodes to synchronize their clocks, and securely have access to their public keys, which could be achieved through a closed public key infrastructure (PKI) system. The untrusted node need not follow suit because it is not assumed to cooperate honestly. To estimate an upper bound to the smaller of the forward and reverse OWD between node A and the untrusted node X (see figure for notation), X first establishes an application-layer connection to all three nodes. This could be done transparently over the browser using, e.g., WebSockets. The three nodes then take turns in exchanging digitally-signed timestamps. Assuming node A begins, it sends a signed timestamp to X. Node X forwards that message to the other two nodes. When the message is received, its receiving time is recorded. The receiving node then verifies the signature, and calculates the time it took the message to traverse the network from its originator to the recipient passing by the untrusted node. 
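Relating to the SAMV formulation above, the following NumPy sketch computes the sample covariance matrix of the snapshots and its vectorized form, which is the statistic the iterative AMV-based updates work from. The snapshot data here are random placeholders; building the steering matrix and the actual SAMV iterations is omitted.

import numpy as np

M, N = 8, 64                               # sensors, snapshots (illustrative values)
rng = np.random.default_rng(0)
Y = rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))   # snapshot vectors y(1..N)

R_N = (Y @ Y.conj().T) / N                 # sample covariance: (1/N) * sum of y y^H
r_N = R_N.flatten(order="F")               # vectorized covariance, vec(R_N)
print(R_N.shape, r_N.shape)                # (8, 8) (64,)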
This is done by subtracting the timestamp in the message from the receiving time. Node B then repeats the process, followed by node C. After all three nodes have taken turns, they end-up with six delay estimates corresp" https://en.wikipedia.org/wiki/Contributors%20to%20the%20mathematical%20background%20for%20general%20relativity,"This is a list of contributors to the mathematical background for general relativity. For ease of readability, the contributions (in brackets) are unlinked but can be found in the contributors' article. B Luigi Bianchi (Bianchi identities, Bianchi groups, differential geometry) C Élie Cartan (curvature computation, early extensions of GTR, Cartan geometries) Elwin Bruno Christoffel (connections, tensor calculus, Riemannian geometry) Clarissa-Marie Claudel (Geometry of photon surfaces) D Tevian Dray (The Geometry of General Relativity) E Luther P. Eisenhart (semi-Riemannian geometries) Frank B. Estabrook (Wahlquist-Estabrook approach to solving PDEs; see also parent list) Leonhard Euler (Euler-Lagrange equation, from which the geodesic equation is obtained) G Carl Friedrich Gauss (curvature, theory of surfaces, intrinsic vs. extrinsic) K Martin Kruskal (inverse scattering transform; see also parent list) L Joseph Louis Lagrange (Lagrangian mechanics, Euler-Lagrange equation) Tullio Levi-Civita (tensor calculus, Riemannian geometry; see also parent list) André Lichnerowicz (tensor calculus, transformation groups) M Alexander Macfarlane (space analysis and Algebra of Physics) Jerrold E. Marsden (linear stability) N Isaac Newton (Newton's identities for characteristic of Einstein tensor) R Gregorio Ricci-Curbastro (Ricci tensor, differential geometry) Georg Bernhard Riemann (Riemannian geometry, Riemann curvature tensor) S Richard Schoen (Yamabe problem; see also parent list) Corrado Segre (Segre classification) W Hugo D. Wahlquist (Wahlquist-Estabrook algorithm; see also parent list) Hermann Weyl (Weyl tensor, gauge theories; see also parent list) Eugene P. Wigner (stabilizers in Lorentz group) See also Contributors to differential geometry Contributors to general relativity Physics-related lists" https://en.wikipedia.org/wiki/Transport%20of%20structure,"In mathematics, particularly in universal algebra and category theory, transport of structure refers to the process whereby a mathematical object acquires a new structure and its canonical definitions, as a result of being isomorphic to (or otherwise identified with) another object with a pre-existing structure. Definitions by transport of structure are regarded as canonical. Since mathematical structures are often defined in reference to an underlying space, many examples of transport of structure involve spaces and mappings between them. For example, if and are vector spaces with being an inner product on , such that there is an isomorphism from to , then one can define an inner product on by the following rule: Although the equation makes sense even when is not an isomorphism, it only defines an inner product on when is, since otherwise it will cause to be degenerate. The idea is that allows one to consider and as ""the same"" vector space, and by following this analogy, then one can transport an inner product from one space to the other. A more elaborated example comes from differential topology, in which the notion of smooth manifold is involved: if is such a manifold, and if is any topological space which is homeomorphic to , then one can consider as a smooth manifold as well. 
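For the minimum-pairs exchange described above, the per-message quantity each trusted node records is simply its receive time minus the signed timestamp, giving the delay from the originator to the recipient via the untrusted node. A tiny sketch (signature verification and the clock synchronization among the three trusted nodes are assumed to have already happened):

def relayed_delay(sent_timestamp, receive_time):
    # delay originator -> X -> recipient, in the trusted nodes' common clock
    return receive_time - sent_timestamp

# e.g. A's signed timestamp, forwarded by the untrusted node X, arrives at B:
d_A_X_B = relayed_delay(sent_timestamp=100.000, receive_time=100.087)   # seconds
print(round(d_A_X_B, 3))   # 0.087 = OWD(A->X) + OWD(X->B)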
That is, given a homeomorphism , one can define coordinate charts on by ""pulling back"" coordinate charts on through . Recall that a coordinate chart on is an open set together with an injective map for some natural number ; to get such a chart on , one uses the following rules: and . Furthermore, it is required that the charts cover (the fact that the transported charts cover follows immediately from the fact that is a bijection). Since is a smooth manifold, if U and V, with their maps and , are two charts on , then the composition, the ""transition map"" (a self-map of ) is smooth. To verify this for the transported charts on , notice that , and there" https://en.wikipedia.org/wiki/Directory-based%20cache%20coherence,"In computer engineering, directory-based cache coherence is a type of cache coherence mechanism, where directories are used to manage caches in place of bus snooping. Bus snooping methods scale poorly due to the use of broadcasting. These methods can be used to target both performance and scalability of directory systems. Full bit vector format In the full bit vector format, for each possible cache line in memory, a bit is used to track whether every individual processor has that line stored in its cache. The full bit vector format is the simplest structure to implement, but the least scalable. The SGI Origin 2000 uses a combination of full bit vector and coarse bit vector depending on the number of processors. Each directory entry must have 1 bit stored per processor per cache line, along with bits for tracking the state of the directory. This leads to the total size required being (number of processors)×number of cache lines, having a storage overhead ratio of (number of processors)/(cache block size×8). It can be observed that directory overhead scales linearly with the number of processors. While this may be fine for a small number of processors, when implemented in large systems the size requirements for the directory becomes excessive. For example, with a block size of 32 bytes and 1024 processors, the storage overhead ratio becomes 1024/(32×8) = 400%. Coarse bit vector format The coarse bit vector format has a similar structure to the full bit vector format, though rather than tracking one bit per processor for every cache line, the directory groups several processors into nodes, storing whether a cache line is stored in a node rather than a line. This improves size requirements at the expense of bus traffic saving (processors per node)×(total lines) bits of space. Thus the ratio overhead is the same, just replacing number of processors with number of processor groups. When a bus request is made for a cache line that one processor in the group has, th" https://en.wikipedia.org/wiki/Gelfond%27s%20constant,"In mathematics, Gelfond's constant, named after Aleksandr Gelfond, is , that is, raised to the power . Like both and , this constant is a transcendental number. This was first established by Gelfond and may now be considered as an application of the Gelfond–Schneider theorem, noting that where is the imaginary unit. Since is algebraic but not rational, is transcendental. The constant was mentioned in Hilbert's seventh problem. A related constant is , known as the Gelfond–Schneider constant. The related value  +  is also irrational. Numerical value The decimal expansion of Gelfond's constant begins ... Construction If one defines and for , then the sequence converges rapidly to . 
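The full-bit-vector overhead figures quoted above follow from simple arithmetic: one presence bit per processor per cache line, so the overhead ratio is the processor count divided by the cache block size in bits. A one-line check:

def full_bit_vector_overhead(num_processors, block_size_bytes):
    # directory bits per cache line divided by data bits per cache line
    return num_processors / (block_size_bytes * 8)

print(full_bit_vector_overhead(1024, 32))   # 4.0, i.e. the 400% overhead quoted above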
Continued fraction expansion This is based on the digits for the simple continued fraction: As given by the integer sequence A058287. Geometric property The volume of the n-dimensional ball (or n-ball), is given by where is its radius, and is the gamma function. Any even-dimensional ball has volume and, summing up all the unit-ball () volumes of even-dimension gives Similar or related constants Ramanujan's constant This is known as Ramanujan's constant. It is an application of Heegner numbers, where 163 is the Heegner number in question. Similar to , is very close to an integer: ... This number was discovered in 1859 by the mathematician Charles Hermite. In a 1975 April Fool article in Scientific American magazine, ""Mathematical Games"" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it—hence its name. The coincidental closeness, to within 0.000 000 000 000 75 of the number is explained by complex multiplication and the q-expansion of the j-invariant, specifically: and, where is the error term, which explains why is 0.000 000 000 000 75 below . (For more detail on this proof, consult the article on Heegner numbers.) The number The decimal " https://en.wikipedia.org/wiki/Thermal%20energy,"The term ""thermal energy"" is used loosely in various contexts in physics and engineering, generally related to the kinetic energy of vibrating and colliding atoms in a substance. It can refer to several different well-defined physical concepts. These include the internal energy or enthalpy of a body of matter and radiation; heat, defined as a type of energy transfer (as is thermodynamic work); and the characteristic energy of a degree of freedom, , in a system that is described in terms of its microscopic particulate constituents (where denotes temperature and denotes the Boltzmann constant). Relation to heat and internal energy In thermodynamics, heat is energy transferred to or from a thermodynamic system by mechanisms other than thermodynamic work or transfer of matter, such as conduction, radiation, and friction. Heat refers to a quantity transferred between systems, not to a property of any one system, or ""contained"" within it. On the other hand, internal energy and enthalpy are properties of a single system. Heat and work depend on the way in which an energy transfer occurred, whereas internal energy is a property of the state of a system and can thus be understood without knowing how the energy got there. Macroscopic thermal energy The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that ""the converted chemical potential energy has simply become internal energy"". It is, however, convenient and more lucid to say that ""the chemical potential energy has been converted into thermal energy"". Such thermal energy may be viewed as a contributor to internal energy or to enthalpy, thinking of the contribution as a process without thinking that the contributed energy has become an identifiable component o" https://en.wikipedia.org/wiki/List%20of%20transforms,"This is a list of transforms in mathematics. 
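The geometric property of Gelfond's constant mentioned above can be checked numerically: the unit ball in dimension 2n has volume pi**n / n!, and summing these volumes over all even dimensions gives e**pi.

import math

total = sum(math.pi**n / math.factorial(n) for n in range(60))
print(total)              # ~23.1406926...
print(math.exp(math.pi))  # Gelfond's constant, the same value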
Integral transforms Abel transform Bateman transform Fourier transform Short-time Fourier transform Gabor transform Hankel transform Hartley transform Hermite transform Hilbert transform Hilbert–Schmidt integral operator Jacobi transform Laguerre transform Laplace transform Inverse Laplace transform Two-sided Laplace transform Inverse two-sided Laplace transform Laplace–Carson transform Laplace–Stieltjes transform Legendre transform Linear canonical transform Mellin transform Inverse Mellin transform Poisson–Mellin–Newton cycle N-transform Radon transform Stieltjes transformation Sumudu transform Wavelet transform (integral) Weierstrass transform Hussein Jassim Transform Discrete transforms Binomial transform Discrete Fourier transform, DFT Fast Fourier transform, a popular implementation of the DFT Discrete cosine transform Modified discrete cosine transform Discrete Hartley transform Discrete sine transform Discrete wavelet transform Hadamard transform (or, Walsh–Hadamard transform) Fast wavelet transform Hankel transform, the determinant of the Hankel matrix Discrete Chebyshev transform Equivalent, up to a diagonal scaling, to a discrete cosine transform Finite Legendre transform Spherical Harmonic transform Irrational base discrete weighted transform Number-theoretic transform Stirling transform Discrete-time transforms These transforms have a continuous frequency domain: Discrete-time Fourier transform Z-transform Data-dependent transforms Karhunen–Loève transform Other transforms Affine transformation (computer graphics) Bäcklund transform Bilinear transform Box–Muller transform Burrows–Wheeler transform (data compression) Chirplet transform Distance transform Fractal transform Gelfand transform Hadamard transform Hough transform (digital image processing) Inverse scattering transform Legendre transformation Möbius transformation Perspective transform (computer graphics) Sequence transform Watershed transform (" https://en.wikipedia.org/wiki/Current%E2%80%93voltage%20characteristic,"A current–voltage characteristic or I–V curve (current–voltage curve) is a relationship, typically represented as a chart or graph, between the electric current through a circuit, device, or material, and the corresponding voltage, or potential difference, across it. In electronics In electronics, the relationship between the direct current (DC) through an electronic device and the DC voltage across its terminals is called a current–voltage characteristic of the device. Electronic engineers use these charts to determine basic parameters of a device and to model its behavior in an electrical circuit. These characteristics are also known as I–V curves, referring to the standard symbols for current and voltage. In electronic components with more than two terminals, such as vacuum tubes and transistors, the current–voltage relationship at one pair of terminals may depend on the current or voltage on a third terminal. This is usually displayed on a more complex current–voltage graph with multiple curves, each one representing the current–voltage relationship at a different value of current or voltage on the third terminal. For example the diagram at right shows a family of I–V curves for a MOSFET as a function of drain voltage with overvoltage (VGS − Vth) as a parameter. 
The simplest I–V curve is that of a resistor, which according to Ohm's law exhibits a linear relationship between the applied voltage and the resulting electric current; the current is proportional to the voltage, so the I–V curve is a straight line through the origin with positive slope. The reciprocal of the slope is equal to the resistance. The I–V curve of an electrical component can be measured with an instrument called a curve tracer. The transconductance and Early voltage of a transistor are examples of parameters traditionally measured from the device's I–V curve. Types of I–V curves The shape of an electrical component's characteristic curve reveals much about its operating properti" https://en.wikipedia.org/wiki/Superellipsoid,"In mathematics, a superellipsoid (or super-ellipsoid) is a solid whose horizontal sections are superellipses (Lamé curves) with the same squareness parameter , and whose vertical sections through the center are superellipses with the squareness parameter . It is a generalization of an ellipsoid, which is a special case when . Superellipsoids as computer graphics primitives were popularized by Alan H. Barr (who used the name ""superquadrics"" to refer to both superellipsoids and supertoroids). In modern computer vision and robotics literatures, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely utilized shape among all the superquadrics. Superellipsoids have an rich shape vocabulary, including cuboids, cylinders, ellipsoids, octahedra and their intermediates. It becomes an important geometric primitive widely used in computer vision, robotics, and physical simulation. The main advantage of describing objects and envirionment with superellipsoids is its conciseness and expressiveness in shape. Furthermore, a closed-form expression of the Minkowski sum between two superellipsoids is available. This makes it a desirable geometric primitive for robot grasping, collision detection, and motion planning. Useful tools and algorithms for superquadric visualization, sampling, and recovery are open-sourced here. Special cases A handful of notable mathematical figures can arise as special cases of superellipsoids given the correct set of values, which are depicted in the above graphic: Cylinder Sphere Steinmetz solid Bicone Regular octahedron Cube, as a limiting case where the exponents tend to infinity Piet Hein's supereggs are also special cases of superellipsoids. Formulas Basic (normalized) superellipsoid The basic superellipsoid is defined by the implicit function The parameters and are positive real numbers that control the squareness of the shape. The surface of the superellipsoid is de" https://en.wikipedia.org/wiki/Antenna%20effect,"The antenna effect, more formally plasma induced gate oxide damage, is an effect that can potentially cause yield and reliability problems during the manufacture of MOS integrated circuits. Factories (fabs) normally supply antenna rules, which are rules that must be obeyed to avoid this problem. A violation of such rules is called an antenna violation. The word antenna is something of a misnomer in this context—the problem is really the collection of charge, not the normal meaning of antenna, which is a device for converting electromagnetic fields to/from electrical currents. Occasionally the phrase antenna effect is used in this context, but this is less common since there are many effects, and the phrase does not make clear which is meant. 
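For the resistor case just described, the resistance can be read off a measured I–V curve as the reciprocal of its slope. A small sketch with made-up measurements:

import numpy as np

V = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # volts
I = np.array([0.0, 0.5, 1.0, 1.5, 2.0])   # amperes (behaves like a 2-ohm resistor)

slope = np.polyfit(V, I, 1)[0]            # least-squares slope dI/dV
print(1.0 / slope)                        # ~2.0 ohms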
Figure 1(a) shows a side view of a typical net in an integrated circuit. Each net will include at least one driver, which must contain a source or drain diffusion (in newer technology implantation is used), and at least one receiver, which will consist of a gate electrode over a thin gate dielectric (see Figure 2 for a detailed view of a MOS transistor). Since the gate dielectric is so thin, only a few molecules thick, a big worry is breakdown of this layer. This can happen if the net somehow acquires a voltage somewhat higher than the normal operating voltage of the chip. (Historically, the gate dielectric has been silicon dioxide, so most of the literature refers to gate oxide damage or gate oxide breakdown. As of 2007, some manufacturers are replacing this oxide with various high-κ dielectric materials which may or may not be oxides, but the effect is still the same.) Once the chip is fabricated, this cannot happen, since every net has at least some source/drain implant connected to it. The source/drain implant forms a diode, which breaks down at a lower voltage than the oxide (either forward diode conduction, or reverse breakdown), and does so non-destructively. This protects the gate oxide. However, during th" https://en.wikipedia.org/wiki/Keepalive,"A keepalive (KA) is a message sent by one device to another to check that the link between the two is operating, or to prevent the link from being broken. Description Once a TCP connection has been established, that connection is defined to be valid until one side closes it. Once the connection has entered the connected state, it will remain connected indefinitely. But, in reality, the connection will not last indefinitely. Many firewall or NAT systems will close a connection if there has been no activity in some time period. The Keep Alive signal can be used to trick intermediate hosts to not close the connection due to inactivity. It is also possible that one host is no longer listening (e.g. application or system crash). In this case, the connection is closed, but no FIN was ever sent. In this case, a KeepAlive packet can be used to interrogate a connection to check if it is still intact. A keepalive signal is often sent at predefined intervals, and plays an important role on the Internet. After a signal is sent, if no reply is received, the link is assumed to be down and future data will be routed via another path until the link is up again. A keepalive signal can also be used to indicate to Internet infrastructure that the connection should be preserved. Without a keepalive signal, intermediate NAT-enabled routers can drop the connection after timeout. Since the only purpose is to find links that do not work or to indicate connections that should be preserved, keepalive messages tend to be short and not take much bandwidth. However, their precise format and usage terms depend on the communication protocol. TCP keepalive Transmission Control Protocol (TCP) keepalives are an optional feature, and if included must default to off. The keepalive packet contains no data. In an Ethernet network, this results in frames of minimum size (64 bytes). There are three parameters related to keepalive: Keepalive time is the duration between two keepalive transmissions in" https://en.wikipedia.org/wiki/Gradient%20pattern%20analysis,"Gradient pattern analysis (GPA) is a geometric computing method for characterizing geometrical bilateral symmetry breaking of an ensemble of symmetric vectors regularly distributed in a square lattice. 
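As a sketch of how an application can ask for TCP keepalives, the portable part is enabling SO_KEEPALIVE; the idle-time, interval and probe-count knobs shown below use Linux-specific option names and may be absent or spelled differently on other platforms.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
if hasattr(socket, "TCP_KEEPIDLE"):                               # Linux-only names
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)     # keepalive time, seconds
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)    # keepalive interval, seconds
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)       # probes before the link is declared dead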
Usually, the lattice of vectors represent the first-order gradient of a scalar field, here an M x M square amplitude matrix. An important property of the gradient representation is the following: A given M x M matrix where all amplitudes are different results in an M x M gradient lattice containing asymmetric vectors. As each vector can be characterized by its norm and phase, variations in the amplitudes can modify the respective gradient pattern. The original concept of GPA was introduced by Rosa, Sharma and Valdivia in 1999. Usually GPA is applied for spatio-temporal pattern analysis in physics and environmental sciences operating on time-series and digital images. Calculation By connecting all vectors using a Delaunay triangulation criterion it is possible to characterize gradient asymmetries computing the so-called gradient asymmetry coefficient, that has been defined as: , where is the total number of asymmetric vectors, is the number of Delaunay connections among them and the property is valid for any gradient square lattice. As the asymmetry coefficient is very sensitive to small changes in the phase and modulus of each gradient vector, it can distinguish complex variability patterns (bilateral asymmetry) even when they are very similar but consist of a very fine structural difference. Note that, unlike most of the statistical tools, the GPA does not rely on the statistical properties of the data but depends solely on the local symmetry properties of the correspondent gradient pattern. For a complex extended pattern (matrix of amplitudes of a spatio-temporal pattern) composed by locally asymmetric fluctuations, is nonzero, defining different classes of irregular fluctuation patterns (1/f noise, chaotic, reactive-diffusive, etc.). Besides o" https://en.wikipedia.org/wiki/Notation%20for%20differentiation,"In differential calculus, there is no single uniform notation for differentiation. Instead, various notations for the derivative of a function or variable have been proposed by various mathematicians. The usefulness of each notation varies with the context, and it is sometimes advantageous to use more than one notation in a given context. The most common notations for differentiation (and its opposite operation, the antidifferentiation or indefinite integration) are listed below. Leibniz's notation The original notation employed by Gottfried Leibniz is used throughout mathematics. It is particularly common when the equation is regarded as a functional relationship between dependent and independent variables and . Leibniz's notation makes this relationship explicit by writing the derivative as Furthermore, the derivative of at is therefore written Higher derivatives are written as This is a suggestive notational device that comes from formal manipulations of symbols, as in, The value of the derivative of at a point may be expressed in two ways using Leibniz's notation: . Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially helpful when considering partial derivatives. It also makes the chain rule easy to remember and recognize: Leibniz's notation for differentiation does not require assigning a meaning to symbols such as or (known as differentials) on their own, and some authors do not attempt to assign these symbols meaning. Leibniz treated these symbols as infinitesimals. Later authors have assigned them other meanings, such as infinitesimals in non-standard analysis, or exterior derivatives. 
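A rough sketch of the first steps of gradient pattern analysis: take the first-order gradient of an M x M amplitude matrix and build a Delaunay triangulation over the lattice sites. The removal of symmetric vector pairs is omitted here, and the coefficient form (C - V) / V used at the end is an assumption rather than a definition taken from the text.

import numpy as np
from scipy.spatial import Delaunay

M = 8
rng = np.random.default_rng(1)
amplitudes = rng.random((M, M))                  # M x M amplitude matrix
grad_rows, grad_cols = np.gradient(amplitudes)   # gradient lattice: one vector per site

points = np.array([(i, j) for i in range(M) for j in range(M)])
tri = Delaunay(points)
edges = set()
for simplex in tri.simplices:                    # count unique triangulation edges
    for a in range(3):
        for b in range(a + 1, 3):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))

V = points.shape[0]                              # here every vector is kept, i.e. treated as asymmetric
C = len(edges)
print(V, C, (C - V) / V)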
Commonly, is left undefined or equated with , while is assigned a meaning in terms of , via the equation which may also be written, e.g. (see below). Such equations give rise to the terminology found in some texts wherein the derivative is referred to as the ""differential coefficie" https://en.wikipedia.org/wiki/Sinc%20function,"In mathematics, physics and engineering, the sinc function, denoted by , has two forms, normalized and unnormalized. In mathematics, the historical unnormalized sinc function is defined for by Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x). In digital signal processing and information theory, the normalized sinc function is commonly defined for by In either case, the value at is defined to be the limiting value for all real (the limit can be proven using the squeeze theorem). The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of ). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of . The normalized sinc function is the Fourier transform of the rectangular function with no scaling. It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal. The only difference between the two definitions is in the scaling of the independent variable (the axis) by a factor of . In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1. The sinc function is then analytic everywhere and hence an entire function. The function has also been called the cardinal sine or sine cardinal function. The term sinc was introduced by Philip M. Woodward in his 1952 article ""Information theory and inverse probability in telecommunication"", in which he said that the function ""occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own"", and his 1953 book Probability and Information Theory, with Applications to Radar. The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's Formula) for the zeroth-order spherical Bessel function of the first " https://en.wikipedia.org/wiki/List%20of%20types%20of%20numbers,"Numbers can be classified according to how they are represented or according to the properties that they have. Main types Natural numbers (): The counting numbers {1, 2, 3, ...} are commonly called natural numbers; however, other definitions include 0, so that the non-negative integers {0, 1, 2, 3, ...} are also called natural numbers. Natural numbers including 0 are also sometimes called whole numbers. Integers (): Positive and negative counting numbers, as well as zero: {..., −3, −2, −1, 0, 1, 2, 3, ...}. Rational numbers (): Numbers that can be expressed as a ratio of an integer to a non-zero integer. All integers are rational, but there are rational numbers that are not integers, such as . Real numbers (): Numbers that correspond to points along a line. They can be positive, negative, or zero. All rational numbers are real, but the converse is not true. Irrational numbers: Real numbers that are not rational. Imaginary numbers: Numbers that equal the product of a real number and the square root of −1. The number 0 is both real and purely imaginary. 
Complex numbers (): Includes real numbers, imaginary numbers, and sums and differences of real and imaginary numbers. Hypercomplex numbers include various number-system extensions: quaternions (), octonions (), and other less common variants. -adic numbers: Various number systems constructed using limits of rational numbers, according to notions of ""limit"" different from the one used to construct the real numbers. Number representations Decimal: The standard Hindu–Arabic numeral system using base ten. Binary: The base-two numeral system used by computers, with digits 0 and 1. Ternary: The base-three numeral system with 0, 1, and 2 as digits. Quaternary: The base-four numeral system with 0, 1, 2, and 3 as digits. Hexadecimal: Base 16, widely used by computer system designers and programmers, as it provides a more human-friendly representation of binary-coded values. Octal: Base 8, occasionally used b" https://en.wikipedia.org/wiki/Embedded%20HTTP%20server,"An embedded HTTP server is an HTTP server used in an embedded system. The HTTP server is usually implemented as a software component of an application (embedded) system that controls and/or monitors a machine with mechanical and/or electrical parts. The HTTP server implements the HTTP protocol in order to allow communications with one or more local or remote users using a browser. The aim is to let users to interact with information provided by the embedded system (user interface, data monitoring, data logging, data configuration, etc.) via network, without using traditional peripherals required for local user interfaces (display, keyboard, etc.). In some cases the functionalities provided via HTTP server allow also program-to-program communications, e.g. to retrieve data logged about the monitored machine, etc. Usages Examples of usage within an embedded application might be (e.g.): to provide a thin client interface for a traditional application; to provide indexing, reporting, and debugging tools during the development stage; to implement a protocol for the distribution and acquisition of information to be displayed in the regular interface — possibly a web service, and possibly using XML as the data format; to develop a web application. Advantages There are a few advantages to using HTTP to perform the above: HTTP is a well studied cross-platform protocol and there are mature implementations freely available; HTTP is seldom blocked by firewalls and intranet routers; HTTP clients (e.g. web browsers) are readily available with all modern computers; there is a growing tendency of using embedded HTTP servers in applications that parallels the rising trends of home-networking and ubiquitous computing. Typical requirements Natural limitations of the platforms where an embedded HTTP server runs contribute to the list of the non-functional requirements of the embedded, or more precise, embeddable HTTP server. Some of these requirements are the followin" https://en.wikipedia.org/wiki/Reliable%20Data%20Transfer,"Reliable Data Transfer is a topic in computer networking concerning the transfer of data across unreliable channels. Unreliability is one of the drawbacks of packet switched networks such as the modern internet, as packet loss can occur for a variety of reasons, and delivery of packets is not guaranteed to happen in the order that the packets were sent. 
Therefore, in order to create long-term data streams over the internet, techniques have been developed to provide reliability, which are generally implemented in the Transport layer of the internet protocol suite. In instructional materials, the topic is often presented in the form of theoretical example protocols which are themselves referred to as ""RDT"", in order to introduce students to the problems and solutions encountered in Transport layer protocols such as the Transmission Control Protocol. These sources often describe a pseudo-API and include Finite-state machine diagrams to illustrate how such a protocol might be implemented, as well as a version history. These details are generally consistent between sources, yet are often left uncited, so the origin of this theoretical RDT protocol is unclear. Example Versions Sources that describe an example RDT protocol often provide a ""version history"" to illustrate the development of modern Transport layer techniques, generally resembling the below: Reliable Data Transfer 1.0 With Reliable Data Transfer 1.0, the data can only be transferred via a reliable data channel. It is the most simple of the Reliable Data Transfer protocols in terms of algorithm processing. Reliable Data Transfer 2.0 Reliable Data Transfer 2.0 supports reliable data transfer in unreliable data channels. It uses a checksum to detect errors. The receiver sends acknowledgement message if the message is complete, and if the message is incomplete, it sends a negative acknowledgement message and requests the data again. Reliable Data Transfer 2.1 Reliable Data Transfer 2.1 also suppor" https://en.wikipedia.org/wiki/Overlap%E2%80%93add%20method,"In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal with a finite impulse response (FIR) filter : where for m outside the region . This article uses common abstract notations, such as or in which it is understood that the functions should be thought of in their totality, rather than at specific instants (see Convolution#Notation). The concept is to divide the problem into multiple convolutions of h[n] with short segments of : where L is an arbitrary segment length. Then: and y[n] can be written as a sum of short convolutions: where the linear convolution is zero outside the region . And for any parameter it is equivalent to the N-point circular convolution of with in the .  The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: where: DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and is customarily chosen such that is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. Pseudocode The following is a pseudocode of the algorithm: (Overlap-add algorithm for linear convolution) h = FIR_filter M = length(h) Nx = length(x) N = 8 × 2^ceiling( log2(M) ) (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.) 
step_size = N - (M-1) (L in the text above) H = DFT(h, N) position = 0 y(1 : Nx + M-1) = 0 while position + step_size ≤ Nx do y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H) position = position + step_size end Efficiency considerations When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces output samples, so the number of compl" https://en.wikipedia.org/wiki/E-SCREEN,"E-SCREEN is a cell proliferation assay based on the enhanced proliferation of human breast cancer cells (MCF-7) in the presence of estrogen active substances. The E-SCREEN test is a tool to easily and rapidly assess estrogenic activity of suspected xenoestrogens (singly or in combination). This bioassay measures estrogen-induced increase of the number of human breast cancer cell, which is biologically equivalent to the increase of mitotic activity in tissues of the genital tract. It was originally developed by Soto et al. and was included in the first version of the OECD Conceptual Framework for Testing and Assessment of Endocrine Disrupters published in 2012. However, due to failed validation, it was not included in the updated version of the framework published in 2018. The E-SCREEN test The E-SCREEN cell proliferation assay is performed with the human MCF-7 breast cancer cell line, an established estrogenic cell line that endogenously expresses ERα. Human MCF-7 are cultivated in Dulbecco’s modified Eagle’s medium (DMEM) with fetal bovine serum (FBS) and phenol red as buffer tracer (culture medium), at 37 °C, in an atmosphere of 5% CO₂ and 95% air under saturating humidity. To accomplish the E-SCREEN assay the cells are trypsinized and plated in well culture plates. Cells are allowed to attach for 24 h, and the 4 seeding medium is then removed and replaced with the experimental culture medium (phenol red free DMEM with charcoal dextran treated fetal bovine serum -steroid-free-). For assaying suspected estrogen active substances, a range of concentrations of the test compound is added to the experimental medium. In each experiment, the cells are exposed to a dilution series of 17β-estradiol (0.1 pM–1000 pM) for providing a positive control (standard dose-response curve), and treated only with hormone-free medium as a negative control. The bioassay ends on day 6 (late exponential phase) by removing the media from the wells and fixing the cells with trichloroace" https://en.wikipedia.org/wiki/Operating%20system%20Wi-Fi%20support,"Operating system Wi-Fi support is the support in the operating system for Wi-Fi and usually consists of two pieces: driver level support, and configuration and management support. Driver support is usually provided by multiple manufacturers of the chip set hardware or end manufacturers. Also available are Unix clones such as Linux, sometimes through open source projects. Configuration and management support consists of software to enumerate, join, and check the status of available Wi-Fi networks. This also includes support for various encryption methods. These systems are often provided by the operating system backed by a standard driver model. In most cases, drivers emulate an Ethernet device and use the configuration and management utilities built into the operating system. 
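The overlap–add pseudocode above translates directly into NumPy; the version below uses 0-based slicing and, unlike the pseudocode's loop condition, also processes the final partial block, so the result matches a direct linear convolution.

import numpy as np

def overlap_add(x, h):
    M = len(h)
    Nx = len(x)
    N = 8 * 2 ** int(np.ceil(np.log2(M)))    # FFT size, as chosen in the pseudocode
    step = N - (M - 1)                       # L, the segment length
    H = np.fft.fft(h, N)
    y = np.zeros(Nx + M - 1)
    for pos in range(0, Nx, step):
        seg = x[pos:pos + step]              # the last block may be shorter than step
        yt = np.real(np.fft.ifft(np.fft.fft(seg, N) * H))
        end = min(pos + N, len(y))
        y[pos:end] += yt[:end - pos]
    return y

x = np.random.default_rng(2).standard_normal(1000)
h = np.array([0.25, 0.5, 0.25])
assert np.allclose(overlap_add(x, h), np.convolve(x, h))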
In cases where built-in configuration and management support is non-existent or inadequate, hardware manufacturers may include their own software to handle the respective tasks. Microsoft Windows Microsoft Windows has comprehensive driver-level support for Wi-Fi, the quality of which depends on the hardware manufacturer. Hardware manufacturers almost always ship Windows drivers with their products. Windows ships with very few Wi-Fi drivers and depends on the original equipment manufacturers (OEMs) and device manufacturers to make sure users get drivers. Configuration and management depend on the version of Windows. Earlier versions of Windows, such as 98, ME and 2000 do not have built-in configuration and management support and must depend on software provided by the manufacturer Microsoft Windows XP has built-in configuration and management support. The original shipping version of Windows XP included rudimentary support which was dramatically improved in Service Pack 2. Support for WPA2 and some other security protocols require updates from Microsoft. Many hardware manufacturers include their own software and require the user to disable Windows’ built-in Wi-Fi support. Windows Vista, Win" https://en.wikipedia.org/wiki/Food%20rheology,"Food rheology is the study of the rheological properties of food, that is, the consistency and flow of food under tightly specified conditions. The consistency, degree of fluidity, and other mechanical properties are important in understanding how long food can be stored, how stable it will remain, and in determining food texture. The acceptability of food products to the consumer is often determined by food texture, such as how spreadable and creamy a food product is. Food rheology is important in quality control during food manufacture and processing. Food rheology terms have been noted since ancient times. In ancient Egypt, bakers judged the consistency of dough by rolling it in their hands. Overview There is a large body of literature on food rheology because the study of food rheology entails unique factors beyond an understanding of the basic rheological dynamics of the flow and deformation of matter. Food can be classified according to its rheological state, such as a solid, gel, liquid, emulsion with associated rheological behaviors, and its rheological properties can be measured. These properties will affect the design of food processing plants, as well as shelf life and other important factors, including sensory properties that appeal to consumers. Because foods are structurally complex, often a mixture of fluid and solids with varying properties within a single mass, the study of food rheology is more complicated than study in fields such as the rheology of polymers. However, food rheology is something we experience every day with our perception of food texture (see below) and basic concepts of food rheology well apply to polymers physics, oil flow etc. For this reason, examples of food rheology are didactically useful to explain the dynamics of other materials we are less familiar with. Ketchup is commonly used an example of Bingham fluid and its flow behavior can be compared to that of a polymer melt. Psychorheology Psychorheology is the " https://en.wikipedia.org/wiki/Chamber%20of%20Computer%20Engineers%20of%20Turkey,"Chamber of Computer Engineers of Turkey (, abbreviated BMO) was founded on 2 June 2012. Formerly, the computer engineers in Turkey were the members of Chamber of Electrical Engineers of Turkey. 
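The ketchup example above refers to a Bingham fluid: a material that does not flow at all until the applied shear stress exceeds a yield value, and flows with a constant plastic viscosity beyond it. The sketch below encodes that idealized flow curve; the yield stress and viscosity numbers are made-up illustrative values, not measured ketchup data.

def bingham_shear_rate(stress_pa: float, tau_y: float = 15.0, mu_p: float = 0.5) -> float:
    # Idealized Bingham plastic: no flow below the yield stress tau_y,
    # Newtonian-like flow with plastic viscosity mu_p above it.
    if stress_pa <= tau_y:
        return 0.0                       # below yield the material sits still
    return (stress_pa - tau_y) / mu_p    # shear rate in 1/s once yielding

for stress in (5.0, 15.0, 20.0, 50.0):   # applied shear stress in Pa
    print(stress, bingham_shear_rate(stress))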
But, on 9 March 2011 computer engineers decided to form their own chamber. The regulatory board announced that each year about 6,500 new CS engineers (including related undergraduate studies) graduate from the universities. During the general assembly of Union of chambers of Turkish engineers and architects (UCTEA) on the 2 June 2012, the request was approved. The chamber has become the 24th member of the union - UCTEA." https://en.wikipedia.org/wiki/Spurious%20tone,"In electronics (radio in particular), a spurious tone (also known as an interfering tone, a continuous tone or a spur) denotes a tone in an electronic circuit which interferes with a signal and is often masked underneath that signal. Spurious tones are any tones other than a fundamental tone or its harmonics. They also include tones generated within the back-to-back connected transmit and receive terminal or channel units, when the fundamental is applied to the transmit terminal or channel-unit input." https://en.wikipedia.org/wiki/Tasmanian%20coniferous%20shrubbery,"The vegetation in Tasmania's alpine environments is predominately woody and shrub-like. One vegetation type is coniferous shrubbery, characterised by the gymnosperm species Microcachrys tetragona, Pherosphaera hookeriana, Podocarpus lawrencei, and Diselma archeri. Distribution of these species is relevant with abiotic factors including edaphic conditions and fire frequency, and increasingly, the threat of climate change towards species survival exists. Conservation and management of coniferous shrubbery are necessary considering that the paleoendemic species, Microcachrys, Pherosphaera and Diselma, have persisted in western Tasmanian environments for millions of years. Distribution These coniferous shrub species are restricted to subalpine and alpine heathlands in western Tasmania, with the exception of Podocarpus lawrencei which lives on the mainland. The alpine environments where these conifers occur have high levels of conifer endemism, which is an ecologically habitat for coniferous shrub species. Coniferous shrub species can be observed in Mount Field National Park in Tasmania's south west along the Tarn Shelf. All species can be observed in rocky environments with shallow soil above . Ecology Both the alpine environment and the harsh maritime climate have the pressures and limitations of wind exposure and ice abrasion for the woody and shrub-like habit of coniferous shrubbery. The lack of protective snow cover on Tasmanian mountains means that vegetation must be mechanically resistant to these elements, hence an ecologically habitat for coniferous shrub species. This is contrasted to alps of mainland Australia or New Zealand, where the presence of prolonged snow lie lead to the development of a grassland-herbland vegetation community. Low productivity of the environment is indicated through the slow growth habit of the conifers, and the effects of fire are detrimental to the species. As well as this, physiological drought intolerance in conifers could in" https://en.wikipedia.org/wiki/List%20of%20algebraic%20coding%20theory%20topics,"This is a list of algebraic coding theory topics. Algebraic coding theory" https://en.wikipedia.org/wiki/List%20of%20unusual%20units%20of%20measurement,"An unusual unit of measurement is a unit of measurement that does not form part of a coherent system of measurement, especially because its exact quantity may not be well known or because it may be an inconvenient multiple or fraction of a base unit. 
Many of the unusual units of measurements listed here are colloquial measurements, units devised to compare a measurement to common and familiar objects. Length Hammer unit Valve's Source game engine uses the Hammer unit as its base unit of length. This unit refers to Source's official map creation software, Hammer. The exact definition varies from game to game, but a Hammer unit is usually defined as a sixteenth of a foot (16 Hammer units = 1 foot). This means that 1 Hammer unit is equal to exactly . Rack unit One rack unit (U) is and is used to measure rack-mountable audiovisual, computing and industrial equipment. Rack units are typically denoted without a space between the number of units and the 'U'. Thus, a 4U server enclosure (case) is high, or more practically, built to occupy a vertical space seven inches high, with sufficient clearance to allow movement of adjacent hardware. Hand The hand is a non-SI unit of length equal to exactly . It is normally used to measure the height of horses in some English-speaking countries, including Australia, Canada, Ireland, the United Kingdom, and the United States. It is customary when measuring in hands to use a point to indicate inches (quarter-hands) and not tenths of a hand. For example, 15.1 hands normally means 15 hands, 1 inch (5 ft 1 in), rather than 15 hands. Light-nanosecond The light-nanosecond is defined as exactly 29.9792458 cm. It was popularized in information technology as a unit of distance by Grace Hopper as the distance which a photon could travel in one billionth of a second (roughly 30 cm or one foot): ""The speed of light is one foot per nanosecond."" Metric feet A metric foot, defined as ), has been used occasionally in the UK but has never b" https://en.wikipedia.org/wiki/List%20of%20algebras,"This is a list of possibly nonassociative algebras. An algebra is a module, wherein you can also multiply two module elements. (The multiplication in the module is compatible with multiplication-by-scalars from the base ring). *-algebra Akivis algebra Algebra for a monad Albert algebra Alternative algebra Azumaya algebra Banach algebra Birman–Wenzl algebra Boolean algebra Borcherds algebra Brauer algebra C*-algebra Central simple algebra Clifford algebra Cluster algebra Dendriform algebra Differential graded algebra Differential graded Lie algebra Exterior algebra F-algebra Filtered algebra Flexible algebra Freudenthal algebra Genetic algebra Geometric algebra Gerstenhaber algebra Graded algebra Griess algebra Group algebra Group algebra of a locally compact group Hall algebra Hecke algebra of a locally compact group Heyting algebra Hopf algebra Hurwitz algebra Hypercomplex algebra Incidence algebra Iwahori–Hecke algebra Jordan algebra Kac–Moody algebra Kleene algebra Leibniz algebra Lie algebra Lie superalgebra Malcev algebra Matrix algebra Non-associative algebra Octonion algebra Pre-Lie algebra Poisson algebra Process algebra Quadratic algebra Quaternion algebra Rees algebra Relation algebra Relational algebra Schur algebra Semisimple algebra Separable algebra Shuffle algebra Sigma-algebra Simple algebra Structurable algebra Supercommutative algebra Symmetric algebra Tensor algebra Universal enveloping algebra Vertex operator algebra von Neumann algebra Weyl algebra Zinbiel algebra This is a list of fields of algebra. 
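Since several of the units above are defined by exact conversion factors, the arithmetic is easy to make concrete. The snippet below uses only figures stated above (16 Hammer units to the foot, a light-nanosecond of exactly 29.9792458 cm) together with the exact definition of the international foot; the sample quantities being converted are arbitrary.

FOOT_M = 0.3048                       # metres per international foot (exact)
HAMMER_M = FOOT_M / 16                # 16 Hammer units = 1 foot, as stated above
LIGHT_NS_M = 0.299792458              # one light-nanosecond, exactly 29.9792458 cm

def hammer_to_metres(units: float) -> float:
    return units * HAMMER_M

print(f"1 Hammer unit      = {HAMMER_M * 100:.3f} cm")
print(f"128 Hammer units   = {hammer_to_metres(128):.4f} m")     # e.g. an 8-foot wall in a map editor
print(f"1 light-nanosecond = {LIGHT_NS_M * 100:.7f} cm")
print(f"feet per light-ns  = {LIGHT_NS_M / FOOT_M:.4f}")         # ~0.98, Hopper's 'one foot per nanosecond'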
Linear algebra Homological algebra Universal algebra Algebras" https://en.wikipedia.org/wiki/EKV%20MOSFET%20model,"The EKV Mosfet model is a mathematical model of metal-oxide semiconductor field-effect transistors (MOSFET) which is intended for circuit simulation and analog circuit design. It was developed by Christian C. Enz, François Krummenacher and Eric A. Vittoz (hence the initials EKV) around 1995 based in part on work they had done in the 1980s. Unlike simpler models like the Quadratic Model, the EKV Model is accurate even when the MOSFET is operating in the subthreshold region (e.g. when Vbulk=Vsource then the MOSFET is subthreshold when Vgate-source < VThreshold). In addition, it models many of the specialized effects seen in submicrometre CMOS IC design. See also Transistor models MOSFET Ngspice SPICE" https://en.wikipedia.org/wiki/Octacube%20%28sculpture%29,"The Octacube is a large, stainless steel sculpture displayed in the mathematics department of Pennsylvania State University in State College, PA. The sculpture represents a mathematical object called the 24-cell or ""octacube"". Because a real 24-cell is four-dimensional, the artwork is actually a projection into the three-dimensional world. Octacube has very high intrinsic symmetry, which matches features in chemistry (molecular symmetry) and physics (quantum field theory). The sculpture was designed by , a mathematics professor at Pennsylvania State University. The university's machine shop spent over a year completing the intricate metal-work. Octacube was funded by an alumna in memory of her husband, Kermit Anderson, who died in the September 11 attacks. Artwork The Octacube's metal skeleton measures about in all three dimensions. It is a complex arrangement of unpainted, tri-cornered flanges. The base is a high granite block, with some engraving. The artwork was designed by Adrian Ocneanu, a Penn State mathematics professor. He supplied the specifications for the sculpture's 96 triangular pieces of stainless steel and for their assembly. Fabrication was done by Penn State's machine shop, led by Jerry Anderson. The work took over a year, involving bending and welding as well as cutting. Discussing the construction, Ocneanu said: It's very hard to make 12 steel sheets meet perfectly—and conformally—at each of the 23 vertices, with no trace of welding left. The people who built it are really world-class experts and perfectionists—artists in steel. Because of the reflective metal at different angles, the appearance is pleasantly strange. In some cases, the mirror-like surfaces create an illusion of transparency by showing reflections from unexpected sides of the structure. The sculpture's mathematician creator commented: When I saw the actual sculpture, I had quite a shock. I never imagined the play of light on the surfaces. There are subtle optical effects t" https://en.wikipedia.org/wiki/Svenska%20Spindlar,"The book or (Swedish and Latin, respectively, for ""Swedish spiders"") is one of the major works of the Swedish arachnologist and entomologist Carl Alexander Clerck and was first published in Stockholm in the year 1757. It was the first comprehensive book on the spiders of Sweden and one of the first regional monographs of a group of animals worldwide. The full title of the work is – , (""Swedish spiders into their main genera separated, and as sixty and a few particular species described and with illuminated figures illustrated"") and included 162 pages of text (eight pages were unpaginated) and six colour plates. 
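Because the Octacube is described above as a three-dimensional projection of the four-dimensional 24-cell, its underlying geometry is easy to generate. A standard coordinate choice puts the 24 vertices at all permutations of (±1, ±1, 0, 0); with that choice the polytope has 96 edges and 96 triangular faces, echoing the 96 triangular steel pieces mentioned above. The "drop the last coordinate" projection at the end is only a simple illustrative choice and is not claimed to be the projection used for the sculpture.

from itertools import combinations, product

# the 24 vertices of the 24-cell: every permutation of (+/-1, +/-1, 0, 0)
vertices = set()
for i, j in combinations(range(4), 2):           # choose which two coordinates are nonzero
    for si, sj in product((-1, 1), repeat=2):
        v = [0, 0, 0, 0]
        v[i], v[j] = si, sj
        vertices.add(tuple(v))
assert len(vertices) == 24

# edges join vertex pairs at the minimal squared distance, which is 2 for these coordinates
edges = [(a, b) for a, b in combinations(sorted(vertices), 2)
         if sum((p - q) ** 2 for p, q in zip(a, b)) == 2]
print(len(edges), "edges")                        # 96

# a naive orthogonal projection into 3-D: simply drop the fourth coordinate
projected = [v[:3] for v in sorted(vertices)]
print(projected[:4])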
It was published in Swedish, with a Latin translation printed in a slightly smaller font below the Swedish text. Clerck described in detail 67 species of Swedish spiders, and for the first time in a zoological work consistently applied binomial nomenclature as proposed by Carl Linnaeus. Linnaeus had originally invented this system for botanical names in his 1753 work Species Plantarum, and presented it again in 1758 in the 10th edition of Systema Naturae for more than 4,000 animal species. Svenska Spindlar is the only pre-Linnaean source to be recognised as a taxonomic authority for such names. Presentation of the spiders Clerck explained in the last (9th of the 2nd part) chapter of his work that in contrast to previous authors he used the term ""spider"" in the strict sense, for animals possessing eight eyes and separated prosoma and opisthosoma, and that his concept of this group of animals did not include Opiliones (because they had two eyes and a broadly joined prosoma and opisthosoma) and other groups of arachnids. For all spiders Clerck used a single generic name (Araneus), to which was added a specific name which consisted of only one word. Each species was presented in the Swedish text with their Latin scientific names, followed by detailed information containing the exact dates when he had found the animals, and a detailed description of eyes," https://en.wikipedia.org/wiki/Remote%20infrastructure%20management,"Remote infrastructure management (RIM) is the remote management of information technology (IT) infrastructure. This can include the management of computer hardware and software, such as workstations (desktops, laptops, notebooks, etc.), servers, network devices, storage devices, IT security devices, etc. of a company. Major sub-services included in RIM are: Service desk / Help desk Proactive monitoring of server and network devices Workstation management Server Management Storage management Application support IT security Management and database management. See also Remote monitoring and management Network monitoring Network performance management Systems management Comparison of network monitoring systems" https://en.wikipedia.org/wiki/Adaptive%20beamformer,"An adaptive beamformer is a system that performs adaptive spatial signal processing with an array of transmitters or receivers. The signals are combined in a manner which increases the signal strength to/from a chosen direction. Signals to/from other directions are combined in a benign or destructive manner, resulting in degradation of the signal to/from the undesired direction. This technique is used in both radio frequency and acoustic arrays, and provides for directional sensitivity without physically moving an array of receivers or transmitters. Motivation/Applications Adaptive beamforming was initially developed in the 1960s for the military applications of sonar and radar. There exist several modern applications for beamforming, one of the most visible applications being commercial wireless networks such as LTE. Initial applications of adaptive beamforming were largely focused in radar and electronic countermeasures to mitigate the effect of signal jamming in the military domain. Radar uses can be seen here Phased array radar. Although not strictly adaptive, these radar applications make use of either static or dynamic (scanning) beamforming. 
Commercial wireless standards such as 3GPP Long Term Evolution (LTE (telecommunication)) and IEEE 802.16 WiMax rely on adaptive beamforming to enable essential services within each standard. Basic Concepts An adaptive beamforming system relies on principles of wave propagation and phase relationships. See Constructive interference, and Beamforming. Using the principles of superimposing waves, a higher or lower amplitude wave is created (e.g. by delaying and weighting the signal received). The adaptive beamforming system dynamically adapts in order to maximize or minimize a desired parameter, such as Signal-to-interference-plus-noise ratio. Adaptive Beamforming Schemes There are several ways to approach the beamforming design, the first approach was implemented by maximizing the signal to noise ratio (SNR) by Appleb" https://en.wikipedia.org/wiki/List%20of%20complex%20analysis%20topics,"Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematics that investigates functions of complex numbers. It is useful in many branches of mathematics, including number theory and applied mathematics; as well as in physics, including hydrodynamics, thermodynamics, and electrical engineering. Overview Complex numbers Complex plane Complex functions Complex derivative Holomorphic functions Harmonic functions Elementary functions Polynomial functions Exponential functions Trigonometric functions Hyperbolic functions Logarithmic functions Inverse trigonometric functions Inverse hyperbolic functions Residue theory Isometries in the complex plane Related fields Number theory Hydrodynamics Thermodynamics Electrical engineering Local theory Holomorphic function Antiholomorphic function Cauchy–Riemann equations Conformal mapping Conformal welding Power series Radius of convergence Laurent series Meromorphic function Entire function Pole (complex analysis) Zero (complex analysis) Residue (complex analysis) Isolated singularity Removable singularity Essential singularity Branch point Principal branch Weierstrass–Casorati theorem Landau's constants Holomorphic functions are analytic Schwarzian derivative Analytic capacity Disk algebra Growth and distribution of values Ahlfors theory Bieberbach conjecture Borel–Carathéodory theorem Corona theorem Hadamard three-circle theorem Hardy space Hardy's theorem Maximum modulus principle Nevanlinna theory Paley–Wiener theorem Progressive function Value distribution theory of holomorphic functions Contour integrals Line integral Cauchy's integral theorem Cauchy's integral formula Residue theorem Liouville's theorem (complex analysis) Examples of contour integration Fundamental theorem of algebra Simply connected Winding number Principle of the argument Rouché's theorem Bromwich integral Morera's theorem Mellin transform Kramers–Kronig relation, a. k. a. " https://en.wikipedia.org/wiki/Coherent%20turbulent%20structure,"Turbulent flows are complex multi-scale and chaotic motions that need to be classified into more elementary components, referred to coherent turbulent structures. Such a structure must have temporal coherence, i.e. it must persist in its form for long enough periods that the methods of time-averaged statistics can be applied. Coherent structures are typically studied on very large scales, but can be broken down into more elementary structures with coherent properties of their own, such examples include hairpin vortices. 
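To make the delay-and-weight idea in the beamforming passage concrete, below is a minimal delay-and-sum sketch for a uniform linear array in NumPy. The propagation speed, carrier frequency, element count, and steering angle are arbitrary assumptions for a narrowband acoustic example; a genuinely adaptive beamformer would additionally update the weights from received data (for instance to maximize SNR), which is not shown here.

import numpy as np

c = 343.0                 # propagation speed in m/s (acoustic example)
f = 2000.0                # narrowband signal frequency in Hz
d = c / f / 2             # half-wavelength element spacing
n_elements = 8

def steering_vector(theta_rad):
    # relative phase of a plane wave arriving from angle theta across the array elements
    k = 2 * np.pi * f / c
    positions = np.arange(n_elements) * d
    return np.exp(1j * k * positions * np.sin(theta_rad))

# fixed delay-and-sum weights steered toward +20 degrees
w = steering_vector(np.deg2rad(20)) / n_elements

# beam pattern: how strongly each arrival direction passes through the weighted sum
angles = np.deg2rad(np.linspace(-90, 90, 361))
response = np.array([abs(np.conj(w) @ steering_vector(a)) for a in angles])
print(f"main lobe at {np.rad2deg(angles[np.argmax(response)]):.1f} degrees")   # about 20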
Hairpins and coherent structures have been studied and noticed in data since the 1930s, and have been since cited in thousands of scientific papers and reviews. Flow visualization experiments, using smoke and dye as tracers, have been historically used to simulate coherent structures and verify theories, but computer models are now the dominant tools widely used in the field to verify and understand the formation, evolution, and other properties of such structures. The kinematic properties of these motions include size, scale, shape, vorticity, energy, and the dynamic properties govern the way coherent structures grow, evolve, and decay. Most coherent structures are studied only within the confined forms of simple wall turbulence, which approximates the coherence to be steady, fully developed, incompressible, and with a zero pressure gradient in the boundary layer. Although such approximations depart from reality, they contain sufficient parameters needed to understand turbulent coherent structures in a highly conceptual degree. History and Discovery The presence of organized motions and structures in turbulent shear flows was apparent for a long time, and has been additionally implied by mixing length hypothesis even before the concept was explicitly stated in literature. There were also early correlation data found by measuring jets and turbulent wakes, particularly by Corrsin and Roshko. Hama's hydrogen bubble technique, which used flow visu" https://en.wikipedia.org/wiki/Rigidity%20%28mathematics%29,"In mathematics, a rigid collection C of mathematical objects (for instance sets or functions) is one in which every c ∈ C is uniquely determined by less information about c than one would expect. The above statement does not define a mathematical property; instead, it describes in what sense the adjective ""rigid"" is typically used in mathematics, by mathematicians. Examples Some examples include: Harmonic functions on the unit disk are rigid in the sense that they are uniquely determined by their boundary values. Holomorphic functions are determined by the set of all derivatives at a single point. A smooth function from the real line to the complex plane is not, in general, determined by all its derivatives at a single point, but it is if we require additionally that it be possible to extend the function to one on a neighbourhood of the real line in the complex plane. The Schwarz lemma is an example of such a rigidity theorem. By the fundamental theorem of algebra, polynomials in C are rigid in the sense that any polynomial is completely determined by its values on any infinite set, say N, or the unit disk. By the previous example, a polynomial is also determined within the set of holomorphic functions by the finite set of its non-zero derivatives at any single point. Linear maps L(X, Y) between vector spaces X, Y are rigid in the sense that any L ∈ L(X, Y) is completely determined by its values on any set of basis vectors of X. Mostow's rigidity theorem, which states that the geometric structure of negatively curved manifolds is determined by their topological structure. A well-ordered set is rigid in the sense that the only (order-preserving) automorphism on it is the identity function. Consequently, an isomorphism between two given well-ordered sets will be unique. Cauchy's theorem on geometry of convex polytopes states that a convex polytope is uniquely determined by the geometry of its faces and combinatorial adjacency rules. 
Alexandrov's uniqueness theor" https://en.wikipedia.org/wiki/Universal%20parabolic%20constant,"The universal parabolic constant is a mathematical constant. It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted P. In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point F and the directrix is the line L.) The value of P is . The circle and parabola are unique among conic sections in that they have a universal constant. The analogous ratios for ellipses and hyperbolas depend on their eccentricities. This means that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not. Derivation Take as the equation of the parabola. The focal parameter is and the semilatus rectum is . Properties P is a transcendental number. Proof. Suppose that P is algebraic. Then must also be algebraic. However, by the Lindemann–Weierstrass theorem, would be transcendental, which is not the case. Hence P is transcendental. Since P is transcendental, it is also irrational. Applications The average distance from a point randomly selected in the unit square to its center is Proof. There is also an interesting geometrical reason why this constant appears in unit squares. The average distance between a center of a unit square and a point on the square's boundary is . If we uniformly sample every point on the perimeter of the square, take line segments (drawn from the center) corresponding to each point, add them together by joining each line segment next to the other, scaling them down, the curve obtained is a parabola." 
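The numerical value of P can be recovered directly from the definition given above: the arc length of the segment cut off by the latus rectum, divided by the focal parameter. The sketch below integrates the arc length numerically for the parabola x^2 = 4fy and confirms that the ratio is independent of f; the closed form √2 + ln(1 + √2) ≈ 2.29559 is included only as a cross-check of the numerics.

import numpy as np

def parabolic_constant(f=1.0, n=200001):
    # parabola x^2 = 4*f*y with focus (0, f); the latus rectum spans x in [-2f, 2f]
    x = np.linspace(-2 * f, 2 * f, n)
    integrand = np.sqrt(1 + (x / (2 * f)) ** 2)                        # arc-length element ds/dx
    arc = np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(x))    # trapezoid rule
    return arc / (2 * f)                                               # focal parameter = twice the focal length

P_closed = np.sqrt(2) + np.log(1 + np.sqrt(2))
for f in (0.5, 1.0, 3.0):
    print(f, parabolic_constant(f))          # ~2.29559 for every parabola
assert abs(parabolic_constant() - P_closed) < 1e-8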
https://en.wikipedia.org/wiki/List%20of%20mathematical%20properties%20of%20points,"In mathematics, the following appear: Algebraic point Associated point Base point Closed point Divisor point Embedded point Extreme point Fermat point Fixed point Focal point Geometric point Hyperbolic equilibrium point Ideal point Inflection point Integral point Isolated point Generic point Heegner point Lattice hole, Lattice point Lebesgue point Midpoint Napoleon points Non-singular point Normal point Parshin point Periodic point Pinch point Point (geometry) Point source Rational point Recurrent point Regular point, Regular singular point Saddle point Semistable point Separable point Simple point Singular point of a curve Singular point of an algebraic variety Smooth point Special point Stable point Torsion point Vertex (curve) Weierstrass point Calculus Critical point (aka stationary point), any value v in the domain of a differentiable function of any real or complex variable, such that the derivative of v is 0 or undefined Geometry Antipodal point, the point diametrically opposite to another point on a sphere, such that a line drawn between them passes through the centre of the sphere and forms a true diameter Conjugate point, any point that can almost be joined to another by a 1-parameter family of geodesics (e.g., the antipodes of a sphere, which are linkable by any meridian Vertex (geometry), a point that describes a corner or intersection of a geometric shape Apex (geometry), the vertex that is in some sense the highest of the figure to which it belongs Topology Adherent point, a point x in topological space X such that every open set containing x contains at least one point of a subset A Condensation point, any point p of a subset S of a topological space, such that every open neighbourhood of p contains uncountably many points of S Limit point, a set S in a topological space X is a point x (which is in X, but not necessarily in S) that can be approximated by points of S, since every neighbourhood o" https://en.wikipedia.org/wiki/Fastest%20animals,"This is a list of the fastest animals in the world, by types of animal. Fastest organism The peregrine falcon is the fastest bird, and the fastest member of the animal kingdom, with a diving speed of over . The fastest land animal is the cheetah. Among the fastest animals in the sea is the black marlin, with uncertain and conflicting reports of recorded speeds. When drawing comparisons between different classes of animals, an alternative unit is sometimes used for organisms: body length per second. On this basis the 'fastest' organism on earth, relative to its body length, is the Southern Californian mite, Paratarsotomus macropalpis, which has a speed of 322 body lengths per second. The equivalent speed for a human, running as fast as this mite, would be , or approximately Mach 1.7. The speed of the P. macropalpis is far in excess of the previous record holder, the Australian tiger beetle Cicindela eburneola, which is the fastest insect in the world relative to body size, with a recorded speed of , or 171 body lengths per second. The cheetah, the fastest land mammal, scores at only 16 body lengths per second, while Anna's hummingbird has the highest known length-specific velocity attained by any vertebrate. Invertebrates Fish Due to physical constraints, fish may be incapable of exceeding swim speeds of 36 km/h (22 mph). 
The larger reported figures below are therefore highly questionable: Amphibians Reptiles Birds Mammals See also Speed records Notes" https://en.wikipedia.org/wiki/Low-voltage%20detect,"A low-voltage detect (LVD) is a microcontroller or microprocessor peripheral that generates a reset signal when the Vcc supply voltage falls below Vref. Sometimes is combined with power-on reset (POR) and then it is called POR-LVD. See also Power-on reset Embedded systems" https://en.wikipedia.org/wiki/Data%20engineering,"Data engineering refers to the building of systems to enable the collection and usage of data. This data is usually used to enable subsequent analysis and data science; which often involves machine learning. Making the data usable usually involves substantial compute and storage, as well as data processing History Around the 1970s/1980s the term information engineering methodology (IEM) was created to describe database design and the use of software for data analysis and processing. These techniques were intended to be used by database administrators (DBAs) and by systems analysts based upon an understanding of the operational processing needs of organizations for the 1980s. In particular, these techniques were meant to help bridge the gap between strategic business planning and information systems. A key early contributor (often called the ""father"" of information engineering methodology) was the Australian Clive Finkelstein, who wrote several articles about it between 1976 and 1980, and also co-authored an influential Savant Institute report on it with James Martin. Over the next few years, Finkelstein continued work in a more business-driven direction, which was intended to address a rapidly changing business environment; Martin continued work in a more data processing-driven direction. From 1983 to 1987, Charles M. Richter, guided by Clive Finkelstein, played a significant role in revamping IEM as well as helping to design the IEM software product (user data), which helped automate IEM. In the early 2000s, the data and data tooling was generally held by the information technology (IT) teams in most companies. Other teams then used data for their work (e.g. reporting), and there was usually little overlap in data skillset between these parts of the business. In the early 2010s, with the rise of the internet, the massive increase in data volumes, velocity, and variety led to the term big data to describe the data itself, and data-driven tech companies like Face" https://en.wikipedia.org/wiki/Macroscopic%20scale,"The macroscopic scale is the length scale on which objects or phenomena are large enough to be visible with the naked eye, without magnifying optical instruments. It is the opposite of microscopic. Overview When applied to physical phenomena and bodies, the macroscopic scale describes things as a person can directly perceive them, without the aid of magnifying devices. This is in contrast to observations (microscopy) or theories (microphysics, statistical physics) of objects of geometric lengths smaller than perhaps some hundreds of micrometers. A macroscopic view of a ball is just that: a ball. A microscopic view could reveal a thick round skin seemingly composed entirely of puckered cracks and fissures (as viewed through a microscope) or, further down in scale, a collection of molecules in a roughly spherical shape (as viewed through an electron microscope). An example of a physical theory that takes a deliberately macroscopic viewpoint is thermodynamics. 
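The body-length-per-second comparison above is easy to sanity-check with a few lines of arithmetic. The assumed human body length of 1.8 m and the sea-level speed of sound of about 343 m/s are inputs chosen for this sketch; the 322 lengths per second for the mite and 16 for the cheetah are the figures quoted in the text.

MITE_LENGTHS_PER_S = 322        # Paratarsotomus macropalpis, from the text above
CHEETAH_LENGTHS_PER_S = 16      # cheetah, from the text above
HUMAN_LENGTH_M = 1.8            # assumed body length for the scaled-up comparison
SPEED_OF_SOUND_M_S = 343.0      # assumed sea-level value for the Mach comparison

human_equiv = MITE_LENGTHS_PER_S * HUMAN_LENGTH_M        # m/s a human would need to run
print(f"{human_equiv:.0f} m/s = {human_equiv * 3.6:.0f} km/h = Mach {human_equiv / SPEED_OF_SOUND_M_S:.2f}")
# roughly 580 m/s, about 2,090 km/h, i.e. approximately Mach 1.7, consistent with the claim above

cheetah_equiv = CHEETAH_LENGTHS_PER_S * HUMAN_LENGTH_M
print(f"cheetah-rate human: {cheetah_equiv * 3.6:.0f} km/h")   # only about 100 km/h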
An example of a topic that extends from macroscopic to microscopic viewpoints is histology. Not quite by the distinction between macroscopic and microscopic, classical and quantum mechanics are theories that are distinguished in a subtly different way. At first glance one might think of them as differing simply in the size of objects that they describe, classical objects being considered far larger as to mass and geometrical size than quantal objects, for example a football versus a fine particle of dust. More refined consideration distinguishes classical and quantum mechanics on the basis that classical mechanics fails to recognize that matter and energy cannot be divided into infinitesimally small parcels, so that ultimately fine division reveals irreducibly granular features. The criterion of fineness is whether or not the interactions are described in terms of Planck's constant. Roughly speaking, classical mechanics considers particles in mathematically idealized terms even as fine as geometrical points wi" https://en.wikipedia.org/wiki/Carleman%20matrix,"In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains. Definition The Carleman matrix of an infinitely differentiable function is defined as: so as to satisfy the (Taylor series) equation: For instance, the computation of by simply amounts to the dot-product of row 1 of with a column vector . The entries of in the next row give the 2nd power of : and also, in order to have the zeroth power of in , we adopt the row 0 containing zeros everywhere except the first position, such that Thus, the dot product of with the column vector yields the column vector Generalization A generalization of the Carleman matrix of a function can be defined around any point, such as: or where . This allows the matrix power to be related as: General Series Another way to generalize it even further is think about a general series in the following way: Let be a series approximation of , where is a basis of the space containing We can define , therefore we have , now we can prove that , if we assume that is also a basis for and . Let be such that where . Now Comparing the first and the last term, and from being a base for , and it follows that Examples If we set we have the Carleman matrix If is an orthonormal basis for a Hilbert Space with a defined inner product , we can set and will be . If we have the analogous for Fourier Series, namely Properties Carleman matrices satisfy the fundamental relationship which makes the Carleman matrix M a (direct) representation of . Here the term denotes the composition of functions . Other properties include: , where is an iterated function and , where is the inverse function (if the Carleman matrix is invertib" https://en.wikipedia.org/wiki/Itakura%E2%80%93Saito%20distance,"The Itakura–Saito distance (or Itakura–Saito divergence) is a measure of the difference between an original spectrum and an approximation of that spectrum. Although it is not a perceptual measure, it is intended to reflect perceptual (dis)similarity. It was proposed by Fumitada Itakura and Shuzo Saito in the 1960s while they were with NTT. 
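The key Carleman-matrix property described above, that composing functions corresponds to multiplying their matrices, can be verified on truncated matrices. The sketch below uses sympy to read off the coefficient of x**k in f(x)**m; the example polynomials f and g are arbitrary, and g(0) = 0 is chosen so that truncation does not spoil the equality on the retained block.

import sympy as sp

x = sp.symbols("x")
N = 6                              # truncate to rows and columns 0 .. N-1

def carleman(f, size=N):
    # truncated Carleman matrix: entry (m, k) is the coefficient of x**k in f(x)**m
    rows = []
    for m in range(size):
        coeffs = sp.Poly(sp.expand(f ** m), x).as_dict()   # {(exponent,): coefficient}
        rows.append([coeffs.get((k,), 0) for k in range(size)])
    return sp.Matrix(rows)

f = 2 * x + x ** 2
g = x - 3 * x ** 3                 # g(0) = 0 keeps the truncated identity exact

lhs = carleman(f.subs(x, g))       # Carleman matrix of the composition f(g(x))
rhs = carleman(f) * carleman(g)    # product of the individual Carleman matrices
assert lhs == rhs
print("M[f o g] equals M[f] * M[g] on the truncated block")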
The distance is defined as: The Itakura–Saito distance is a Bregman divergence generated by minus the logarithmic function, but is not a true metric since it is not symmetric and it does not fulfil triangle inequality. In Non-negative matrix factorization, the Itakura-Saito divergence can be used as a measure of the quality of the factorization: this implies a meaningful statistical model of the components and can be solved through an iterative method. The Itakura-Saito distance is the Bregman divergence associated with the Gamma exponential family where the information divergence of one distribution in the family from another element in the family is given by the Itakura-Saito divergence of the mean value of the first distribution from the mean value of the second distribution. See also Log-spectral distance" https://en.wikipedia.org/wiki/French%20curve,"A French curve is a template usually made from metal, wood or plastic composed of many different segments of the Euler spiral (aka the clothoid curve). It is used in manual drafting and in fashion design to draw smooth curves of varying radii. The curve is placed on the drawing material, and a pencil, knife or other implement is traced around its curves to produce the desired result. They were invented by the German mathematician Ludwig Burmester and are also known as Burmester (curve) set. Clothing design French curves are used in fashion design and sewing alongside hip curves, straight edges and right-angle rulers. Commercial clothing patterns can be personalized for fit by using French curves to draw neckline, sleeve, bust and waist variations. See also" https://en.wikipedia.org/wiki/Foodomics,"Foodomics was defined in 2009 as ""a discipline that studies the Food and Nutrition domains through the application and integration of advanced -omics technologies to improve consumer's well-being, health, and knowledge"". Foodomics requires the combination of food chemistry, biological sciences, and data analysis. The study of foodomics became under the spotlight after it was introduced in the first international conference in 2009 at Cesena, Italy. Many experts in the field of omics and nutrition were invited to this event in order to find the new approach and possibility in the area of food science and technology. However, research and development of foodomics today are still limited due to high throughput analysis required. The American Chemical Society journal called Analytical Chemistry dedicated its cover to foodomics in December 2012. Foodomics involves four main areas of omics: Genomics, which involves investigation of genome and its pattern. Transcriptomics, which explores a set of gene and identifies the difference among various conditions, organisms, and circumstance, by using several techniques including microarray analysis; Proteomics, studies every kind of proteins that is a product of the genes. It covers how protein functions in a particular place, structures, interactions with other proteins, etc.; Metabolomics, includes chemical diversity in the cells and how it affects cell behavior; Advantages of foodomics Foodomics greatly helps the scientists in an area of food science and nutrition to gain a better access to data, which is used to analyze the effects of food on human health, etc. It is believed to be another step towards better understanding of development and application of technology and food. 
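The defining formula announced in the Itakura–Saito entry above did not survive extraction; in its common discrete form (the one typically used for non-negative matrix factorization) the divergence between power spectra P and Q is the sum over bins of P_k/Q_k − ln(P_k/Q_k) − 1. The sketch below implements that form and illustrates the two properties stated in the text, non-negativity and lack of symmetry; the example spectra are arbitrary.

import numpy as np

def itakura_saito(P, Q):
    # discrete Itakura-Saito divergence between two strictly positive power spectra
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    ratio = P / Q
    return float(np.sum(ratio - np.log(ratio) - 1.0))

rng = np.random.default_rng(0)
P = rng.uniform(0.5, 2.0, 64)            # arbitrary reference spectrum
Q = P * rng.uniform(0.8, 1.25, 64)       # a perturbed approximation of it

print(itakura_saito(P, P))                              # 0.0 for identical spectra
print(itakura_saito(P, Q), itakura_saito(Q, P))         # both positive, and generally unequal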
Moreover, the study of foodomics leads to other omics sub-disciplines, including nutrigenomics which is the integration of the study of nutrition, gene and omics. Colon cancer Foodomics approach is used to analyze and establish the links betwee" https://en.wikipedia.org/wiki/List%20of%20optical%20illusions,"This is a list of visual illusions. See also Adaptation (eye) Alice in Wonderland syndrome Auditory illusion Camouflage Contingent perceptual aftereffect Contour rivalry Depth perception Emmert's law Entoptic phenomenon Gestalt psychology Infinity pool Kinetic depth effect Mirage Multistable perception Op Art Notes External links Optical Illusion Examples by Great Optical Illusions Optical Illusions & Visual Phenomena by Michael Bach Optical Illusions Database by Mighty Optical Illusions Optical illusions and perception paradoxes by Archimedes Lab https://web.archive.org/web/20100419004856/http://ilusaodeotica.com/ hundreds of optical illusions Project LITE Atlas of Visual Phenomena Akiyoshi's illusion pages Professor Akiyoshi KITAOKA's anomalous motion illusions Spiral Or Not? by Enrique Zeleny, Wolfram Demonstrations Project Magical Optical Illusions by Rangki Hunch Optical Illusions by Hunch Optical Illusions by Ooh, My Brain! Optical phenomena Articles containing video clips" https://en.wikipedia.org/wiki/Memory%20cell%20%28computing%29,"The memory cell is the fundamental building block of computer memory. The memory cell is an electronic circuit that stores one bit of binary information and it must be set to store a logic 1 (high voltage level) and reset to store a logic 0 (low voltage level). Its value is maintained/stored until it is changed by the set/reset process. The value in the memory cell can be accessed by reading it. Over the history of computing, different memory cell architectures have been used, including core memory and bubble memory. Today, the most common memory cell architecture is MOS memory, which consists of metal–oxide–semiconductor (MOS) memory cells. Modern random-access memory (RAM) uses MOS field-effect transistors (MOSFETs) as flip-flops, along with MOS capacitors for certain types of RAM. The SRAM (static RAM) memory cell is a type of flip-flop circuit, typically implemented using MOSFETs. These require very low power to keep the stored value when not being accessed. A second type, DRAM (dynamic RAM), is based around MOS capacitors. Charging and discharging a capacitor can store a '1' or a '0' in the cell. However, the charge in this capacitor will slowly leak away, and must be refreshed periodically. Because of this refresh process, DRAM uses more power. However, DRAM can achieve greater storage densities. On the other hand, most non-volatile memory (NVM) is based on floating-gate memory cell architectures. Non-volatile memory technologies including EPROM, EEPROM and flash memory use floating-gate memory cells, which are based around floating-gate MOSFET transistors. Description The memory cell is the fundamental building block of memory. It can be implemented using different technologies, such as bipolar, MOS, and other semiconductor devices. It can also be built from magnetic material such as ferrite cores or magnetic bubbles. Regardless of the implementation technology used, the purpose of the binary memory cell is always the same. It stores one bit of binary in" https://en.wikipedia.org/wiki/UDPCast,"UDPcast is a file transfer tool that can send data simultaneously to many destinations on a LAN. 
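To put a rough number on the DRAM refresh point above (the storage capacitor leaks, so the cell must be rewritten before its level becomes unreadable), here is a toy RC-decay estimate. The capacitance, leakage resistance, and sensing threshold are invented round numbers for illustration only; real DRAM leakage mechanisms and retention times differ.

import math

V0 = 1.0              # volts written to the cell capacitor (assumed)
C = 30e-15            # 30 fF storage capacitor (assumed round number)
R_LEAK = 1e12         # 1 teraohm effective leakage path (assumed)
V_MIN = 0.5           # lowest level the sense amplifier can still read as a '1' (assumed)

tau = R_LEAK * C                              # RC time constant of the leakage
t_retention = -tau * math.log(V_MIN / V0)     # time until V0 * exp(-t/tau) falls to V_MIN

print(f"tau ~ {tau * 1e3:.0f} ms; refresh needed within ~{t_retention * 1e3:.0f} ms")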
This can for instance be used to install entire classrooms of PCs at once. The advantage of UDPcast over using other methods (nfs, ftp, whatever) is that UDPcast uses the User Datagram Protocol's multicast abilities: it won't take longer to install 15 machines than it would to install just 2. By default this protocol operates on the UDP port 9000. This default behaviour can be changed during the boot stage. See also List of disk cloning software External links Free system software Free backup software Free software programmed in Perl Cross-platform software Disk cloning Computer networking" https://en.wikipedia.org/wiki/ATM,"ATM or atm often refers to: Atmosphere (unit) or atm, a unit of atmospheric pressure Automated teller machine, a cash dispenser or cash machine ATM or atm may also refer to: Computing ATM (computer), a ZX Spectrum clone developed in Moscow in 1991 Adobe Type Manager, a computer program for managing fonts Accelerated Turing machine, or Zeno machine, a model of computation used in theoretical computer science Alternating Turing machine, a model of computation used in theoretical computer science Asynchronous Transfer Mode, a telecommunications protocol used in networking ATM adaptation layer ATM Adaptation Layer 5 Media Amateur Telescope Making, a series of books by Albert Graham Ingalls ATM (2012 film), an American film ATM: Er Rak Error, a 2012 Thai film Azhagiya Tamil Magan, a 2007 Indian film ""ATM"" (song), a 2018 song by J. Cole from KOD People and organizations Abiding Truth Ministries, anti-LGBT organization in Springfield, Massachusetts, US Association of Teachers of Mathematics, UK Acrylic Tank Manufacturing, US aquarium manufacturer, televised in Tanked ATM FA, a football club in Malaysia A. T. M. Wilson (1906–1978), British psychiatrist African Transformation Movement, South African political party founded in 2018 The a2 Milk Company (NZX ticker symbol ATM) Science Apollo Telescope Mount, a solar observatory ATM serine/threonine kinase, a serine/threonine kinase activated by DNA damage The Airborne Topographic Mapper, a laser altimeter among the instruments used by NASA's Operation IceBridge Transportation Active traffic management, a motorway scheme on the M42 in England Air traffic management, a concept in aviation Altamira Airport, in Brazil (IATA code ATM) Azienda Trasporti Milanesi, the municipal public transport company of Milan Airlines of Tasmania (ICAO code ATM) Catalonia, Spain Autoritat del Transport Metropolità (ATM Àrea de Barcelona), in the Barcelona metropolitan area Autoritat Territorial de la Mobil" https://en.wikipedia.org/wiki/End-to-end%20encryption,"End-to-end encryption (E2EE) is a private communication system in which only communicating users can participate. As such, no one, including the communication system provider, telecom providers, Internet providers or malicious actors, can access the cryptographic keys needed to converse. End-to-end encryption is intended to prevent data being read or secretly modified, other than by the true sender and recipient(s). The messages are encrypted by the sender but the third party does not have a means to decrypt them, and stores them encrypted. The recipients retrieve the encrypted data and decrypt it themselves. Because no third parties can decipher the data being communicated or stored, for example, companies that provide end-to-end encryption are unable to hand over texts of their customers' messages to the authorities. 
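The multicast behaviour that UDPcast relies on can be sketched with standard Python sockets. This is not UDPcast's own wire protocol, just a minimal sender/receiver pair showing how a single datagram sent to a multicast group on UDP port 9000 (the default port mentioned above) is delivered to every subscribed host; the group address 239.1.1.1 is an arbitrary administratively scoped choice.

import socket
import struct

GROUP, PORT = "239.1.1.1", 9000        # arbitrary multicast group; port 9000 as noted above

def send(data: bytes):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)   # keep the datagram on the local LAN
    s.sendto(data, (GROUP, PORT))

def receive():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    # join the multicast group so the kernel delivers datagrams addressed to it
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, addr = s.recvfrom(65535)
    print(f"received {len(data)} bytes from {addr}")

# run receive() on any number of hosts, then call send(b"...") once from another host;
# every receiver gets the same datagram, which is why installing 15 machines costs no more than 2.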
In 2022, the UK's Information Commissioner's Office, the government body responsible for enforcing online data standards, stated that opposition to E2EE was misinformed and the debate too unbalanced, with too little focus on benefits, since E2EE ""helped keep children safe online"" and law enforcement access to stored data on servers was ""not the only way"" to find abusers. E2EE and privacy In many messaging systems, including email and many chat networks, messages pass through intermediaries and are stored by a third party, from which they are retrieved by the recipient. Even if the messages are encrypted, they are only encrypted 'in transit', and are thus accessible by the service provider, regardless of whether server-side disk encryption is used. Server-side disk encryption simply prevents unauthorized users from viewing this information. It does not prevent the company itself from viewing the information, as they have the key and can simply decrypt this data. This allows the third party to provide search and other features, or to scan for illegal and unacceptable content, but also means they can be read and misused by anyone who has acces" https://en.wikipedia.org/wiki/List%20of%20representation%20theory%20topics,"This is a list of representation theory topics, by Wikipedia page. See also list of harmonic analysis topics, which is more directed towards the mathematical analysis aspects of representation theory. See also: Glossary of representation theory General representation theory Linear representation Unitary representation Trivial representation Irreducible representation Semisimple Complex representation Real representation Quaternionic representation Pseudo-real representation Symplectic representation Schur's lemma Restricted representation Representation theory of groups Group representation Group ring Maschke's theorem Regular representation Character (mathematics) Character theory Class function Representation theory of finite groups Modular representation theory Frobenius reciprocity Restricted representation Induced representation Peter–Weyl theorem Young tableau Spherical harmonic Hecke operator Representation theory of the symmetric group Representation theory of diffeomorphism groups Permutation representation Affine representation Projective representation Central extension Representation theory of Lie groups and Lie algebras Representation of a Lie group Lie algebra representation, Representation of a Lie superalgebra Universal enveloping algebra Casimir element Infinitesimal character Harish-Chandra homomorphism Fundamental representation Antifundamental representation Bifundamental representation Adjoint representation Weight (representation theory) Cartan's theorem Spinor Wigner's classification, Representation theory of the Poincaré group Wigner–Eckart theorem Stone–von Neumann theorem Orbit method Kirillov character formula Weyl character formula Discrete series representation Principal series representation Borel–Weil–Bott theorem Weyl's character formula Representation theory of algebras Algebra representation Representation theory of Hopf algebras Representation theory" https://en.wikipedia.org/wiki/PowerHUB,"PowerHUB refers to the name of a series of Integrated Circuits (ICs) developed by ST-Ericsson, a 50/50 joint venture of Ericsson and STMicroelectronics established on February 3, 2009. These ICs are designed for the energy management and the battery charging of mobile devices. 
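As a toy illustration of the end-to-end principle discussed above, in which the relaying service stores only ciphertext it cannot read, the sketch below encrypts with a symmetric Fernet key from the cryptography package, assumed to have been shared between the two endpoints out of band. Real E2EE systems instead negotiate keys with asymmetric key agreement and add authentication and forward secrecy, none of which is shown here.

from cryptography.fernet import Fernet

# the two endpoints share this key; the relay below never sees it
key = Fernet.generate_key()
alice, bob = Fernet(key), Fernet(key)

server_mailbox = []                                      # an untrusted relay that stores messages

server_mailbox.append(alice.encrypt(b"meet at noon"))    # Alice uploads ciphertext only

stored = server_mailbox[0]
print(stored[:24], b"...")        # all the provider (or anyone who breaches it) can see
print(bob.decrypt(stored))        # only Bob, who holds the key, recovers the plaintext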
The first member of the PowerHUB family, the PM2300, has been announced by ST-Ericsson on February 9, 2011. On February 28, 2012, ST-Ericsson introduced a new IC in this family, the PM2020, supporting the wireless charging technology standardized by the Wireless Power Consortium (WPC)." https://en.wikipedia.org/wiki/Turing%20pattern,"The Turing pattern is a concept introduced by English mathematician Alan Turing in a 1952 paper titled ""The Chemical Basis of Morphogenesis"" which describes how patterns in nature, such as stripes and spots, can arise naturally and autonomously from a homogeneous, uniform state. The pattern arises due to Turing instability which in turn arises due to the interplay between differential diffusion (i.e., different values of diffusion coefficients) of chemical species and chemical reaction. The instability mechanism is unforeseen because a pure diffusion process would be anticipated to have a stabilizing influence on the system. Overview In his paper, Turing examined the behaviour of a system in which two diffusible substances interact with each other, and found that such a system is able to generate a spatially periodic pattern even from a random or almost uniform initial condition. Prior to the discovery of this instability mechanism arising due to unequal diffusion coefficients of the two substances, diffusional effects were always presumed to have stabilizing influences on the system. Turing hypothesized that the resulting wavelike patterns are the chemical basis of morphogenesis. Turing patterning is often found in combination with other patterns: vertebrate limb development is one of the many phenotypes exhibiting Turing patterning overlapped with a complementary pattern (in this case a French flag model). Before Turing, Yakov Zeldovich in 1944 discovered this instability mechanism in connection with the cellular structures observed in lean hydrogen flames. Zeldovich explained the cellular structure as a consequence of hydrogen's diffusion coefficient being larger than the thermal diffusion coefficient. In combustion literature, Turing instability is referred to as diffusive–thermal instability. Concept The original theory, a reaction–diffusion theory of morphogenesis, has served as an important model in theoretical biology. Reaction–diffusion systems hav" https://en.wikipedia.org/wiki/Liouville%20number,"In number theory, a Liouville number is a real number with the property that, for every positive integer , there exists a pair of integers with such that Liouville numbers are ""almost rational"", and can thus be approximated ""quite closely"" by sequences of rational numbers. Precisely, these are transcendental numbers that can be more closely approximated by rational numbers than any algebraic irrational number can be. In 1844, Joseph Liouville showed that all Liouville numbers are transcendental, thus establishing the existence of transcendental numbers for the first time. It is known that and are not Liouville numbers. The existence of Liouville numbers (Liouville's constant) Liouville numbers can be shown to exist by an explicit construction. For any integer and any sequence of integers such that for all and for infinitely many , define the number In the special case when , and for all , the resulting number is called Liouville's constant: L = 0.11000100000000000000000100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001... 
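A hands-on way to watch a Turing-type pattern emerge from a nearly uniform state is to integrate a two-species reaction–diffusion system numerically. The sketch below uses the Gray–Scott model, a common demonstration system rather than the specific equations of the 1952 paper; the diffusion coefficients, feed and kill rates, grid size, and step count are conventional values chosen so that spots form, and the small random term just breaks the symmetry of the initial square perturbation.

import numpy as np

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-8:n//2+8, n//2-8:n//2+8] = 0.50          # perturb the uniform state in a small square
V[n//2-8:n//2+8, n//2-8:n//2+8] = 0.25
V += 0.01 * np.random.default_rng(0).random((n, n))

Du, Dv, F, k, dt = 0.16, 0.08, 0.035, 0.065, 1.0   # assumed, commonly used spot-forming values

def laplacian(Z):
    # five-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(10000):
    UVV = U * V * V
    U += dt * (Du * laplacian(U) - UVV + F * (1 - U))
    V += dt * (Dv * laplacian(V) + UVV - (F + k) * V)

# V now holds a spotted, spatially periodic field; inspect it with e.g. matplotlib's imshow(V)
print(float(V.min()), float(V.max()))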
It follows from the definition of that its base- representation is where the th term is in the th place. Since this base- representation is non-repeating it follows that is not a rational number. Therefore, for any rational number , . Now, for any integer , and can be defined as follows: Then, Therefore, any such is a Liouville number. Notes on the proof The inequality follows since ak ∈ {0, 1, 2, …, b−1} for all k, so at most ak = b−1. The largest possible sum would occur if the sequence of integers (a1, a2, …) were (b−1, b−1, ...), i.e. ak = b−1, for all k. will thus be less than or equal to this largest possible sum. The strong inequality follows from the motivation to eliminate the series by way of reducing it to a series for which a formula is known. In the proof so far, the purpose for introducing the inequality in #1 comes from intuition that (the geometric s" https://en.wikipedia.org/wiki/Berkeley%20IRAM%20project,"The Berkeley IRAM project was a 1996–2004 research project in the Computer Science Division of the University of California, Berkeley which explored computer architecture enabled by the wide bandwidth between memory and processor made possible when both are designed on the same integrated circuit (chip). Since it was envisioned that such a chip would consist primarily of random-access memory (RAM), with a smaller part needed for the central processing unit (CPU), the research team used the term ""Intelligent RAM"" (or IRAM) to describe a chip with this architecture. Like the J–Machine project at MIT, the primary objective of the research was to avoid the Von Neumann bottleneck which occurs when the connection between memory and CPU is a relatively narrow memory bus between separate integrated circuits. Theory With strong competitive pressures, the technology employed for each component of a computer system—principally CPU, memory, and offline storage—is typically selected to minimize the cost needed to attain a given level of performance. Though both microprocessor and memory are implemented as integrated circuits, the prevailing technology used for each differs; microprocessor technology optimizes speed and memory technology optimizes density. For this reason, the integration of memory and processor in the same chip has (for the most part) been limited to static random-access memory (SRAM), which may be implemented using circuit technology optimized for logic performance, rather than the denser and lower-cost dynamic random-access memory (DRAM), which is not. Microprocessor access to off-chip memory costs time and power, however, significantly limiting processor performance. For this reason computer architecture employing a hierarchy of memory systems has developed, in which static memory is integrated with the microprocessor for temporary, easily accessible storage (or cache) of data which is also retained off-chip in DRAM. Since the on-chip cache memory is redun" https://en.wikipedia.org/wiki/Sulemana%20Abdul%20Samed,"Sulemana Abdul Samed, also known as Awuche (meaning 'Let's Go' in the Hausa language), is the tallest man in Ghana. He was born in 1994 in the Northern Region of Ghana. Abdul Samed was diagnosed with the endocrine disorder acromegaly, which is caused by an excess of growth hormone in the body. An investigation by a BBC reporter revealed that Samed was only 7 feet 4 inches (223 cm), suggesting that the hospital at which he had been measured had made a ""mistake"" when other sources reported a larger height. He has undergone treatment for his condition. 
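The defining inequality can also be checked with exact rational arithmetic for the constant L constructed above. In the sketch below, for each n the truncation after n terms supplies the rational p/q with q = 10^(n!), and its gap to a much longer truncation (standing in for L, since the omitted terms are vanishingly small) is verified to be smaller than 1/q^n; Fraction keeps every quantity exact.

from fractions import Fraction
from math import factorial

def liouville_partial(terms, b=10):
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, terms + 1))

L = liouville_partial(6)                       # stands in for the full constant here

for n in range(1, 5):
    q = 10 ** factorial(n)
    p = int(liouville_partial(n) * q)          # p/q is the truncation after n terms
    gap = L - Fraction(p, q)
    assert 0 < gap < Fraction(1, q ** n)       # the Liouville property |L - p/q| < 1/q**n
    print(f"n={n}: 0 < L - p/q < 1/q**{n} with q = 10**{factorial(n)}")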
Despite his unusual height, Abdul Samed has lived a relatively normal life, attending school and being employed as a farmer and a mechanic. He has stated that he hopes to marry and have children. Abdul Samed has received media attention for his height, which he has used to raise awareness about acromegaly and the challenges faced by people who have the condition." https://en.wikipedia.org/wiki/Ouchterlony%20double%20immunodiffusion,"Ouchterlony double immunodiffusion (also known as passive double immunodiffusion) is an immunological technique used in the detection, identification and quantification of antibodies and antigens, such as immunoglobulins and extractable nuclear antigens. The technique is named after Örjan Ouchterlony, the Swedish physician who developed the test in 1948 to evaluate the production diphtheria toxins from isolated bacteria. Procedure A gel plate is cut to form a series of holes (""wells"") in an agar or agarose gel. A sample extract of interest (for example human cells harvested from tonsil tissue) is placed in one well, sera or purified antibodies are placed in another well and the plate left for 48 hours to develop. During this time the antigens in the sample extract and the antibodies each diffuse out of their respective wells. Where the two diffusion fronts meet, if any of the antibodies recognize any of the antigens, they will bind to the antigens and form an immune complex. The immune complex precipitates in the gel to give a thin white line (precipitin line), which is a visual signature of antigen recognition. The method can be conducted in parallel with multiple wells filled with different antigen mixtures and multiple wells with different antibodies or mixtures of antibodies, and antigen-antibody reactivity can be seen by observing between which wells the precipitate is observed. When more than one well is used there are many possible outcomes based on the reactivity of the antigen and antibody selected. The zone of equivalence lines may give a full identity (i.e. a continuous line), partial identity (i.e. a continuous line with a spur at one end), or a non-identity (i.e. the two lines cross completely). The sensitivity of the assay can be increased by using a stain such as Coomassie brilliant blue, this is done by repeated staining and destaining of the assay until the precipitin lines are at maximum visibility. Theory Precipitation occurs with most antige" https://en.wikipedia.org/wiki/Melanoidin,"Melanoidins are brown, high molecular weight heterogeneous polymers that are formed when sugars and amino acids combine (through the Maillard reaction) at high temperatures and low water activity. They were discovered by Schmiedeberg in 1897. Melanoidins are commonly present in foods that have undergone some form of non-enzymatic browning, such as barley malts (Vienna and Munich), bread crust, bakery products and coffee. They are also present in the wastewater of sugar refineries, necessitating treatment in order to avoid contamination around the outflow of these refineries. Dietary melanoidins themselves produce various effects in the organism: they decrease Phase I liver enzyme activity and promote glycation in vivo, which may contribute to diabetes, reduced vascular compliance and Alzheimer's disease. Some of the melanoidins are metabolized by the intestinal microflora. Coffee is one of the main sources of melanoidins in the human diet, yet coffee consumption is associated with some health benefits and antiglycative action." 
https://en.wikipedia.org/wiki/Softwire%20%28protocol%29,"In computer networking, a softwire protocol is a type of tunneling protocol that creates a virtual ""wire"" that transparently encapsulates another protocol as if it was an anonymous point-to-point low-level link. Softwires are used for various purposes, one of which is to carry IPv4 traffic over IPv6 and vice versa, in order to support IPv6 transition mechanisms." https://en.wikipedia.org/wiki/Developmental%20systems%20theory,"Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes. Overview All versions of developmental systems theory espouse the view that: All biological processes (including both evolution and development) operate by continually assembling new structures. Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws. Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms. Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for. In other words, although it does not claim that all structures are equal, development systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any p" https://en.wikipedia.org/wiki/Franz%E2%80%93Keldysh%20effect,"The Franz–Keldysh effect is a change in optical absorption by a semiconductor when an electric field is applied. The effect is named after the German physicist Walter Franz and Russian physicist Leonid Keldysh. Karl W. Böer observed first the shift of the optical absorption edge with electric fields during the discovery of high-field domains and named this the Franz-effect. A few months later, when the English translation of the Keldysh paper became available, he corrected this to the Franz–Keldysh effect. As originally conceived, the Franz–Keldysh effect is the result of wavefunctions ""leaking"" into the band gap. When an electric field is applied, the electron and hole wavefunctions become Airy functions rather than plane waves. The Airy function includes a ""tail"" which extends into the classically forbidden band gap. According to Fermi's golden rule, the more overlap there is between the wavefunctions of a free electron and a hole, the stronger the optical absorption will be. 
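One commonly quoted way to summarize the strength of this leakage, assuming a simple two-band model with parabolic bands of reduced mass \mu in a uniform electric field F (the symbols are supplied here for illustration), is through the electro-optic energy \hbar\theta_F:

\[
\hbar\theta_F = \left(\frac{e^{2}\hbar^{2}F^{2}}{2\mu}\right)^{1/3},
\qquad
\alpha(\hbar\omega) \propto \exp\!\left[-\frac{4}{3}\left(\frac{E_g - \hbar\omega}{\hbar\theta_F}\right)^{3/2}\right]
\quad\text{for } \hbar\omega < E_g .
\]

Under these assumptions the absorption edge acquires an exponential, field-dependent tail below the gap, which is the behaviour described next.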
The Airy tails slightly overlap even if the electron and hole are at slightly different potentials (slightly different physical locations along the field). The absorption spectrum now includes a tail at energies below the band gap and some oscillations above it. This explanation does, however, omit the effects of excitons, which may dominate optical properties near the band gap. The Franz–Keldysh effect occurs in uniform, bulk semiconductors, unlike the quantum-confined Stark effect, which requires a quantum well. Both are used for electro-absorption modulators. The Franz–Keldysh effect usually requires hundreds of volts, limiting its usefulness with conventional electronics – although this is not the case for commercially available Franz–Keldysh-effect electro-absorption modulators that use a waveguide geometry to guide the optical carrier. Effect on modulation spectroscopy The absorption coefficient is related to the dielectric constant (especially the complex part 2). From Maxwell's" https://en.wikipedia.org/wiki/Quantum%20pseudo-telepathy,"Quantum pseudo-telepathy is the fact that in certain Bayesian games with asymmetric information, players who have access to a shared physical system in an entangled quantum state, and who are able to execute strategies that are contingent upon measurements performed on the entangled physical system, are able to achieve higher expected payoffs in equilibrium than can be achieved in any mixed-strategy Nash equilibrium of the same game by players without access to the entangled quantum system. In their 1999 paper, Gilles Brassard, Richard Cleve and Alain Tapp demonstrated that quantum pseudo-telepathy allows players in some games to achieve outcomes that would otherwise only be possible if participants were allowed to communicate during the game. This phenomenon came to be referred to as quantum pseudo-telepathy, with the prefix pseudo referring to the fact that quantum pseudo-telepathy does not involve the exchange of information between any parties. Instead, quantum pseudo-telepathy removes the need for parties to exchange information in some circumstances. By removing the need to engage in communication to achieve mutually advantageous outcomes in some circumstances, quantum pseudo-telepathy could be useful if some participants in a game were separated by many light years, meaning that communication between them would take many years. This would be an example of a macroscopic implication of quantum non-locality. Quantum pseudo-telepathy is generally used as a thought experiment to demonstrate the non-local characteristics of quantum mechanics. However, quantum pseudo-telepathy is a real-world phenomenon which can be verified experimentally. It is thus an especially striking example of an experimental confirmation of Bell inequality violations. Games of asymmetric information A Bayesian game is a game in which both players have imperfect information regarding the value of certain parameters. In a Bayesian game it is sometimes the case that for at least some pla" https://en.wikipedia.org/wiki/System%20appreciation,"System appreciation is an activity often included in the maintenance phase of software engineering projects. Key deliverables from this phase include documentation that describes what the system does in terms of its functional features, and how it achieves those features in terms of its architecture and design. Software architecture recovery is often the first step within System appreciation." 
https://en.wikipedia.org/wiki/Multitaper,"In signal processing, multitaper is a spectral density estimation technique developed by David J. Thomson. It can estimate the power spectrum SX of a stationary ergodic finite-variance random process X, given a finite contiguous realization of X as data. Motivation The multitaper method overcomes some of the limitations of non-parametric Fourier analysis. When applying the Fourier transform to extract spectral information from a signal, we assume that each Fourier coefficient is a reliable representation of the amplitude and relative phase of the corresponding component frequency. This assumption, however, is not generally valid for empirical data. For instance, a single trial represents only one noisy realization of the underlying process of interest. A comparable situation arises in statistics when estimating measures of central tendency i.e., it is bad practice to estimate qualities of a population using individuals or very small samples. Likewise, a single sample of a process does not necessarily provide a reliable estimate of its spectral properties. Moreover, the naive power spectral density obtained from the signal's raw Fourier transform is a biased estimate of the true spectral content. These problems are often overcome by averaging over many realizations of the same event after applying a taper to each trial. However, this method is unreliable with small data sets and undesirable when one does not wish to attenuate signal components that vary across trials. Furthermore, even when many trials are available the untapered periodogram is generally biased (with the exception of white noise) and the bias depends upon the length of each realization, not the number of realizations recorded. Applying a single taper reduces bias but at the cost of increased estimator variance due to attenuation of activity at the start and end of each recorded segment of the signal. The multitaper method partially obviates these problems by obtaining multiple independent e" https://en.wikipedia.org/wiki/Manganese%20in%20biology,"Manganese is an essential biological element in all organisms. It is used in many enzymes and proteins. It is essential in plants. Biochemistry The classes of enzymes that have manganese cofactors include oxidoreductases, transferases, hydrolases, lyases, isomerases and ligases. Other enzymes containing manganese are arginase and Mn-containing superoxide dismutase (Mn-SOD). Also the enzyme class of reverse transcriptases of many retroviruses (though not lentiviruses such as HIV) contains manganese. Manganese-containing polypeptides are the diphtheria toxin, lectins and integrins. Biological role in humans Manganese is an essential human dietary element. It is present as a coenzyme in several biological processes, which include macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. The human body contains about 12 mg of manganese, mostly in the bones. The soft tissue remainder is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes. Nutrition Dietary recommendations The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for minerals in 2001. For manganese there was not sufficient information to set EARs and RDAs, so needs are described as estimates for Adequate Intakes (AIs). 
As for safety, the IOM sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of manganese the adult UL is set at 11 mg/day. Collectively the EARs, RDAs, AIs and ULs are referred to as Dietary Reference Intakes (DRIs). Manganese deficiency is rare. The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL defined the s" https://en.wikipedia.org/wiki/Swiss%20Network%20Operators%20Group,"The Swiss Network Operators Group (SwiNOG) is a Swiss counterpart to NANOG. Like NANOG, SwiNOG operates a mailing list for operators of Swiss data networks, including ISPs. Events Twice a year the community gathers in Bern, the capital of Switzerland, for a social gathering featuring technical presentations and, of course, direct interaction between the people in the community. Usually these talks are very technical and can cover various topics related to the work of network operators, such as out-of-band management. There are also more high-level presentations, such as one about SDN and NFV. Usually some months before the event, someone from the SwiNOG-Core-Team sends out a CfP. On a monthly basis, Steven Glogger also organizes the SwiNOG Beer Events. More than 100 of these events have already taken place in the city of Zurich; they are social gatherings where people talk about technology, their employers and sometimes their customers, but mainly exchange information with each other in an offline mode. History See also Internet network operators' group" https://en.wikipedia.org/wiki/System%20basis%20chip,"A system basis chip (SBC) is an integrated circuit that includes various functions of automotive electronic control units (ECU) on a single die. It typically includes a mixture of standard digital functionality, such as communication bus interfaces, and analog or power functionality, denoted as smart power. Therefore SBCs are based on special smart power technology platforms. The embedded functions may include: Voltage regulators Supervision functions Reset generators, Watchdog functions Bus interfaces, like Local Interconnect Network (LIN), CAN bus or others Wake-up logic Power switches The complexity range for SBCs extends from rather simple hardwired devices to configurable, state-machine-controlled devices (e.g. through a serial peripheral interface). Various major automotive semiconductor manufacturers offer SBCs." https://en.wikipedia.org/wiki/Stationary%20process,"In mathematics and statistics, a stationary process (or a strict/strictly stationary process or strong/strongly stationary process) is a stochastic process whose unconditional joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance also do not change over time. A line drawn through the middle of a stationary process should be flat; it may have 'seasonal' cycles around the trend line, but overall it does not trend up nor down. Since stationarity is an assumption underlying many statistical procedures used in time series analysis, non-stationary data are often transformed to become stationary. The most common cause of violation of stationarity is a trend in the mean, which can be due either to the presence of a unit root or of a deterministic trend. 
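As a quick, purely illustrative sketch of these two cases (hypothetical simulated data, not taken from any source cited here), the following Python snippet generates a unit-root random walk and a trend-stationary series and then applies the transformations discussed next: differencing for the former and removal of the deterministic trend for the latter.

import numpy as np

rng = np.random.default_rng(0)   # fixed seed so the sketch is reproducible
n = 500
shocks = rng.normal(size=n)
t = np.arange(n)

# Unit-root process: x_t = x_{t-1} + e_t; shocks accumulate and there is no mean reversion.
random_walk = np.cumsum(shocks)

# Trend-stationary process: y_t = 0.05*t + e_t; shocks are transitory around a deterministic trend.
trend_stationary = 0.05 * t + shocks

# Differencing removes the unit root; subtracting a fitted line removes the deterministic trend.
differenced = np.diff(random_walk)                      # recovers the shocks e_2, e_3, ...
slope, intercept = np.polyfit(t, trend_stationary, 1)   # least-squares linear trend
detrended = trend_stationary - (slope * t + intercept)

# Both transformed series now have an (approximately) constant mean and variance.
print(differenced.mean(), differenced.std())
print(detrended.mean(), detrended.std())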
In the former case of a unit root, stochastic shocks have permanent effects, and the process is not mean-reverting. In the latter case of a deterministic trend, the process is called a trend-stationary process, and stochastic shocks have only transitory effects after which the variable tends toward a deterministically evolving (non-constant) mean. A trend stationary process is not strictly stationary, but can easily be transformed into a stationary process by removing the underlying trend, which is solely a function of time. Similarly, processes with one or more unit roots can be made stationary through differencing. An important type of non-stationary process that does not include a trend-like behavior is a cyclostationary process, which is a stochastic process that varies cyclically with time. For many applications strict-sense stationarity is too restrictive. Other forms of stationarity such as wide-sense stationarity or N-th-order stationarity are then employed. The definitions for different kinds of stationarity are not consistent among different authors (see Other terminology). Strict-sense stationarity Definition Formally, let be a " https://en.wikipedia.org/wiki/Network%20diagram%20software,"A number of tools exist to generate computer network diagrams. Broadly, there are four types of tools that help create network maps and diagrams: Hybrid tools Network Mapping tools Network Monitoring tools Drawing tools Network mapping and drawing software support IT systems managers to understand the hardware and software services on a network and how they are interconnected. Network maps and diagrams are a component of network documentation. They are required artifacts to better manage IT systems' uptime, performance, security risks, plan network changes and upgrades. Hybrid tools These tools have capabilities in common with drawing tools and network monitoring tools. They are more specialized than general drawing tools and provide network engineers and IT systems administrators a higher level of automation and the ability to develop more detailed network topologies and diagrams. Typical capabilities include but not limited to: Displaying port / interface information on connections between devices on the maps Visualizing VLANs / subnets Visualizing virtual servers and storage Visualizing flow of network traffic across devices and networks Displaying WAN and LAN maps by location Importing network configuration files to generate topologies automatically Network mapping tools These tools are specifically designed to generate automated network topology maps. These visual maps are automatically generated by scanning the network using network discovery protocols. Some of these tools integrate into documentation and monitoring tools. Typical capabilities include but not limited to: Automatically scanning the network using SNMP, SSH, WMI, etc. Scanning Windows and Unix servers Scanning virtual hosts Scanning routing protocols Performing scheduled scans Tracking changes to the network Notifying users of changes to the network Network monitoring tools Some network monitoring tools generate visual maps by automatically scanning the network using net" https://en.wikipedia.org/wiki/Non-contact%20force,"A non-contact force is a force which acts on an object without coming physically in contact with it. The most familiar non-contact force is gravity, which confers weight. In contrast, a contact force is a force which acts on an object coming physically in contact with it. 
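For reference, gravity, the most familiar non-contact force mentioned above, is quantified by Newton's law of universal gravitation, and the weight it confers on a body of mass m near the Earth's surface is W = mg:

\[
F = G\,\frac{m_{1} m_{2}}{r^{2}}, \qquad W = m g,
\]

where G is the gravitational constant, m_1 and m_2 are the two masses, r is the distance between them, and g is the local gravitational acceleration (about 9.8 m/s² at the Earth's surface). The inverse-square dependence is the proportionality described in the list that follows.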
All four known fundamental interactions are non-contact forces: Gravity, the force of attraction that exists among all bodies that have mass. The force exerted on each body by the other through weight is proportional to the mass of the first body times the mass of the second body divided by the square of the distance between them. Electromagnetism is the force that causes the interaction between electrically charged particles; the areas in which this happens are called electromagnetic fields. Examples of this force include: electricity, magnetism, radio waves, microwaves, infrared, visible light, X-rays and gamma rays. Electromagnetism mediates all chemical, biological, electrical and electronic processes. Strong nuclear force: Unlike gravity and electromagnetism, the strong nuclear force is a short distance force that takes place between fundamental particles within a nucleus. It is charge independent and acts equally between a proton and a proton, a neutron and a neutron, and a proton and a neutron. The strong nuclear force is the strongest force in nature; however, its range is small (acting only over distances of the order of 10−15 m). The strong nuclear force mediates both nuclear fission and fusion reactions. Weak nuclear force: The weak nuclear force mediates the β decay of a neutron, in which the neutron decays into a proton and in the process emits a β particle and an uncharged particle called a neutrino. As a result of mediating the β decay process, the weak nuclear force plays a key role in supernovas. Both the strong and weak forces form an important part of quantum mechanics.The Casimir effect could also be thought of as a non-contact force. See also Tension Body force Surface " https://en.wikipedia.org/wiki/Jim%20Al-Khalili,"Jameel Sadik ""Jim"" Al-Khalili (; born 20 September 1962) is an Iraqi-British theoretical physicist, author and broadcaster. He is professor of theoretical physics and chair in the public engagement in science at the University of Surrey. He is a regular broadcaster and presenter of science programmes on BBC radio and television, and a frequent commentator about science in other British media. In 2014, Al-Khalili was named as a RISE (Recognising Inspirational Scientists and Engineers) leader by the UK's Engineering and Physical Sciences Research Council (EPSRC). He was President of Humanists UK between January 2013 and January 2016. Early life and education Al-Khalili was born in Baghdad in 1962. His father was an Iraqi Air Force engineer, and his English mother was a librarian. Al-Khalili settled permanently in the United Kingdom in 1979. After completing (and retaking) his A-levels over three years until 1982, he studied physics at the University of Surrey and graduated with a Bachelor of Science degree in 1986. He stayed on at Surrey to pursue a Doctor of Philosophy degree in nuclear reaction theory, which he obtained in 1989, rather than accepting a job offer from the National Physical Laboratory. Career and research In 1989, Al-Khalili was awarded a Science and Engineering Research Council (SERC) postdoctoral fellowship at University College London, after which he returned to Surrey in 1991, first as a research assistant, then as a lecturer. In 1994, Al-Khalili was awarded an Engineering and Physical Sciences Research Council (EPSRC) Advanced Research Fellowship for five years, during which time he established himself as a leading expert on mathematical models of exotic atomic nuclei. He has published widely in his field. 
Al-Khalili is a professor of physics at the University of Surrey, where he also holds a chair in the Public Engagement in Science. He has been a trustee (2006–2012) and vice president (2008–2011) of the British Science Association. He a" https://en.wikipedia.org/wiki/Full%20custom,"In integrated circuit design, full-custom is a design methodology in which the layout of each individual transistor on the integrated circuit (IC), and the interconnections between them, are specified. Alternatives to full-custom design include various forms of semi-custom design, such as the repetition of small transistor subcircuits; one such methodology is the use of standard cell libraries (which are themselves designed full-custom). Full-custom design potentially maximizes the performance of the chip, and minimizes its area, but is extremely labor-intensive to implement. Full-custom design is limited to ICs that are to be fabricated in extremely high volumes, notably certain microprocessors and a small number of application-specific integrated circuits (ASICs). As of 2008 the main factor affecting the design and production of ASICs was the high cost of mask sets (number of which is depending on the number of IC layers) and the requisite EDA design tools. The mask sets are required in order to transfer the ASIC designs onto the wafer. See also Electronics design flow" https://en.wikipedia.org/wiki/Index%20of%20logic%20articles," A A System of Logic -- A priori and a posteriori -- Abacus logic -- Abduction (logic) -- Abductive validation -- Academia Analitica -- Accuracy and precision -- Ad captandum -- Ad hoc hypothesis -- Ad hominem -- Affine logic -- Affirming the antecedent -- Affirming the consequent -- Algebraic logic -- Ambiguity -- Analysis -- Analysis (journal) -- Analytic reasoning -- Analytic–synthetic distinction -- Anangeon -- Anecdotal evidence -- Antecedent (logic) -- Antepredicament -- Anti-psychologism -- Antinomy -- Apophasis -- Appeal to probability -- Appeal to ridicule -- Archive for Mathematical Logic -- Arché -- Argument -- Argument by example -- Argument form -- Argument from authority -- Argument map -- Argumentation theory -- Argumentum ad baculum -- Argumentum e contrario -- Ariadne's thread (logic) -- Aristotelian logic -- Aristotle -- Association for Informal Logic and Critical Thinking -- Association for Logic, Language and Information -- Association for Symbolic Logic -- Attacking Faulty Reasoning -- Australasian Association for Logic -- Axiom -- Axiom independence -- Axiom of reducibility -- Axiomatic system -- Axiomatization -- B Backward chaining -- Barcan formula -- Begging the question -- Begriffsschrift -- Belief -- Belief bias -- Belief revision -- Benson Mates -- Bertrand Russell Society -- Biconditional elimination -- Biconditional introduction -- Bivalence and related laws -- Blue and Brown Books -- Boole's syllogistic -- Boolean algebra (logic) -- Boolean algebra (structure) -- Boolean network -- C Canonical form -- Canonical form (Boolean algebra) -- Cartesian circle -- Case-based reasoning -- Categorical logic -- Categories (Aristotle) -- Categories (Peirce) -- Category mistake -- Catuṣkoṭi -- Circular definition -- Circular reasoning -- Circular reference -- Circular reporting -- Circumscription (logic) -- Circumscription (taxonomy) -- Classical logic -- Clocked logic -- Cognitive bias -- Cointerpretability -- Colorless green ideas sleep fu" https://en.wikipedia.org/wiki/Biochemical%20cascade,"A biochemical cascade, also known as a signaling cascade or signaling 
pathway, is a series of chemical reactions that occur within a biological cell when initiated by a stimulus. This stimulus, known as a first messenger, acts on a receptor that is transduced to the cell interior through second messengers which amplify the signal and transfer it to effector molecules, causing the cell to respond to the initial stimulus. Most biochemical cascades are series of events, in which one event triggers the next, in a linear fashion. At each step of the signaling cascade, various controlling factors are involved to regulate cellular actions, in order to respond effectively to cues about their changing internal and external environments. An example would be the coagulation cascade of secondary hemostasis which leads to fibrin formation, and thus, the initiation of blood coagulation. Another example, sonic hedgehog signaling pathway, is one of the key regulators of embryonic development and is present in all bilaterians. Signaling proteins give cells information to make the embryo develop properly. When the pathway malfunctions, it can result in diseases like basal cell carcinoma. Recent studies point to the role of hedgehog signaling in regulating adult stem cells involved in maintenance and regeneration of adult tissues. The pathway has also been implicated in the development of some cancers. Drugs that specifically target hedgehog signaling to fight diseases are being actively developed by a number of pharmaceutical companies. Introduction Signaling cascades Cells require a full and functional cellular machinery to live. When they belong to complex multicellular organisms, they need to communicate among themselves and work for symbiosis in order to give life to the organism. These communications between cells triggers intracellular signaling cascades, termed signal transduction pathways, that regulate specific cellular functions. Each signal transduction occurs with a p" https://en.wikipedia.org/wiki/Biasing,"In electronics, biasing is the setting of DC (direct current) operating conditions (current and voltage) of an electronic component that processes time-varying signals. Many electronic devices, such as diodes, transistors and vacuum tubes, whose function is processing time-varying (AC) signals, also require a steady (DC) current or voltage at their terminals to operate correctly. This current or voltage is called bias. The AC signal applied to them is superposed on this DC bias current or voltage. The operating point of a device, also known as bias point, quiescent point, or Q-point, is the DC voltage or current at a specified terminal of an active device (a transistor or vacuum tube) with no input signal applied. A bias circuit is a portion of the device's circuit that supplies this steady current or voltage. Overview In electronics, 'biasing' usually refers to a fixed DC voltage or current applied to a terminal of an electronic component such as a diode, transistor or vacuum tube in a circuit in which AC signals are also present, in order to establish proper operating conditions for the component. For example, a bias voltage is applied to a transistor in an electronic amplifier to allow the transistor to operate in a particular region of its transconductance curve. For vacuum tubes, a grid bias voltage is often applied to the grid electrodes for the same reason. In magnetic tape recording, the term bias is also used for a high-frequency signal added to the audio signal and applied to the recording head, to improve the quality of the recording on the tape. 
This is called tape bias. Importance in linear circuits Linear circuits involving transistors typically require specific DC voltages and currents for correct operation, which can be achieved using a biasing circuit. As an example of the need for careful biasing, consider a transistor amplifier. In linear amplifiers, a small input signal gives a larger output signal without any change in shape (low distortion" https://en.wikipedia.org/wiki/PC/TCP%20Packet%20Driver,"PC/TCP Packet Driver is a networking API for MS-DOS, PC DOS, and later x86 DOS implementations such as DR-DOS, FreeDOS, etc. It implements the lowest levels of a TCP/IP stack, where the remainder is typically implemented either by terminate-and-stay-resident drivers or as a library linked into an application program. It was invented in 1983 at MIT's Lab for Computer Science (CSR/CSC group under Jerry Saltzer and David D. Clark), and was commercialized in 1986 by FTP Software. A packet driver uses an x86 interrupt number (INT) between The number used is detected at runtime, it is most commonly 60h but may be changed to avoid application programs which use fixed interrupts for internal communications. The interrupt vector is used as a pointer (4-bytes little endian) to the address of a possible interrupt handler. If the null-terminated ASCII text string ""PKT DRVR"" (2 spaces in the middle!) is found within the first 12-bytes -- more specifically in bytes 3 through 11 -- immediately following the entry point then a driver has been located. Packet drivers can implement many different network interfaces, including Ethernet, Token Ring, RS-232, Arcnet, and X.25. Functions Drivers WinPKT is a driver that enables use of packet drivers under Microsoft Windows that moves around applications in memory. W3C507 is a DLL to packet driver for the Microsoft Windows environment. Support for Ethernet alike network interface over (using 8250 UART), CSLIP, , IPX, Token Ring, LocalTalk, ARCNET. See also Crynwr Collection - alternative free packet driver collection Network Driver Interface Specification (NDIS) - developed by Microsoft and 3Com, free wrappers Open Data-Link Interface (ODI) - developed by Apple and Novell Universal Network Device Interface (UNDI) - used by Intel PXE Uniform Driver Interface (UDI) - defunct Preboot Execution Environment - network boot by Intel, widespread" https://en.wikipedia.org/wiki/Sclerobiont,"Sclerobionts are collectively known as organisms living in or on any kind of hard substrate (Taylor and Wilson, 2003). A few examples of sclerobionts include Entobia borings, Gastrochaenolites borings, Talpina borings, serpulids, encrusting oysters, encrusting foraminiferans, Stomatopora bryozoans, and “Berenicea” bryozoans. See also Bioerosion" https://en.wikipedia.org/wiki/Proof%20by%20exhaustion,"Proof by exhaustion, also known as proof by cases, proof by case analysis, complete induction or the brute force method, is a method of mathematical proof in which the statement to be proved is split into a finite number of cases or sets of equivalent cases, and where each type of case is checked to see if the proposition in question holds. This is a method of direct proof. A proof by exhaustion typically contains two stages: A proof that the set of cases is exhaustive; i.e., that each instance of the statement to be proved matches the conditions of (at least) one of the cases. A proof of each of the cases. 
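Because n³ mod 9 depends only on n mod 9, the perfect-cube claim worked through below can be settled by exhausting just nine cases; the following short Python script (an illustrative sketch, not part of the original text) performs exactly that finite check.

# Proof by exhaustion, mechanised: every perfect cube is congruent to 0, 1 or 8 modulo 9.
# Since (n mod 9) determines (n**3 mod 9), checking the residues 0..8 covers every integer n.
allowed = {0, 1, 8}
for r in range(9):
    cube_residue = (r ** 3) % 9
    print(f"n ≡ {r} (mod 9)  =>  n³ ≡ {cube_residue} (mod 9)")
    assert cube_residue in allowed
print("All nine cases hold, so the statement holds for every integer.")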
The prevalence of digital computers has greatly increased the convenience of using the method of exhaustion (e.g., the first computer-assisted proof of four color theorem in 1976), though such approaches can also be challenged on the basis of mathematical elegance. Expert systems can be used to arrive at answers to many of the questions posed to them. In theory, the proof by exhaustion method can be used whenever the number of cases is finite. However, because most mathematical sets are infinite, this method is rarely used to derive general mathematical results. In the Curry–Howard isomorphism, proof by exhaustion and case analysis are related to ML-style pattern matching. Example Proof by exhaustion can be used to prove that if an integer is a perfect cube, then it must be either a multiple of 9, 1 more than a multiple of 9, or 1 less than a multiple of 9. Proof: Each perfect cube is the cube of some integer n, where n is either a multiple of 3, 1 more than a multiple of 3, or 1 less than a multiple of 3. So these three cases are exhaustive: Case 1: If n = 3p, then n3 = 27p3, which is a multiple of 9. Case 2: If n = 3p + 1, then n3 = 27p3 + 27p2 + 9p + 1, which is 1 more than a multiple of 9. For instance, if n = 4 then n3 = 64 = 9×7 + 1. Case 3: If n = 3p − 1, then n3 = 27p3 − 27p2 + 9p − 1, which is 1 less than a multiple of 9. For instance, if n = 5 th" https://en.wikipedia.org/wiki/Northbound%20interface,"In computer networking and computer architecture, a northbound interface of a component is an interface that allows the component to communicate with a higher level component, using the latter component's southbound interface. The northbound interface conceptualizes the lower level details (e.g., data or functions) used by, or in, the component, allowing the component to interface with higher level layers. In architectural overviews, the northbound interface is normally drawn at the top of the component it is defined in; hence the name northbound interface. A southbound interface decomposes concepts in the technical details, mostly specific to a single component of the architecture. Southbound interfaces are drawn at the bottom of an architectural overview. Typical use A northbound interface is typically an output-only interface (as opposed to one that accepts user input) found in carrier-grade network and telecommunications network elements. The languages or protocols commonly used include SNMP and TL1. For example, a device that is capable of sending out syslog messages but that is not configurable by the user is said to implement a northbound interface. Other examples include SMASH, IPMI, WSMAN, and SOAP. The term is also important for software-defined networking (SDN), to facilitate communication between the physical devices, the SDN software and applications running on the network." https://en.wikipedia.org/wiki/Buchdahl%27s%20theorem,"In general relativity, Buchdahl's theorem, named after Hans Adolf Buchdahl, makes more precise the notion that there is a maximal sustainable density for ordinary gravitating matter. It gives an inequality between the mass and radius that must be satisfied for static, spherically symmetric matter configurations under certain conditions. In particular, for areal radius , the mass must satisfy where is the gravitational constant and is the speed of light. This inequality is often referred to as Buchdahl's bound. 
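Written out with the symbols that the statement above refers to (mass M, areal radius R, gravitational constant G, speed of light c), the bound reads:

\[
M \le \frac{4 R c^{2}}{9 G},
\qquad\text{equivalently}\qquad
\frac{2 G M}{R c^{2}} \le \frac{8}{9}.
\]

Rearranged as R ≥ (9/8)(2GM/c²), it is immediate that the radius at the Buchdahl bound exceeds the Schwarzschild radius 2GM/c², as noted just below.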
The bound has historically also been called Schwarzschild's limit as it was first noted by Karl Schwarzschild to exist in the special case of a constant density fluid. However, this terminology should not be confused with the Schwarzschild radius which is notably smaller than the radius at the Buchdahl bound. Theorem Given a static, spherically symmetric solution to the Einstein equations (without cosmological constant) with matter confined to areal radius that behaves as a perfect fluid with a density that does not increase outwards. (An areal radius corresponds to a sphere of surface area . In curved spacetime the proper radius of such a sphere is not necessarily .) Assumes in addition that the density and pressure cannot be negative. The mass of this solution must satisfy For his proof of the theorem, Buchdahl uses the Tolman-Oppenheimer-Volkoff (TOV) equation. Significance The Buchdahl theorem is useful when looking for alternatives to black holes. Such attempts are often inspired by the information paradox; a way to explain (part of) the dark matter; or to criticize that observations of black holes are based on excluding known astrophysical alternatives (such as neutron stars) rather than direct evidence. However, to provide a viable alternative it is sometimes needed that the object should be extremely compact and in particular violate the Buchdahl inequality. This implies that one of the assumptions of Buchdahl's theorem must be invalid. A " https://en.wikipedia.org/wiki/List%20of%20scientific%20publications%20by%20Albert%20Einstein,"Albert Einstein (1879–1955) was a renowned theoretical physicist of the 20th century, best known for his theories of special relativity and general relativity. He also made important contributions to statistical mechanics, especially his treatment of Brownian motion, his resolution of the paradox of specific heats, and his connection of fluctuations and dissipation. Despite his reservations about its interpretation, Einstein also made seminal contributions to quantum mechanics and, indirectly, quantum field theory, primarily through his theoretical studies of the photon. Einstein's scientific publications are listed below in four tables: journal articles, book chapters, books and authorized translations. Each publication is indexed in the first column by its number in the Schilpp bibliography (Albert Einstein: Philosopher–Scientist, pp. 694–730) and by its article number in Einstein's Collected Papers. Complete references for these two bibliographies may be found below in the Bibliography section. The Schilpp numbers are used for cross-referencing in the Notes (the final column of each table), since they cover a greater time period of Einstein's life at present. The English translations of titles are generally taken from the published volumes of the Collected Papers. For some publications, however, such official translations are not available; unofficial translations are indicated with a § superscript. Although the tables are presented in chronological order by default, each table can be re-arranged in alphabetical order for any column by the reader clicking on the arrows at the top of that column. For illustration, to re-order a table by subject—e.g., to group together articles that pertain to ""General relativity"" or ""Specific heats""—one need only click on the arrows in the ""Classification and Notes"" columns. 
To print out the re-sorted table, one may print it directly by using the web-browser Print option; the ""Printable version"" link at the left gives o" https://en.wikipedia.org/wiki/Computer%20architecture%20simulator,"A computer architecture simulator is a program that simulates the execution of computer architecture. Computer architecture simulators are used for the following purposes: Lowering cost by evaluating hardware designs without building physical hardware systems. Enabling access to unobtainable hardware. Increasing the precision and volume of computer performance data. Introducing abilities that are not normally possible on real hardware such as running code backwards when an error is detected or running in faster-than-real time. Categories Computer architecture simulators can be classified into many different categories depending on the context. Scope: Microarchitecture simulators model the microprocessor and its components. Full-system simulators also model the processor, memory systems, and I/O devices. Detail: Functional simulators, such as instruction set simulators, achieve the same function as modeled components. They can be simulated faster if timing is not considered. Timing simulators are functional simulators that also reproduce timing. Timing simulators can be further categorized into digital cycle-accurate and analog sub-cycle simulators. Workload: Trace-driven simulators (also called event-driven simulators) react to pre-recorded streams of instructions with some fixed input. Execution-driven simulators allow dynamic change of instructions to be executed depending on different input data. Full-system simulators A full-system simulator is execution-driven architecture simulation at such a level of detail that complete software stacks from real systems can run on the simulator without any modification. A full system simulator provides virtual hardware that is independent of the nature of the host computer. The full-system model typically includes processor cores, peripheral devices, memories, interconnection buses, and network connections. Emulators are full system simulators that imitate obsolete hardware instead of under development hardware. " https://en.wikipedia.org/wiki/Valencia%20Koomson,"Valencia Joyner Koomson is an associate professor in the Department of Electrical and Computer Engineering and adjunct professor in the Department of Computer Science at the Tufts University School of Engineering. Koomson is also the principal investigator for the Advanced Integrated Circuits and Systems Lab at Tufts University. Background Koomson was born in Washington, DC, and graduated from Benjamin Banneker Academic High School. Her parents, Otis and Vernese Joyner, moved to Washington DC during the Great Migration after living for years as sharecroppers in Wilson County, North Carolina. Her family history can be traced back to the antebellum period. Her oldest known relative is Hagar Atkinson, an enslaved African woman whose name is recorded in the will of a plantation owner in Johnston County, North Carolina. Career Koomson attended the Massachusetts Institute of Technology, graduating with a BS in Electrical Engineering and Computer Science in 1998 and a Masters of Engineering in 1999. Koomson subsequently earned her Master of Philosophy from the University of Cambridge in 2000, followed by her PhD in Electrical Engineering from the same institution in 2003. 
Koomson was an adjunct professor at Howard University from 2004 to 2005, and during that period was a Senior Research Engineer at the University of Southern California's Information Sciences Institute (USC/ISI). She was a Visiting Professor at Rensselaer Polytechnic Institute and Boston University in 2008 and 2013, respectively. Koomson joined Tufts University in 2005 as an assistant professor, and became an associate professor in 2011. In 2020, Koomson was named an MLK Visiting Professor at MIT for the academic year 2020/2021. Research Koomson's research lies at the intersection of biology, medicine, and electrical engineering. Her interests are in nanoelectronic circuits, systems for wearable and implantable medical devices, semiconductors, and advanced nano-/microfluidic systems to probe int" https://en.wikipedia.org/wiki/Trademark%20%28computer%20security%29,"A Trademark in computer security is a contract between code that verifies security properties of an object and code that requires that an object have certain security properties. As such it is useful in ensuring secure information flow. In object-oriented languages, trademarking is analogous to signing of data but can often be implemented without cryptography. Operations A trademark has two operations: ApplyTrademark!(object) This operation is analogous to the private key in a digital signature process, so must not be exposed to untrusted code. It should only be applied to immutable objects, and makes sure that when VerifyTrademark? is called on the same value that it returns true. VerifyTrademark?(object) This operation is analogous to the public key in a digital signature process, so can be exposed to untrusted code. Returns true if-and-only-if, ApplyTrademark! has been called with the given object. Relationship to taint checking Trademarking is the inverse of taint checking. Whereas taint checking is a black-listing approach that says that certain objects should not be trusted, trademarking is a white-listing approach that marks certain objects as having certain security properties. Relationship to memoization The apply trademark can be thought of as memoizing a verification process. Relationship to contract verification Sometimes a verification process does not need to be done because the fact that a value has a particular security property can be verified statically. In this case, the apply property is being used to assert that an object was produced by code that has been formally verified to only produce outputs with the particular security property. Example One way of applying a trademark in java: public class Trademark { /* Use a weak identity hash set instead if a.equals(b) && check(a) does not imply check(b). */ private final WeakHashSet trademarked = ...; public synchronized void apply(Object o) { tradem" https://en.wikipedia.org/wiki/Calculator%20input%20methods,"There are various ways in which calculators interpret keystrokes. These can be categorized into two main types: On a single-step or immediate-execution calculator, the user presses a key for each operation, calculating all the intermediate results, before the final value is shown. On an expression or formula calculator, one types in an expression and then presses a key, such as ""="" or ""Enter"", to evaluate the expression. There are various systems for typing in an expression, as described below. 
Immediate execution The immediate execution mode of operation (also known as single-step, algebraic entry system (AES) or chain calculation mode) is commonly employed on most general-purpose calculators. In most simple four-function calculators, such as the Windows calculator in Standard mode and those included with most early operating systems, each binary operation is executed as soon as the next operator is pressed, and therefore the order of operations in a mathematical expression is not taken into account. Scientific calculators, including the Scientific mode in the Windows calculator and most modern software calculators, have buttons for brackets and can take order of operation into account. Also, for unary operations, like √ or x2, the number is entered first, then the operator; this is largely because the display screens on these kinds of calculators are generally composed entirely of seven-segment characters and thus capable of displaying only numbers, not the functions associated with them. This mode of operation also makes it impossible to change the expression being input without clearing the display entirely. The first two examples have been given twice. The first version is for simple calculators, showing how it is necessary to rearrange operands in order to get the correct result. The second version is for scientific calculators, where operator precedence is observed. Different forms of operator precedence schemes exist. In the algebraic entry system with" https://en.wikipedia.org/wiki/Ship%20model%20basin,"A ship model basin is a basin or tank used to carry out hydrodynamic tests with ship models, for the purpose of designing a new (full sized) ship, or refining the design of a ship to improve the ship's performance at sea. It can also refer to the organization (often a company) that owns and operates such a facility. An engineering firm acts as a contractor to the relevant shipyards, and provides hydrodynamic model tests and numerical calculations to support the design and development of ships and offshore structures. History The eminent English engineer William Froude published a series of influential papers on ship designs for maximising stability in the 1860s. The Institution of Naval Architects eventually commissioned him to identify the most efficient hull shape. He validated his theoretical models with extensive empirical testing, using scale models for the different hull dimensions. He established a formula (now known as the Froude number) by which the results of small-scale tests could be used to predict the behaviour of full-sized hulls. He built a sequence of 3, 6 and (shown in the picture) 12 foot scale models and used them in towing trials to establish resistance and scaling laws. His experiments were later vindicated in full-scale trials conducted by the Admiralty and as a result the first ship model basin was built, at public expense, at his home in Torquay. Here he was able to combine mathematical expertise with practical experimentation to such good effect that his methods are still followed today. Inspired by Froude's successful work, shipbuilding company William Denny and Brothers completed the world's first commercial example of a ship model basin in 1883. The facility was used to test models of a variety of vessels and explored various propulsion methods, including propellers, paddles and vane wheels. Experiments were carried out on models of the Denny-Brown stabilisers and the Denny hovercraft to gauge their feasibility. 
Tank staff also carr" https://en.wikipedia.org/wiki/Frequency%20band,"A frequency band is an interval in the frequency domain, delimited by a lower frequency and an upper frequency. The term may refer to a radio band (such as wireless communication standards set by the International Telecommunication Union) or an interval of some other spectrum. The frequency range of a system is the range over which it is considered to provide satisfactory performance, such as a useful level of signal with acceptable distortion characteristics. A listing of the upper and lower frequency limits of a system is not useful without a criterion for what the range represents. Many systems are characterized by the range of frequencies to which they respond. For example: Musical instruments produce different ranges of notes within the hearing range. The electromagnetic spectrum can be divided into many different ranges such as visible light, infrared or ultraviolet radiation, radio waves, X-rays and so on, and each of these ranges can in turn be divided into smaller ranges. A radio communications signal must occupy a range of frequencies carrying most of its energy, called its bandwidth. A frequency band may represent one communication channel or be subdivided into many. Allocation of radio frequency ranges to different uses is a major function of radio spectrum allocation. See also" https://en.wikipedia.org/wiki/Classification%20theorem,"In mathematics, a classification theorem answers the classification problem ""What are the objects of a given type, up to some equivalence?"". It gives a non-redundant enumeration: each object is equivalent to exactly one class. A few issues related to classification are the following. The equivalence problem is ""given two objects, determine if they are equivalent"". A complete set of invariants, together with which invariants are realizable, solves the classification problem, and is often a step in solving it. A computable complete set of invariants (together with which invariants are realizable) solves both the classification problem and the equivalence problem. A canonical form solves the classification problem and provides more data: it not only classifies every class, but provides a distinguished (canonical) element of each class. There exist many classification theorems in mathematics, as described below. 
Geometry Classification of Euclidean plane isometries Classification theorems of surfaces Classification of two-dimensional closed manifolds Enriques–Kodaira classification of algebraic surfaces (complex dimension two, real dimension four) Nielsen–Thurston classification which characterizes homeomorphisms of a compact surface Thurston's eight model geometries, and the geometrization conjecture Berger classification Classification of Riemannian symmetric spaces Classification of 3-dimensional lens spaces Classification of manifolds Algebra Classification of finite simple groups Classification of Abelian groups Classification of Finitely generated abelian group Classification of Rank 3 permutation group Classification of 2-transitive permutation groups Artin–Wedderburn theorem — a classification theorem for semisimple rings Classification of Clifford algebras Classification of low-dimensional real Lie algebras Classification of Simple Lie algebras and groups Classification of simple complex Lie algebras Classification of simple real Lie algebras Classification of centerless simple Lie gro" https://en.wikipedia.org/wiki/Software%20configuration%20management,"In software engineering, software configuration management (SCM or S/W CM) is the task of tracking and controlling changes in the software, part of the larger cross-disciplinary field of configuration management. SCM practices include revision control and the establishment of baselines. If something goes wrong, SCM can determine the ""what, when, why and who"" of the change. If a configuration is working well, SCM can determine how to replicate it across many hosts. The acronym ""SCM"" is also expanded as source configuration management process and software change and configuration management. However, ""configuration"" is generally understood to cover changes typically made by a system administrator. Purposes The goals of SCM are generally: Configuration identification - Identifying configurations, configuration items and baselines. Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline. Configuration status accounting - Recording and reporting all the necessary information on the status of the development process. Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals. Build management - Managing the process and tools used for builds. Process management - Ensuring adherence to the organization's development process. Environment management - Managing the software and hardware that host the system. Teamwork - Facilitate team interactions related to the process. Defect tracking - Making sure every defect has traceability back to the source. With the introduction of cloud computing and DevOps the purposes of SCM tools have become merged in some cases. The SCM tools themselves have become virtual appliances that can be instantiated as v" https://en.wikipedia.org/wiki/Basics%20of%20blue%20flower%20colouration,"Blue flower colour was always associated with something unusual and desired. Blue roses especially were assumed to be a dream that cannot be realised. Blue colour in flower petals is caused by anthocyanins, which are members of flavonoid class metabolites. 
Three main classes of anthocyanin pigments can be distinguished: the cyanidin type (two hydroxyl groups in the B-ring), responsible for red coloration; the pelargonidin type (one hydroxyl group in the B-ring), responsible for orange colour; and the delphinidin type (three hydroxyl groups in the B-ring), responsible for violet/blue coloration of flowers and fruits. The main structural difference between these anthocyanin types is the number of hydroxyl groups in the B-ring. Nevertheless, in the monomeric state anthocyanins never show a blue colour at weakly acidic or neutral pH. The mechanisms of blue colour formation are complicated: in most cases the presence of delphinidin-type pigments is not sufficient, and the pH and the formation of complexes of anthocyanins with flavones and metal ions also play a major role. Mechanisms Self-association is correlated with anthocyanin concentration. At higher concentrations a shift in the absorbance maximum and an increase in colour intensity are observed. Anthocyanin molecules associate with one another, which results in a stronger and darker colour. Co-pigmentation stabilizes and protects anthocyanins within the complexes. Co-pigments are colourless or slightly yellow. Co-pigments are usually flavonoids (flavones, flavonols, flavanones, flavanols), other polyphenols, alkaloids, amino acids or organic acids. The most efficient co-pigments are flavonols like rutin or quercetin and phenolic acids like sinapic acid or ferulic acid. Association of a co-pigment with an anthocyanin causes a bathochromic effect, a shift of the absorption maximum towards longer wavelengths, and the resulting colour change from red to blue can be observed. This phenomenon is also called the bluing effect. We can" https://en.wikipedia.org/wiki/Natural%20landscape,"A natural landscape is the original landscape that exists before it is acted upon by human culture. The natural landscape and the cultural landscape are separate parts of the landscape. However, in the 21st century, landscapes that are totally untouched by human activity no longer exist, so that reference is sometimes now made to degrees of naturalness within a landscape. In Silent Spring (1962) Rachel Carson describes a roadside verge as it used to look: ""Along the roads, laurel, viburnum and alder, great ferns and wildflowers delighted the traveler’s eye through much of the year"" and then how it looks now following the use of herbicides: ""The roadsides, once so attractive, were now lined with browned and withered vegetation as though swept by fire"". Even though the landscape before it is sprayed is biologically degraded, and may well contain alien species, the concept of what might constitute a natural landscape can still be deduced from the context. The phrase ""natural landscape"" was first used in connection with landscape painting, and landscape gardening, to contrast a formal style with a more natural one, closer to nature. Alexander von Humboldt (1769 – 1859) was to further conceptualize this into the idea of a natural landscape separate from the cultural landscape. Then in 1908 geographer Otto Schlüter developed the terms original landscape (Urlandschaft) and its opposite cultural landscape (Kulturlandschaft) in an attempt to give the science of geography a subject matter that was different from the other sciences. An early use of the actual phrase ""natural landscape"" by a geographer can be found in Carl O. Sauer's paper ""The Morphology of Landscape"" (1925). 
Origins of the term The concept of a natural landscape was first developed in connection with landscape painting, though the actual term itself was first used in relation to landscape gardening. In both cases it was used to contrast a formal style with a more natural one, that is closer to nature. Chu" https://en.wikipedia.org/wiki/Storage%20organ,"A storage organ is a part of a plant specifically modified for storage of energy (generally in the form of carbohydrates) or water. Storage organs often grow underground, where they are better protected from attack by herbivores. Plants that have an underground storage organ are called geophytes in the Raunkiær plant life-form classification system. Storage organs often, but not always, act as perennating organs which enable plants to survive adverse conditions (such as cold, excessive heat, lack of light or drought). Relationship to perennating organ Storage organs may act as perennating organs ('perennating' as in perennial, meaning ""through the year"", used in the sense of continuing beyond the year and in due course lasting for multiple years). These are used by plants to survive adverse periods in the plant's life-cycle (e.g. caused by cold, excessive heat, lack of light or drought). During these periods, parts of the plant die and then when conditions become favourable again, re-growth occurs from buds in the perennating organs. For example, geophytes growing in woodland under deciduous trees (e.g. bluebells, trilliums) die back to underground storage organs during summer when tree leaf cover restricts light and water is less available. However, perennating organs need not be storage organs. After losing their leaves, deciduous trees grow them again from 'resting buds', which are the perennating organs of phanerophytes in the Raunkiær classification, but which do not specifically act as storage organs. Equally, storage organs need not be perennating organs. Many succulents have leaves adapted for water storage, which they retain in adverse conditions. Underground storage organ In common parlance, underground storage organs may be generically called roots, tubers, or bulbs, but to the botanist there is more specific technical nomenclature: True roots: Storage taproot — e.g. carrot Tuberous root or root tuber – e.g. Dahlia Modified stems: Bulb (a shor" https://en.wikipedia.org/wiki/Bioelectrospray,"Bio-electrospraying is a technology that enables the deposition of living cells on various targets with a resolution that depends on cell size and not on the jetting phenomenon. It is envisioned that ""unhealthy cells would draw a different charge at the needle from healthy ones, and could be identified by the mass spectrometer"", with tremendous implications in the health care industry. The early versions of bio-electrosprays were employed in several areas of research, most notably self-assembly of carbon nanotubes. Although the self-assembly mechanism is not clear yet, ""elucidating electrosprays as a competing nanofabrication route for forming self-assemblies with a wide range of nanomaterials in the nanoscale for top-down based bottom-up assembly of structures."" Future research may reveal important interactions between migrating cells and self-assembled nanostructures. Such nano-assemblies formed by means of this top-down approach could be explored as a bottom-up methodology for encouraging cell migration to those architectures for forming cell patterns to nano-electronics, which are a few examples, respectively. 
After initial exploration with a single protein, increasingly complex systems were studied by bio-electrosprays. These include, but are not limited to, neuronal cells, stem cells, and even whole embryos. The potential of the method was demonstrated by investigating cytogenetic and physiological changes of human lymphocyte cells as well as conducting comprehensive genetic, genomic and physiological state studies of human cells and cells of the model yeast Saccharomyces cerevisiae. See also Electrospray ionization" https://en.wikipedia.org/wiki/List%20of%20real%20analysis%20topics,"This is a list of articles that are considered real analysis topics. General topics Limits Limit of a sequence Subsequential limit – the limit of some subsequence Limit of a function (see List of limits for a list of limits of common functions) One-sided limit – either of the two limits of functions of real variables x, as x approaches a point from above or below Squeeze theorem – confirms the limit of a function via comparison with two other functions Big O notation – used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions Sequences and series (see also list of mathematical series) Arithmetic progression – a sequence of numbers such that the difference between the consecutive terms is constant Generalized arithmetic progression – a sequence of numbers such that the difference between consecutive terms can be one of several possible constants Geometric progression – a sequence of numbers such that each consecutive term is found by multiplying the previous one by a fixed non-zero number Harmonic progression – a sequence formed by taking the reciprocals of the terms of an arithmetic progression Finite sequence – see sequence Infinite sequence – see sequence Divergent sequence – see limit of a sequence or divergent series Convergent sequence – see limit of a sequence or convergent series Cauchy sequence – a sequence whose elements become arbitrarily close to each other as the sequence progresses Convergent series – a series whose sequence of partial sums converges Divergent series – a series whose sequence of partial sums diverges Power series – a series of the form Taylor series – a series of the form Maclaurin series – see Taylor series Binomial series – the Maclaurin series of the function f given by f(x) = (1 + x)^α Telescoping series Alternating series Geometric series Divergent geometric series Harmonic series Fourier series Lambert series Summation methods Ce" https://en.wikipedia.org/wiki/Imaging%20biomarker,"An imaging biomarker is a biologic feature, or biomarker detectable in an image. In medicine, an imaging biomarker is a feature of an image relevant to a patient's diagnosis. For example, a number of biomarkers are frequently used to determine risk of lung cancer. First, a simple lesion in the lung detected by X-ray, CT, or MRI can lead to the suspicion of a neoplasm. The lesion itself serves as a biomarker, but the minute details of the lesion serve as biomarkers as well, and can collectively be used to assess the risk of neoplasm. Some of the imaging biomarkers used in lung nodule assessment include size, spiculation, calcification, cavitation, location within the lung, rate of growth, and rate of metabolism. Each piece of information from the image represents a probability. Spiculation increases the probability of the lesion being cancer. A slow rate of growth indicates benignity. 
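A minimal sketch of how such image-derived probabilities might be combined is given below; the likelihood ratios are purely hypothetical numbers chosen to show the mechanics, not values from the article or from clinical practice.

```python
# Toy illustration: combining imaging-biomarker evidence with Bayes' rule in odds form.
# All numbers are hypothetical and serve only to show how individual features shift a probability.

def update_probability(prior_prob, likelihood_ratios):
    """Multiply prior odds by the likelihood ratio of each observed feature, return a probability."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical likelihood ratios: spiculation raises suspicion, slow growth lowers it.
features = {"spiculated margin": 6.0, "slow growth": 0.25}

posterior = update_probability(prior_prob=0.05, likelihood_ratios=features.values())
print(f"posterior probability of malignancy: {posterior:.3f}")
```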
These variables can be added to the patient's history, physical exam, laboratory tests, and pathology to reach a proposed diagnosis. Imaging biomarkers can be measured using several techniques, such as CT, electroencephalography, magnetoencephalography, and MRI. History Imaging biomarkers are as old as the X-ray itself. A feature of a radiograph that represent some kind of pathology was first coined ""Roentgen signs"" after Wilhelm Röntgen, the discoverer of the X-ray. As the field of medical imaging developed and expanded to include numerous imaging modalities, imaging biomarkers have grown as well, in both quantity and complexity as finally in chemical imaging. Quantitative imaging biomarkers A quantitative imaging biomarkers (QIB) is an objective characteristic derived from an in vivo image measured on a ratio or interval scale as indicators of normal biological processes, pathogenic processes or a response to a therapeutic intervention. advantage of QIB's over qualitative imaging biomarkers is that they are better suited to be used for follow-up of patients or in clinical trials" https://en.wikipedia.org/wiki/Signal,"In signal processing, a signal is a function that conveys information about a phenomenon. Any quantity that can vary over space or time can be used as a signal to share messages between observers. The IEEE Transactions on Signal Processing includes audio, video, speech, image, sonar, and radar as examples of signals. A signal may also be defined as observable change in a quantity over space or time (a time series), even if it does not carry information. In nature, signals can be actions done by an organism to alert other organisms, ranging from the release of plant chemicals to warn nearby plants of a predator, to sounds or motions made by animals to alert other animals of food. Signaling occurs in all organisms even at cellular levels, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse. Another important property of a signal is its entropy or information content. Information theory serves as the formal study of signals and their content. The information of a signal is often accompanied by noise, which primarily refers to unwanted modifications of signals, but is often extended to include unwanted signals conflicting with desired signals (crosstalk). The reduction of noise is covered in part under the heading of signal integrity. The separation of desired signals from background noise is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances. Engineering disciplines such as electrical engineering have advanced the design, study, and implementation of systems involving t" https://en.wikipedia.org/wiki/List%20of%20set%20identities%20and%20relations,"This article lists mathematical properties and laws of sets, involving the set-theoretic operations of union, intersection, and complementation and the relations of set equality and set inclusion. It also provides systematic procedures for evaluating expressions, and performing calculations, involving these operations and relations. 
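A quick way to sanity-check identities of the kind catalogued in the set-identities article above is to evaluate both sides on concrete finite sets; the sketch below verifies the two De Morgan laws, among the best-known of the named laws, using Python's built-in set type.

```python
# Check De Morgan's laws on small concrete sets.
# U plays the role of the universe set; complements are taken relative to U.
U = set(range(10))
L, M = {1, 2, 3, 4}, {3, 4, 5, 6}

def complement(s, universe=U):
    return universe - s

# (L ∪ M)ᶜ = Lᶜ ∩ Mᶜ   and   (L ∩ M)ᶜ = Lᶜ ∪ Mᶜ
assert complement(L | M) == complement(L) & complement(M)
assert complement(L & M) == complement(L) | complement(M)
print("De Morgan's laws hold on this example")
```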
The binary operations of set union () and intersection () satisfy many identities. Several of these identities or ""laws"" have well established names. Notation Throughout this article, capital letters (such as and ) will denote sets. On the left hand side of an identity, typically, will be the leftmost set, will be the middle set, and will be the rightmost set. This is to facilitate applying identities to expressions that are complicated or use the same symbols as the identity. For example, the identity may be read as: Elementary set operations For sets and define: and where the is sometimes denoted by and equals: One set is said to intersect another set if Sets that do not intersect are said to be disjoint. The power set of is the set of all subsets of and will be denoted by Universe set and complement notation The notation may be used if is a subset of some set that is understood (say from context, or because it is clearly stated what the superset is). It is emphasized that the definition of depends on context. For instance, had been declared as a subset of with the sets and not necessarily related to each other in any way, then would likely mean instead of If it is needed then unless indicated otherwise, it should be assumed that denotes the universe set, which means that all sets that are used in the formula are subsets of In particular, the complement of a set will be denoted by where unless indicated otherwise, it should be assumed that denotes the complement of in (the universe) One subset involved Assume Identity: Definition: is called a left identity element of a binary operator if fo" https://en.wikipedia.org/wiki/Mean%20inter-particle%20distance,"Mean inter-particle distance (or mean inter-particle separation) is the mean distance between microscopic particles (usually atoms or molecules) in a macroscopic body. Ambiguity From very general considerations, the mean inter-particle distance is proportional to the size of the per-particle volume , i.e., where is the particle density. However, barring a few simple cases such as the ideal gas model, precise calculations of the proportionality factor are impossible analytically. Therefore, approximate expressions are often used. One such estimate is the Wigner–Seitz radius which corresponds to the radius of a sphere having per-particle volume . Another popular definition is , corresponding to the length of the edge of the cube with the per-particle volume . The two definitions differ by a factor of approximately , so one has to exercise care if an article fails to define the parameter exactly. On the other hand, it is often used in qualitative statements where such a numeric factor is either irrelevant or plays an insignificant role, e.g., ""a potential energy ... is proportional to some power n of the inter-particle distance r"" (Virial theorem) ""the inter-particle distance is much larger than the thermal de Broglie wavelength"" (Kinetic theory) Ideal gas Nearest neighbor distribution We want to calculate the probability distribution function of the distance to the nearest neighbor (NN) particle. (The problem was first considered by Paul Hertz; for a modern derivation see, e.g.,.) Let us assume particles inside a sphere having volume , so that . Note that since the particles in the ideal gas are non-interacting, the probability of finding a particle at a certain distance from another particle is the same as the probability of finding a particle at the same distance from any other point; we shall use the center of the sphere. 
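The two estimates just mentioned, the Wigner–Seitz radius and the cube-edge length, and the numerical factor by which they differ, can be made explicit with a short calculation; this is a sketch, and the density value is an arbitrary example.

```python
import math

def wigner_seitz_radius(n):
    """Radius of a sphere whose volume equals the per-particle volume 1/n."""
    return (3.0 / (4.0 * math.pi * n)) ** (1.0 / 3.0)

def cube_edge_length(n):
    """Edge of a cube whose volume equals the per-particle volume 1/n."""
    return n ** (-1.0 / 3.0)

n = 2.5e25                                  # example number density, particles per cubic metre
r_ws = wigner_seitz_radius(n)
a = cube_edge_length(n)
print(f"Wigner-Seitz radius: {r_ws:.3e} m")
print(f"cube-edge estimate:  {a:.3e} m")
print(f"ratio a / r_ws = {a / r_ws:.3f}")   # (4*pi/3)**(1/3) ~ 1.61, independent of the density
```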
An NN particle at distance means exactly one of the particles resides at that distance while the rest particles are at larger distanc" https://en.wikipedia.org/wiki/Genetic%20pollution,"Genetic pollution is a term for uncontrolled gene flow into wild populations. It is defined as ""the dispersal of contaminated altered genes from genetically engineered organisms to natural organisms, esp. by cross-pollination"", but has come to be used in some broader ways. It is related to the population genetics concept of gene flow, and genetic rescue, which is genetic material intentionally introduced to increase the fitness of a population. It is called genetic pollution when it negatively impacts the fitness of a population, such as through outbreeding depression and the introduction of unwanted phenotypes which can lead to extinction. Conservation biologists and conservationists have used the term to describe gene flow from domestic, feral, and non-native species into wild indigenous species, which they consider undesirable. They promote awareness of the effects of introduced invasive species that may ""hybridize with native species, causing genetic pollution"". In the fields of agriculture, agroforestry and animal husbandry, genetic pollution is used to describe gene flows between genetically engineered species and wild relatives. The use of the word ""pollution"" is meant to convey the idea that mixing genetic information is bad for the environment, but because the mixing of genetic information can lead to a variety of outcomes, ""pollution"" may not always be the most accurate descriptor. Gene flow to wild population Some conservation biologists and conservationists have used genetic pollution for a number of years as a term to describe gene flow from a non-native, invasive subspecies, domestic, or genetically-engineered population to a wild indigenous population. Importance The introduction of genetic material into the gene pool of a population by human intervention can have both positive and negative effects on populations. When genetic material is intentionally introduced to increase the fitness of a population, this is called genetic rescue. When genet" https://en.wikipedia.org/wiki/List%20of%20baryons,"Baryons are composite particles made of three quarks, as opposed to mesons, which are composite particles made of one quark and one antiquark. Baryons and mesons are both hadrons, which are particles composed solely of quarks or both quarks and antiquarks. The term baryon is derived from the Greek ""βαρύς"" (barys), meaning ""heavy"", because, at the time of their naming, it was believed that baryons were characterized by having greater masses than other particles that were classed as matter. Until a few years ago, it was believed that some experiments showed the existence of pentaquarks – baryons made of four quarks and one antiquark. Prior to 2006 the particle physics community as a whole did not view the existence of pentaquarks as likely. On 13 July 2015, the LHCb collaboration at CERN reported results consistent with pentaquark states in the decay of bottom lambda baryons (Λ). Since baryons are composed of quarks, they participate in the strong interaction. Leptons, on the other hand, are not composed of quarks and as such do not participate in the strong interaction. The best known baryons are protons and neutrons, which make up most of the mass of the visible matter in the universe, whereas electrons, the other major component of atoms, are leptons. 
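The relationship between quark content and electric charge that underlies the baryon descriptions above can be spelled out in a small sketch; quark charges are given in units of the elementary charge, and the particle list is just a few familiar examples.

```python
# Electric charge of a hadron as the sum of its (anti)quark charges, in units of e.
from fractions import Fraction

QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3),
                "c": Fraction(2, 3), "b": Fraction(-1, 3), "t": Fraction(2, 3)}

def charge(quarks):
    """Quarks are space-separated; a trailing '~' marks an antiquark (charge negated)."""
    total = Fraction(0)
    for q in quarks.split():
        flavour, anti = q.rstrip("~"), q.endswith("~")
        total += -QUARK_CHARGE[flavour] if anti else QUARK_CHARGE[flavour]
    return total

print(charge("u u d"))        # proton     -> 1
print(charge("u d d"))        # neutron    -> 0
print(charge("u~ u~ d~"))     # antiproton -> -1
```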
Each baryon has a corresponding antiparticle, known as an antibaryon, in which quarks are replaced by their corresponding antiquarks. For example, a proton is made of two up quarks and one down quark, while its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark. Baryon properties These lists detail all known and predicted baryons in total angular momentum J =  and J =  configurations with positive parity. Baryons composed of one type of quark (uuu, ddd, ...) can exist in J =  configuration, but J =  is forbidden by the Pauli exclusion principle. Baryons composed of two types of quarks (uud, uus, ...) can exist in both J =  and J =  configurations. Baryons composed o" https://en.wikipedia.org/wiki/Head-related%20transfer%20function,"A head-related transfer function (HRTF), also known as anatomical transfer function (ATF), or a head shadow, is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, ear canal, density of the head, size and shape of nasal and oral cavities, all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5 kHz with a primary resonance of +17 dB at 2,700 Hz. But the response curve is more complex than a single bump, affects a broad frequency spectrum, and varies significantly from person to person. A pair of HRTFs for two ears can be used to synthesize a binaural sound that seems to come from a particular point in space. It is a transfer function, describing how a sound from a specific point will arrive at the ear (generally at the outer end of the auditory canal). Some consumer home entertainment products designed to reproduce surround sound from stereo (two-speaker) headphones use HRTFs. Some forms of HRTF processing have also been included in computer software to simulate surround sound playback from loudspeakers. Sound localization Humans have just two ears, but can locate sounds in three dimensions – in range (distance), in direction above and below (elevation), in front and to the rear, as well as to either side (azimuth). This is possible because the brain, inner ear, and the external ears (pinna) work together to make inferences about location. This ability to localize sound sources may have developed in humans and ancestors as an evolutionary necessity since the eyes can only see a fraction of the world around a viewer, and vision is hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy, regardless of the surrounding light. Humans estimate the location of a source by taking cues derived from one ear (monaura" https://en.wikipedia.org/wiki/Cockpit%20display%20system,"The Cockpit display systems (or CDS) provides the visible (and audible) portion of the Human Machine Interface (HMI) by which aircrew manage the modern Glass cockpit and thus interface with the aircraft avionics. History Prior to the 1970s, cockpits did not typically use any electronic instruments or displays (see Glass cockpit history). Improvements in computer technology, the need for enhancement of situational awareness in more complex environments, and the rapid growth of commercial air transportation, together with continued military competitiveness, led to increased levels of integration in the cockpit. 
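A minimal sketch of the binaural synthesis described in the head-related transfer function passage above: a mono signal is convolved with a left-ear and a right-ear impulse response. The impulse responses here are tiny made-up filters standing in for measured HRIRs, which are not part of the article.

```python
import numpy as np

# Mono test signal: 10 ms of a 440 Hz tone at 44.1 kHz.
fs = 44100
t = np.arange(int(0.01 * fs)) / fs
mono = np.sin(2 * np.pi * 440 * t)

# Placeholder head-related impulse responses (HRIRs) for a source off to the listener's left:
# the right ear receives a delayed, attenuated copy (interaural time and level cues).
hrir_left = np.zeros(32);  hrir_left[0:3] = [1.0, 0.3, 0.1]
hrir_right = np.zeros(32); hrir_right[28:31] = [0.6, 0.2, 0.05]   # ~0.6 ms later and quieter

# Binaural synthesis: convolve the mono source with each ear's impulse response.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)   # one column per ear
print(binaural.shape)
```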
The average transport aircraft in the mid-1970s had more than one hundred cockpit instruments and controls, and the primary flight instruments were already crowded with indicators, crossbars, and symbols, and the growing number of cockpit elements were competing for cockpit space and pilot attention. Architecture Glass cockpits routinely include high-resolution multi-color displays (often LCD displays) that present information relating to the various aircraft systems (such as flight management) in an integrated way. Integrated Modular Avionics (IMA) architecture allows for the integration of the cockpit instruments and displays at the hardware and software level to be maximized. CDS software typically uses API code to integrate with the platform (such as OpenGL to access the graphics drivers for example). This software may be written manually or with the help of COTS tools such as GL Studio, VAPS, VAPS XT or SCADE Display. Standards such as ARINC 661 specify the integration of the CDS at the software level with the aircraft system applications (called User Applications or UA). See also Acronyms and abbreviations in avionics Avionics software Integrated Modular Avionics" https://en.wikipedia.org/wiki/Food%20powder,"Food powder or powdery food is the most common format of dried solid food material that meets specific quality standards, such as moisture content, particle size, and particular morphology. Common powdery food products include milk powder, tea powder, cocoa powder, coffee powder, soybean flour, wheat flour, and chili powder. Powders are particulate discrete solid particles of size ranging from nanometres to millimetres that generally flow freely when shaken or tilted. The bulk powder properties are the combined effect of particle properties by the conversion of food products in solid state into powdery form for ease of use, processing and keeping quality. Various terms are used to indicate the particulate solids in bulk, such as powder, granules, flour and dust, though all these materials can be treated under powder category. These common terminologies are based on the size or the source of the materials. The particle size, distribution, shape and surface characteristics and the density of the powders are highly variable and depend on both the characteristics of the raw materials and processing conditions during their formations. These parameters contribute to the functional properties of powders, including flowability, packaging density, ease of handling, dust forming, mixing, compressibility and surface activity. Characteristics Microstructure Food powder may be amorphous or crystalline in their molecular level structure. Depending on the process applied, the powders can be produced in either of these forms. Powders in crystalline state possess defined molecular alignment in the long-range order, while amorphous state is disordered, more open and porous. Common powders found in crystalline states are salts, sugars and organic acids. Meanwhile, many food products such as dairy powders, fruit and vegetable powders, honey powders and hydrolysed protein powders are normally in amorphous state. The properties of food powders including their functionality and their" https://en.wikipedia.org/wiki/Legion%20%28taxonomy%29,"The legion, in biological classification, is a non-obligatory taxonomic rank within the Linnaean hierarchy sometimes used in zoology. Taxonomic rank In zoological taxonomy, the legion is: subordinate to the class superordinate to the cohort. 
consists of a group of related orders Legions may be grouped into superlegions or subdivided into sublegions, and these again into infralegions. Use in zoology Legions and their super/sub/infra groups have been employed in some classifications of birds and mammals. Full use is made of all of these (along with cohorts and supercohorts) in, for example, McKenna and Bell's classification of mammals. See also Linnaean taxonomy Mammal classification" https://en.wikipedia.org/wiki/Classical%20fluid,"Classical fluids are systems of particles which retain a definite volume, and are at sufficiently high temperatures (compared to their Fermi energy) that quantum effects can be neglected. A system of hard spheres, interacting only by hard collisions (e.g., billiards, marbles), is a model classical fluid. Such a system is well described by the Percus–Yevik equation. Common liquids, e.g., liquid air, gasoline etc., are essentially mixtures of classical fluids. Electrolytes, molten salts, salts dissolved in water, are classical charged fluids. A classical fluid when cooled undergoes a freezing transition. On heating it undergoes an evaporation transition and becomes a classical gas that obeys Boltzmann statistics. A system of charged classical particles moving in a uniform positive neutralizing background is known as a one-component plasma (OCP). This is well described by the Hyper-netted chain equation (see CHNC). An essentially very accurate way of determining the properties of classical fluids is provided by the method of molecular dynamics. An electron gas confined in a metal is not a classical fluid, whereas a very high-temperature plasma of electrons could behave as a classical fluid. Such non-classical Fermi systems, i.e., quantum fluids, can be studied using quantum Monte Carlo methods, Feynman path integral equation methods, and approximately via CHNC integral-equation methods. See also Bose–Einstein condensate Fermi liquid Many-body theory Quantum fluid" https://en.wikipedia.org/wiki/A%20Disappearing%20Number,"A Disappearing Number is a 2007 play co-written and devised by the Théâtre de Complicité company and directed and conceived by English playwright Simon McBurney. It was inspired by the collaboration during the 1910s between the pure mathematicians Srinivasa Ramanujan from India, and the Cambridge University don G.H. Hardy. It was a co-production between the UK-based theatre company Complicite and Theatre Royal, Plymouth, and Ruhrfestspiele, Wiener Festwochen, and the Holland Festival. A Disappearing Number premiered in Plymouth in March 2007, toured internationally, and played at The Barbican Centre in Autumn 2007 and 2008 and at Lincoln Center in July 2010. It was directed by Simon McBurney with music by Nitin Sawhney. The production is 110 minutes with no intermission. The piece was co-devised and written by the cast and company. The cast in order of appearance: Firdous Bamji, Saskia Reeves, David Annen, Paul Bhattacharjee, Shane Shambu, Divya Kasturi and Chetna Pandya. Plot Ramanujan first attracted Hardy's attention by writing him a letter in which he proved that where the notation indicates a Ramanujan summation. Hardy realised that this confusing presentation of the series 1 + 2 + 3 + 4 + ⋯ was an application of the Riemann zeta function with . Ramanujan's work became one of the foundations of bosonic string theory, a precursor of modern string theory. 
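The identity from Ramanujan's letter alluded to above, written in the zeta-function form that Hardy recognised, is the following; the value holds in the sense of Ramanujan summation rather than ordinary convergence, and the rendering is supplied here for illustration:

```latex
1 + 2 + 3 + 4 + \cdots \;=\; -\tfrac{1}{12} \quad (\mathfrak{R}),
\qquad\text{equivalently}\qquad
\zeta(-1) \;=\; -\tfrac{1}{12},
```

where \zeta is the Riemann zeta function.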
The play includes live tabla playing, which ""morphs seductively into pure mathematics"", as the Financial Times review put it, ""especially when … its rhythms shade into chants of number sequences reminiscent of the libretto to Philip Glass's Einstein on the Beach. One can hear the beauty of the sequences without grasping the rules that govern them."" The play has two strands of narrative and presents strong visual and physical theatre. It interweaves the passionate intellectual relationship between Hardy and the more intuitive Ramanujan, with the present-day story of Ruth, an English maths lecturer, and " https://en.wikipedia.org/wiki/Instruction%20window,"An instruction window in computer architecture refers to the set of instructions which can execute out-of-order in a speculative processor. In particular, in a conventional design, the instruction window consists of all instructions which are in the re-order buffer (ROB). In such a processor, any instruction within the instruction window can be executed when its operands are ready. Out-of-order processors derive their name because this may occur out-of-order (if operands to a younger instruction are ready before those of an older instruction). The instruction window has a finite size, and new instructions can enter the window (usually called dispatch or allocate) only when other instructions leave the window (usually called retire or commit). Instructions enter and leave the instruction window in program order, and an instruction can only leave the window when it is the oldest instruction in the window and it has been completed. Hence, the instruction window can be seen as a sliding window in which the instructions can become out-of-order. All execution within the window is speculative (i.e., side-effects are not applied outside the CPU) until it is committed in order to support asynchronous exception handling like interrupts. This paradigm is also known as restricted dataflow because instructions within the window execute in dataflow order (not necessarily in program order) but the window in which this occurs is restricted (of finite size). The instruction window is distinct from pipelining: instructions in an in-order pipeline are not in an instruction window in the conventionally understood sense, because they cannot execute out of order with respect to one another. Out-of-order processors are usually built around pipelines, but many of the pipeline stages (e.g., front-end instruction fetch and decode stages) are not considered to be part of the instruction window. See also Superscalar processor" https://en.wikipedia.org/wiki/Ergodic%20process,"In physics, statistics, econometrics and signal processing, a stochastic process is said to be in an ergodic regime if an observable's ensemble average equals the time average. In this regime, any collection of random samples from a process must represent the average statistical properties of the entire regime. Conversely, a process that is not in ergodic regime is said to be in non-ergodic regime. Specific definitions One can discuss the ergodicity of various statistics of a stochastic process. For example, a wide-sense stationary process has constant mean and autocovariance that depends only on the lag and not on time . The properties and are ensemble averages (calculated over all possible sample functions ), not time averages. The process is said to be mean-ergodic or mean-square ergodic in the first moment if the time average estimate converges in squared mean to the ensemble average as . 
Likewise, the process is said to be autocovariance-ergodic or d moment if the time average estimate converges in squared mean to the ensemble average , as . A process which is ergodic in the mean and autocovariance is sometimes called ergodic in the wide sense. Discrete-time random processes The notion of ergodicity also applies to discrete-time random processes for integer . A discrete-time random process is ergodic in mean if converges in squared mean to the ensemble average , as . Examples Ergodicity means the ensemble average equals the time average. Following are examples to illustrate this principle. Call centre Each operator in a call centre spends time alternately speaking and listening on the telephone, as well as taking breaks between calls. Each break and each call are of different length, as are the durations of each 'burst' of speaking and listening, and indeed so is the rapidity of speech at any given moment, which could each be modelled as a random process. Take N call centre operators (N should be a very large integer) and plot the" https://en.wikipedia.org/wiki/Application-specific%20integrated%20circuit,"An application-specific integrated circuit (ASIC ) is an integrated circuit (IC) chip customized for a particular use, rather than intended for general-purpose use, such as a chip designed to run in a digital voice recorder or a high-efficiency video codec. Application-specific standard product chips are intermediate between ASICs and industry standard integrated circuits like the 7400 series or the 4000 series. ASIC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology, as MOS integrated circuit chips. As feature sizes have shrunk and chip design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 logic gates to over 100 million. Modern ASICs often include entire microprocessors, memory blocks including ROM, RAM, EEPROM, flash memory and other large building blocks. Such an ASIC is often termed a SoC (system-on-chip). Designers of digital ASICs often use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs. Field-programmable gate arrays (FPGA) are the modern-day technology improvement on breadboards, meaning that they are not made to be application-specific as opposed to ASICs. Programmable logic blocks and programmable interconnects allow the same FPGA to be used in many different applications. For smaller designs or lower production volumes, FPGAs may be more cost-effective than an ASIC design, even in production. The non-recurring engineering (NRE) cost of an ASIC can run into the millions of dollars. Therefore, device manufacturers typically prefer FPGAs for prototyping and devices with low production volume and ASICs for very large production volumes where NRE costs can be amortized across many devices. History Early ASICs used gate array technology. By 1967, Ferranti and Interdesign were manufacturing early bipolar gate arrays. In 1967, Fairchild Semiconductor introduced the Micromatrix family of bipolar diode–t" https://en.wikipedia.org/wiki/List%20of%20accelerators%20in%20particle%20physics,"A list of particle accelerators used for particle physics experiments. Some early particle accelerators that more properly did nuclear physics, but existed prior to the separation of particle physics from that field, are also included. 
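Returning to the ergodic-process definitions above, the following sketch contrasts the time average of a single realization with the ensemble average over many realizations for a simple stationary process, a sinusoid with random phase; this is a standard textbook example rather than one taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
f, mean_level = 5.0, 2.0                      # signal frequency and true ensemble mean
t = np.linspace(0.0, 200.0, 200_001)          # long observation window

# One sample function of the stationary process X(t) = mean_level + sin(2*pi*f*t + phase).
phase = rng.uniform(0, 2 * np.pi)
x = mean_level + np.sin(2 * np.pi * f * t + phase)

# Time average over a single long realization (one call-centre operator watched for a long time).
time_avg = x.mean()

# Ensemble average at one fixed instant t0 over many independent realizations (many operators at once).
t0 = t[1234]
phases = rng.uniform(0, 2 * np.pi, size=10_000)
ensemble_avg = np.mean(mean_level + np.sin(2 * np.pi * f * t0 + phases))

print(f"time average     ~ {time_avg:.4f}")
print(f"ensemble average ~ {ensemble_avg:.4f}")  # both approach 2.0, as mean-ergodicity requires
```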
Although a modern accelerator complex usually has several stages of accelerators, only accelerators whose output has been used directly for experiments are listed. Early accelerators These all used single beams with fixed targets. They tended to have very briefly run, inexpensive, and unnamed experiments. Cyclotrons [1] The magnetic pole pieces and return yoke from the 60-inch cyclotron were later moved to UC Davis and incorporated into a 76-inch isochronous cyclotron which is still in use today Other early accelerator types Synchrotrons Fixed-target accelerators More modern accelerators that were also run in fixed target mode; often, they will also have been run as colliders, or accelerated particles for use in subsequently built colliders. High intensity hadron accelerators (Meson and neutron sources) Electron and low intensity hadron accelerators Colliders Electron–positron colliders Hadron colliders Electron-proton colliders Light sources Hypothetical accelerators Besides the real accelerators listed above, there are hypothetical accelerators often used as hypothetical examples or optimistic projects by particle physicists. Eloisatron (Eurasiatic Long Intersecting Storage Accelerator) was a project of INFN headed by Antonio Zichichi at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, Sicily. The center-of-mass energy was planned to be 200 TeV, and the size was planned to span parts of Europe and Asia. Fermitron was an accelerator sketched by Enrico Fermi on a notepad in the 1940s proposing an accelerator in stable orbit around the Earth. The undulator radiation collider is a design for an accelerator with a center-of-mass energy around the GUT scale. It would be light-weeks across a" https://en.wikipedia.org/wiki/Seccomp,"seccomp (short for secure computing mode) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a ""secure"" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors. Should it attempt any other system calls, the kernel will either just log the event or terminate the process with SIGKILL or SIGSYS. In this sense, it does not virtualize the system's resources but isolates the process from them entirely. seccomp mode is enabled via the system call using the PR_SET_SECCOMP argument, or (since Linux kernel 3.17) via the system call. seccomp mode used to be enabled by writing to a file, /proc/self/seccomp, but this method was removed in favor of prctl(). In some kernel versions, seccomp disables the RDTSC x86 instruction, which returns the number of elapsed processor cycles since power-on, used for high-precision timing. seccomp-bpf is an extension to seccomp that allows filtering of system calls using a configurable policy implemented using Berkeley Packet Filter rules. It is used by OpenSSH and vsftpd as well as the Google Chrome/Chromium web browsers on ChromeOS and Linux. (In this regard seccomp-bpf achieves similar functionality, but with more flexibility and higher performance, to the older systrace—which seems to be no longer supported for Linux.) Some consider seccomp comparable to OpenBSD pledge(2) and FreeBSD capsicum(4). History seccomp was first devised by Andrea Arcangeli in January 2005 for use in public grid computing and was originally intended as a means of safely running untrusted compute-bound programs. 
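A small, non-invasive way to observe seccomp from user space is to read the mode the kernel reports for the current process; this sketch assumes a Linux kernel recent enough (3.8 or later) to expose the Seccomp field in /proc, and it only inspects the mode rather than enabling it.

```python
# Report the seccomp mode of the current process on Linux.
# 0 = disabled, 1 = strict mode, 2 = filter (seccomp-bpf) mode.
# Enabling strict mode itself would be done with prctl(PR_SET_SECCOMP, ...) as described above,
# after which only read(), write(), exit() and sigreturn() remain available to the process.
MODES = {"0": "disabled", "1": "strict", "2": "filter (seccomp-bpf)"}

with open("/proc/self/status") as status:
    for line in status:
        if line.startswith("Seccomp:"):
            mode = line.split()[1]
            print(f"seccomp mode {mode}: {MODES.get(mode, 'unknown')}")
            break
    else:
        print("Seccomp field not present (kernel too old or not Linux)")
```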
It was merged into the Linux kernel mainline in kernel version 2.6.12, which was released on March 8, 2005. Software using seccomp or seccomp-bpf Android uses a seccomp-bpf filter in the zygote since Android 8.0 Oreo. systemd's sandboxing options are based on seccomp. QEMU, the Quick Emulator, the core component to the " https://en.wikipedia.org/wiki/Beta%20encoder,"A beta encoder is an analog-to-digital conversion (A/D) system in which a real number in the unit interval is represented by a finite representation of a sequence in base beta, with beta being a real number between 1 and 2. Beta encoders are an alternative to traditional approaches to pulse-code modulation. As a form of non-integer representation, beta encoding contrasts with traditional approaches to binary quantization, in which each value is mapped to the first N bits of its base-2 expansion. Rather than using base 2, beta encoders use base beta as a beta-expansion. In practice, beta encoders have attempted to exploit the redundancy provided by the non-uniqueness of the expansion in base beta to produce more robust results. An early beta encoder, the Golden ratio encoder used the golden ratio base for its value of beta, but was susceptible to hardware errors. Although integrator leaks in hardware elements make some beta encoders imprecise, specific algorithms can be used to provide exponentially accurate approximations for the value of beta, despite the imprecise results provided by some circuit components. An alternative design called the negative beta encoder (called so due to the negative eigenvalue of the transition probability matrix) has been proposed to further reduce the quantization error. See also Pulse-code modulation Quantization (signal processing) Sampling (signal processing)" https://en.wikipedia.org/wiki/Mathematics%20of%20paper%20folding,"The discipline of origami or paper folding has received a considerable amount of mathematical study. Fields of interest include a given paper model's flat-foldability (whether the model can be flattened without damaging it), and the use of paper folds to solve up-to cubic mathematical equations. Computational origami is a recent branch of computer science that is concerned with studying algorithms that solve paper-folding problems. The field of computational origami has also grown significantly since its inception in the 1990s with Robert Lang's TreeMaker algorithm to assist in the precise folding of bases. Computational origami results either address origami design or origami foldability. In origami design problems, the goal is to design an object that can be folded out of paper given a specific target configuration. In origami foldability problems, the goal is to fold something using the creases of an initial configuration. Results in origami design problems have been more accessible than in origami foldability problems. History In 1893, Indian civil servant T. Sundara Row published Geometric Exercises in Paper Folding which used paper folding to demonstrate proofs of geometrical constructions. This work was inspired by the use of origami in the kindergarten system. Row demonstrated an approximate trisection of angles and implied construction of a cube root was impossible. In 1922, Harry Houdini published ""Houdini's Paper Magic,"" which described origami techniques that drew informally from mathematical approaches that were later formalized. In 1936 Margharita P. 
Beloch showed that use of the 'Beloch fold', later used in the sixth of the Huzita–Hatori axioms, allowed the general cubic equation to be solved using origami. In 1949, R C Yeates' book ""Geometric Methods"" described three allowed constructions corresponding to the first, second, and fifth of the Huzita–Hatori axioms. The Yoshizawa–Randlett system of instruction by diagram was introduced in 1961. I" https://en.wikipedia.org/wiki/Floral%20formula,"A floral formula is a notation for representing the structure of particular types of flowers. Such notations use numbers, letters and various symbols to convey significant information in a compact form. They may represent the floral form of a particular species, or may be generalized to characterize higher taxa, usually giving ranges of numbers of organs. Floral formulae are one of the two ways of describing flower structure developed during the 19th century, the other being floral diagrams. The format of floral formulae differs according to the tastes of particular authors and periods, yet they tend to convey the same information. A floral formula is often used along with a floral diagram. History Floral formulae were developed at the beginning of the 19th century. The first authors using them were Cassel (1820) who first devised lists of integers to denote numbers of parts in named whorls; and Martius (1828). Grisebach (1854) used 4-integer series to represent the 4 whorls of floral parts in his textbook to describe characteristics of floral families, stating numbers of different organs separated by commas and highlighting fusion. Sachs (1873) used them together with floral diagrams, he noted their advantage of being composed of ""ordinary typeface"". Although Eichler widely used floral diagrams in his Blüthendiagramme, he used floral formulae sparingly, mainly for families with simple flowers. Sattler's Organogenesis of Flowers (1973) takes advantage of floral formulae and diagrams to describe the ontogeny of 50 plant species. Newer books containing formulae include Plant Systematics by Judd et al. (2002) and Simpson (2010). Prenner et al. devised an extension of the existing model to broaden the descriptive capability of the formula and argued that formulae should be included in formal taxonomic descriptions. Ronse De Craene (2010) partially utilized their way of writing the formulae in his book Floral Diagrams. Contained information Organ numbers and fus" https://en.wikipedia.org/wiki/Bode%20plot,"In electrical engineering and control theory, a Bode plot is a graph of the frequency response of a system. It is usually a combination of a Bode magnitude plot, expressing the magnitude (usually in decibels) of the frequency response, and a Bode phase plot, expressing the phase shift. As originally conceived by Hendrik Wade Bode in the 1930s, the plot is an asymptotic approximation of the frequency response, using straight line segments. Overview Among his several important contributions to circuit theory and control theory, engineer Hendrik Wade Bode, while working at Bell Labs in the 1930s, devised a simple but accurate method for graphing gain and phase-shift plots. These bear his name, Bode gain plot and Bode phase plot. ""Bode"" is often pronounced , although the Dutch pronunciation is Bo-duh. (). Bode was faced with the problem of designing stable amplifiers with feedback for use in telephone networks. 
He developed the graphical design technique of the Bode plots to show the gain margin and phase margin required to maintain stability under variations in circuit characteristics caused during manufacture or during operation. The principles developed were applied to design problems of servomechanisms and other feedback control systems. The Bode plot is an example of analysis in the frequency domain. Definition The Bode plot for a linear, time-invariant system with transfer function ( being the complex frequency in the Laplace domain) consists of a magnitude plot and a phase plot. The Bode magnitude plot is the graph of the function of frequency (with being the imaginary unit). The -axis of the magnitude plot is logarithmic and the magnitude is given in decibels, i.e., a value for the magnitude is plotted on the axis at . The Bode phase plot is the graph of the phase, commonly expressed in degrees, of the transfer function as a function of . The phase is plotted on the same logarithmic -axis as the magnitude plot, but the value for the phase is pl" https://en.wikipedia.org/wiki/RF%20CMOS,"RF CMOS is a metal–oxide–semiconductor (MOS) integrated circuit (IC) technology that integrates radio-frequency (RF), analog and digital electronics on a mixed-signal CMOS (complementary MOS) RF circuit chip. It is widely used in modern wireless telecommunications, such as cellular networks, Bluetooth, Wi-Fi, GPS receivers, broadcasting, vehicular communication systems, and the radio transceivers in all modern mobile phones and wireless networking devices. RF CMOS technology was pioneered by Pakistani engineer Asad Ali Abidi at UCLA during the late 1980s to early 1990s, and helped bring about the wireless revolution with the introduction of digital signal processing in wireless communications. The development and design of RF CMOS devices was enabled by van der Ziel's FET RF noise model, which was published in the early 1960s and remained largely forgotten until the 1990s. History Pakistani engineer Asad Ali Abidi, while working at Bell Labs and then UCLA during the 1980s1990s, pioneered radio research in metal–oxide–semiconductor (MOS) technology and made seminal contributions to radio architecture based on complementary MOS (CMOS) switched-capacitor (SC) technology. In the early 1980s, while working at Bell, he worked on the development of sub-micron MOSFET (MOS field-effect transistor) VLSI (very large-scale integration) technology, and demonstrated the potential of sub-micron NMOS integrated circuit (IC) technology in high-speed communication circuits. Abidi's work was initially met with skepticism from proponents of GaAs and bipolar junction transistors, the dominant technologies for high-speed communication circuits at the time. In 1985 he joined the University of California, Los Angeles (UCLA), where he pioneered RF CMOS technology during the late 1980s to early 1990s. His work changed the way in which RF circuits would be designed, away from discrete bipolar transistors and towards CMOS integrated circuits. Abidi was researching analog CMOS circuits for s" https://en.wikipedia.org/wiki/GreenPAK,"GreenPAK™ is a Renesas Electronics' family of mixed-signal integrated circuits and development tools. GreenPAK circuits are classified as configurable mixed-signal ICs. This category is characterized by analog and digital blocks that can be configured through programmable non-volatile memory. These devices also have a ""Connection Matrix"", which supports routing signals between the various blocks. 
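The magnitude and phase quantities defined in the Bode plot passage above can be computed directly; the sketch below evaluates them for an assumed first-order low-pass transfer function H(s) = 1 / (1 + s/wc), which is an illustrative choice rather than an example from the article.

```python
import numpy as np

# Bode data for H(s) = 1 / (1 + s/wc), evaluated on the imaginary axis s = j*omega.
wc = 100.0                                   # corner (cut-off) angular frequency, rad/s
omega = np.logspace(0, 4, 9)                 # logarithmic frequency axis, as on a Bode plot

H = 1.0 / (1.0 + 1j * omega / wc)
magnitude_db = 20.0 * np.log10(np.abs(H))    # magnitude in decibels
phase_deg = np.degrees(np.angle(H))          # phase in degrees

for w, m, p in zip(omega, magnitude_db, phase_deg):
    print(f"omega = {w:10.1f} rad/s   |H| = {m:7.2f} dB   phase = {p:7.2f} deg")
# At omega = wc the magnitude is about -3 dB and the phase about -45 degrees,
# consistent with the straight-line asymptotic approximation described above.
```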
These devices can include multiple components within a single IC. Also, the company developed the Go Configure™ Software Hub for IC design creation, chip emulation, and programming. History The GreenPAK technology was developed by Silego Technology Inc. The company was established in 2001. The GreenPAK product line was introduced in April 2010. Then, the first generation of ICs was released. Later, Silego was acquired by Dialog Semiconductor PLC in 2017. Officially, the trademark for the GreenPAK title was registered in 2019. Currently, in the market, the sixth generation of GreenPAK ICs was already released. Over 6 billion GreenPAK ICs have been shipped to Dialog's customers all over the world. In 2021, Dialog was acquired by Renesas Electronics, therefore the GreenPAK technology is currently officially owned by Renesas. GreenPAK Integrated Circuits There are a few categories of ICs developed within the GreenPAK technology: Dual Supply GreenPAK – provides level translation from higher or lower voltage domains. GreenPAK with Power Switches – includes single and dual power switches up to 2A. GreenPAK with Asynchronous State Machine – allows developing customized state machine IC designs. GreenPAK with Low Power Dropout Regulators – enables a user to divide power loads using the unique concept of ""Flexible Power Islands"" devoted to wearable devices. GreenPAK with In-System Programmability – can be reprogrammed up to 1000 times using the I2C serial interface. Automotive GreenPAK – allows multiple system functions in a single IC used for automotive circuit designs." https://en.wikipedia.org/wiki/Network-neutral%20data%20center,"A network-neutral data center (or carrier-neutral data center) is a data center (or carrier hotel) which allows interconnection between multiple telecommunication carriers and/or colocation providers. Network-neutral data centers exist all over the world and vary in size and power. While some data centers are owned and operated by a telecommunications or Internet service provider, the majority of network-neutral data centers are operated by a third party who has little or no part in providing Internet service to the end-user. This encourages competition and diversity as a server in a colocation centre can have one provider, multiple providers or only connect back to the headquarters of the company who owns the server. It has become increasingly more common for telecommunication operators to provide network neutral data centers. One benefit of hosting in a network-neutral data center is the ability to switch providers without physically moving the server to another location." https://en.wikipedia.org/wiki/System%20requirements%20specification,"A System Requirements Specification (SyRS) (abbreviated SysRS to be distinct from a software requirements specification (SRS)) is a structured collection of information that embodies the requirements of a system. A business analyst (BA), sometimes titled system analyst, is responsible for analyzing the business needs of their clients and stakeholders to help identify business problems and propose solutions. Within the systems development life cycle domain, the BA typically performs a liaison function between the business side of an enterprise and the information technology department or external service providers. 
See also Business analysis Business process reengineering Business requirements Concept of operations Data modeling Information technology Process modeling Requirement Requirements analysis Software requirements specification Systems analysis Use case" https://en.wikipedia.org/wiki/Wahoo%20Fitness,"Wahoo Fitness is a fitness technology company based in Atlanta. Its CEO is Mike Saturnia. Founded in 2009 by Chip Hawkins, Wahoo Fitness has offices in London, Berlin, Tokyo, Boulder and Brisbane. Wahoo's portfolio of cycling industry products includes the KICKR family of Indoor Cycling Trainers and Accessories, the ELEMNT family of GPS Cycling Computers and sport watches, the TICKR family of Heart Rate Monitors, SPEEDPLAY Advanced Road Pedal systems and the Wahoo SYSTM Training App. Main products Indoor trainers & smart bikes KICKR Direct Drive Smart Trainer KICKR CORE Direct Drive Smart Trainer KICKR SNAP Wheel-On Smart Trainer KICKR BIKE Indoor Smart Bike KICKR ROLLR Smart Trainer GPS cycling computers & smart watches ELEMNT ROAM GPS Bike Computer ELEMNT BOLT GPS Bike Computer ELEMNT RIVAL GPS Multisport Watch Heart rate monitors TICKR Heart Rate Monitor TICKR FIT Heart Rate Armband TICKR X Heart Rate Monitor Cycling sensors RPM Cadence Sensor RPM Speed Sensor RPM Sensor Bundle BLUE SC Speed and Cadence Sensor Indoor training accessories KICKR HEADWIND Smart Fan KICKR CLIMB Indoor Grade Simulator KICKR AXIS Action Feet KICKR Indoor Training Desk KICKR Floormat Pedals POWRLINK ZERO Power Pedal System SPEEDPLAY AERO Stainless Steel Aerodynamic Road Pedals SPEEDPLAY NANO Titanium Road Pedals SPEEDPLAY ZERO Stainless Steel Road Pedals SPEEDPLAY COMP Chromoly Road Pedals Standard Tension Cleat Easy Tension Cleat Training Wahoo SYSTM Training App Acquisitions September 2019 – Pedal manufacturer, Speedplay July 2019 – Indoor training platform, The Sufferfest, later rebranded to Wahoo SYSTM April 2022 – Indoor training platform, RGT (Road Grand Tour) later rebranded to Wahoo RGT Funding and investment 2010 – Private Investment July 2018 – Norwest Equity Partners Q3 2021 – Rhône Group May 17, 2023 – Wahoo announces Wahoo Fitness Founder Buys Company Back from Banks Team sponsorship Wahoo is an official sponsor for: Women's cy" https://en.wikipedia.org/wiki/Ultra-low-voltage%20processor,"Ultra-low-voltage processors (ULV processors) are a class of microprocessor that are deliberately underclocked to consume less power (typically 17 W or below), at the expense of performance. These processors are commonly used in subnotebooks, netbooks, ultraportables and embedded devices, where low heat dissipation and long battery life are required. 
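A rough sense of why the lower clocks and voltages of ULV parts cut power comes from the standard first-order model of CMOS dynamic power, P = C·V²·f; the sketch below compares two hypothetical operating points, and the numbers are illustrative rather than specifications of any processor listed below.

```python
# First-order CMOS dynamic-power model: P = C * V^2 * f, with switched capacitance C held constant.
def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts**2 * f_hertz

C = 1.0e-9                                   # hypothetical effective switched capacitance, farads
nominal = dynamic_power(C, 1.20, 2.4e9)      # hypothetical standard-voltage operating point
ulv = dynamic_power(C, 0.90, 1.2e9)          # hypothetical underclocked, undervolted point

print(f"nominal: {nominal:.2f} W, ULV: {ulv:.2f} W, ratio: {ulv / nominal:.2f}")
# Halving the clock and lowering the voltage by 25% cuts dynamic power to roughly 28% here;
# static (leakage) power is not captured by this simple model.
```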
Notable examples Intel Atom – Up to 2.0 GHz at 2.4 W (Z550) Intel Pentium M – Up to 1.3 GHz at 5 W (ULV 773) Intel Core 2 Solo – Up to 1.4 GHz at 5.5 W (SU3500) Intel Core Solo – Up to 1.3 GHz at 5.5 W (U1500) Intel Celeron M – Up to 1.2 GHz at 5.5 W (ULV 722) VIA Eden – Up to 1.5 GHz at 7.5 W VIA C7 – Up to 1.6 GHz at 8 W (C7-M ULV) VIA Nano – Up to 1.3 GHz at 8 W (U2250) AMD Athlon Neo – Up to 1 GHz at 8 W (Sempron 200U) AMD Geode – Up to 1 GHz at 9 W (NX 1500) Intel Core 2 Duo – Up to 1.3 GHz at 10 W (U7700) Intel Core i3/i5/i7 – Up to 1.5 GHz at 13 W (Core i7 3689Y) AMD A Series – Up to 3.2 GHz at 15 W (A10-7300P) See also Consumer Ultra-Low Voltage – a low power platform developed by Intel" https://en.wikipedia.org/wiki/Cranial%20evolutionary%20allometry,"Cranial evolutionary allometry (CREA) is a scientific theory regarding trends in the shape of mammalian skulls during the course of evolution in accordance with body size (i.e., allometry). Specifically, the theory posits that there is a propensity among closely related mammalian groups for the skulls of the smaller species to be short and those of the larger species to be long. This propensity appears to hold true for placental as well as non-placental mammals, and is highly robust. Examples of groups which exhibit this characteristic include antelopes, fruit bats, mongooses, squirrels and kangaroos as well as felids. It is believed that the reason for this trend has to do with size-related constraints on the formation and development of the mammalian skull. Facial length is one of the best known examples of heterochrony." https://en.wikipedia.org/wiki/Molecular%20gastronomy,"Molecular gastronomy is the scientific approach of cuisine from primarily the perspective of chemistry. The composition (molecular structure), properties (mass, viscosity, etc) and transformations (chemical reactions, reactant products) of an ingredient are addressed and utilized in the preparation and appreciation of the ingested products. It is a branch of food science that approaches the preparation and enjoyment of nutrition from the perspective of a scientist at the scale of atoms, molecules, and mixtures. Nicholas Kurti, Hungarian physicist, and Hervé This, at the INRA in France, coined ""Molecular and Physical Gastronomy"" in 1988. Examples Eponymous recipes New dishes named after famous scientists include: Gibbs – infusing vanilla pods in egg white with sugar, adding olive oil and then microwave cooking. Named after physicist Josiah Willard Gibbs (1839–1903). Vauquelin – using orange juice or cranberry juice with added sugar when whipping eggs to increase the viscosity and to stabilize the foam, and then microwave cooking. Named after Nicolas Vauquelin (1763–1829), one of Lavoisier's teachers. Baumé – soaking a whole egg for a month in alcohol to create a coagulated egg. Named after the French chemist Antoine Baumé (1728–1804). History There are many branches of food science that study different aspects of food, such as safety, microbiology, preservation, chemistry, engineering, and physics. Until the advent of molecular gastronomy, there was no branch dedicated to studying the chemical processes of cooking in the home and in restaurants. Food science has primarily been concerned with industrial food production and, while the disciplines may overlap, they are considered separate areas of investigation. 
The creation of the discipline of molecular gastronomy was intended to bring together what had previously been fragmented and isolated investigations into the chemical and physical processes of cooking into an organized discipline within food science, to " https://en.wikipedia.org/wiki/SpiNNaker,"SpiNNaker (spiking neural network architecture) is a massively parallel, manycore supercomputer architecture designed by the Advanced Processor Technologies Research Group (APT) at the Department of Computer Science, University of Manchester. It is composed of 57,600 processing nodes, each with 18 ARM9 processors (specifically ARM968) and 128 MB of mobile DDR SDRAM, totalling 1,036,800 cores and over 7 TB of RAM. The computing platform is based on spiking neural networks, useful in simulating the human brain (see Human Brain Project). The completed design is housed in 10 19-inch racks, with each rack holding over 100,000 cores. The cards holding the chips are held in 5 blade enclosures, and each core emulates 1,000 neurons. In total, the goal is to simulate the behaviour of aggregates of up to a billion neurons in real time. This machine requires about 100 kW from a 240 V supply and an air-conditioned environment. SpiNNaker is being used as one component of the neuromorphic computing platform for the Human Brain Project. On 14 October 2018 the HBP announced that the million core milestone had been achieved. On 24 September 2019 HBP announced that an 8 million euro grant, that will fund construction of the second generation machine, (called SpiNNcloud) has been given to TU Dresden." https://en.wikipedia.org/wiki/List%20of%20alternative%20set%20theories,"In mathematical logic, an alternative set theory is any of the alternative mathematical approaches to the concept of set and any alternative to the de facto standard set theory described in axiomatic set theory by the axioms of Zermelo–Fraenkel set theory. Alternative set theories Alternative set theories include: Vopěnka's alternative set theory Von Neumann–Bernays–Gödel set theory Morse–Kelley set theory Tarski–Grothendieck set theory Ackermann set theory Type theory New Foundations Positive set theory Internal set theory Naive set theory S (set theory) Kripke–Platek set theory Scott–Potter set theory Constructive set theory Zermelo set theory General set theory See also Non-well-founded set theory Notes Systems of set theory Mathematics-related lists" https://en.wikipedia.org/wiki/Mathematics%20Subject%20Classification,"The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020. Structure The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used. The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. 
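As an illustrative aside (not part of the MSC documentation itself), the three-level structure just described can be checked with a short sketch in Python; the simplified pattern below accepts only the plain two-character, three-character, and five-character forms and ignores any special or wildcard codes the scheme may define.

import re

# Simplified pattern for the three MSC levels described above: two digits
# (first level), an optional capital letter (second level), and an optional
# further two digits (third level).
_MSC_PATTERN = re.compile(r'^\d{2}(?:[A-Z](?:\d{2})?)?$')

def msc_levels(code):
    # Number of classification levels used by a code (0 if it does not match).
    if not _MSC_PATTERN.match(code):
        return 0
    return {2: 1, 3: 2, 5: 3}[len(code)]

for code in ('53', '53A', '53A45', '5A345'):
    print(code, '->', msc_levels(code), 'level(s)')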
For example: 53 is the classification for differential geometry 53A is the classification for classical differential geometry 53A45 is the classification for vector and tensor analysis First level At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for ""History and Biography"", ""Mathematics Education"", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including: Fluid mechanics Quantum mechanics Geophysics Optics and electromagnetic theory All valid MSC classification codes must have at least the first-level identifier. Second level The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline. For example, for differential geometry, the top-level code is 53, and the second-level codes are: A for classical differential geometry B for local differential geometry C for glo" https://en.wikipedia.org/wiki/Hydrobiology,"Hydrobiology is the science of life and life processes in water. Much of modern hydrobiology can be viewed as a sub-discipline of ecology but the sphere of hydrobiology includes taxonomy, economic and industrial biology, morphology, and physiology. The one distinguishing aspect is that all fields relate to aquatic organisms. Most work is related to limnology and can be divided into lotic system ecology (flowing waters) and lentic system ecology (still waters). One of the significant areas of current research is eutrophication. Special attention is paid to biotic interactions in plankton assemblage including the microbial loop, the mechanism of influencing algal blooms, phosphorus load, and lake turnover. Another subject of research is the acidification of mountain lakes. Long-term studies are carried out on changes in the ionic composition of the water of rivers, lakes and reservoirs in connection with acid rain and fertilization. One goal of current research is elucidation of the basic environmental functions of the ecosystem in reservoirs, which are important for water quality management and water supply. Much of the early work of hydrobiologists concentrated on the biological processes utilized in sewage treatment and water purification especially slow sand filters. Other historically important work sought to provide biotic indices for classifying waters according to the biotic communities that they supported. This work continues to this day in Europe in the development of classification tools for assessing water bodies for the EU water framework directive. A hydrobiologist technician conducts field analysis for hydrobiology. They identify plants and living species, locate their habitat, and count them. They also identify pollutants and nuisances that can affect the aquatic fauna and flora. They take the samples and write reports of their observations for publications. A hydrobiologist engineer intervenes more in the process of the study. They define the inte" https://en.wikipedia.org/wiki/Shaheen-III,"The Shaheen-III ( ; lit. Falcon), is a supersonic and land-based medium range ballistic missile, which was test fired for the first time by military service on 9 March 2015 . 
Development began in secrecy in the early 2000s in response to India's Agni-III. The Shaheen-III was successfully tested on 9 March 2015 with a range of , which enables it to strike all of India and reach deep into the Middle East and parts of North Africa. The Shaheen-III, according to its program manager, the Strategic Plans Division, is ""18 times faster than speed of sound and designed to reach the Indian islands of Andaman and Nicobar so that India cannot use them as 'strategic bases' to establish a second strike capability"". The Shaheen program uses a solid-fuel system, in contrast to the Ghauri program, which is primarily based on a liquid-fuel system. With its successful launch, the Shaheen-III surpassed the range of the Shaheen-II, making it the longest-range missile to be launched by the military. Its deployment has not been commented on by the Pakistani military, but the Shaheen-III is currently deemed operational in the strategic command of the Pakistan Army. Overview Development history Development of a long-range space launch vehicle began in 1999 with the aim of rocket engines reaching the range of to . The Indian military had moved its strategic commands to the east, and the range of was determined by a need to be able to target the Nicobar and Andaman Islands in the eastern part of the Indian Ocean that are ""developed as strategic bases"" where ""Indian military might think of putting its weapons"", according to Shaheen-III's program manager, the Special Planning Division. With this mission, the Shaheen-III was actively pursued alongside the Ghauri-III. In 2000, the Space Research Commission concluded at least two design studies for its space launch vehicle. Initially, two earlier designs were shown at IDEAS, held in 2002, and its design was centered on develo" https://en.wikipedia.org/wiki/Nullator,"In electronics, a nullator is a theoretical linear, time-invariant one-port defined as having zero current and voltage across its terminals. Nullators are strange in the sense that they simultaneously have properties of both a short (zero voltage) and an open circuit (zero current). They are neither current nor voltage sources, yet both at the same time. Inserting a nullator in a circuit schematic imposes a mathematical constraint on how that circuit must behave, forcing the circuit itself to adopt whatever arrangements are needed to meet the condition. For example, the inputs of an ideal operational amplifier (with negative feedback) behave like a nullator, as they draw no current and have no voltage across them, and these conditions are used to analyze the circuitry surrounding the operational amplifier. A nullator is normally paired with a norator to form a nullor. Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short (zero voltage, any current) and a nullator in series with a norator is an open circuit (zero current, any voltage)." https://en.wikipedia.org/wiki/Hexagonal%20Efficient%20Coordinate%20System,"The Hexagonal Efficient Coordinate System (HECS), formerly known as Array Set Addressing (ASA), is a coordinate system for hexagonal grids that allows hexagonally sampled images to be efficiently stored and processed on digital systems. HECS represents the hexagonal grid as a set of two interleaved rectangular sub-arrays, which can be addressed by normal integer row and column coordinates and are distinguished with a single binary coordinate.
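To make the two-sub-array addressing concrete, here is a minimal sketch (not drawn from the HECS papers themselves) that maps a HECS address (a, r, c), with binary sub-array coordinate a and integer row and column r, c, onto Cartesian pixel centres; it assumes unit spacing between columns, so that sub-array a = 1 is offset by half a column horizontally and by sqrt(3)/2 vertically.

import math

def hecs_to_cartesian(a, r, c):
    # Centre of the hexagonal pixel addressed by (a, r, c) under the assumed
    # convention: sub-array 1 is shifted half a column right and sqrt(3)/2 up.
    x = c + a / 2.0
    y = math.sqrt(3.0) * (r + a / 2.0)
    return x, y

# All six nearest neighbours of (0, 0, 0) land at distance 1 under this convention.
for a, r, c in [(0, 0, 1), (0, 0, -1), (1, 0, 0), (1, 0, -1), (1, -1, 0), (1, -1, -1)]:
    x, y = hecs_to_cartesian(a, r, c)
    print((a, r, c), '->', (round(x, 3), round(y, 3)), 'distance', round(math.hypot(x, y), 3))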
Hexagonal sampling is the optimal approach for isotropically band-limited two-dimensional signals and its use provides a sampling efficiency improvement of 13.4% over rectangular sampling. The HECS system enables the use of hexagonal sampling for digital imaging applications without requiring significant additional processing to address the hexagonal array. Introduction The advantages of sampling on a hexagonal grid instead of the standard rectangular grid for digital imaging applications include: more efficient sampling, consistent connectivity, equidistant neighboring pixels, greater angular resolution, and higher circular symmetry. Sometimes, more than one of these advantages compound together, thereby increasing the efficiency by 50% in terms of computation and storage when compared to rectangular sampling. Researchers have shown that the hexagonal grid is the optimal sampling lattice and its use provides a sampling efficiency improvement of 13.4% over rectangular sampling for isotropically band-limited two-dimensional signals. Despite all of these advantages of hexagonal sampling over rectangular sampling, its application has been limited because of the lack of an efficient coordinate system. However that limitation has been removed with the recent development of HECS. Hexagonal Efficient Coordinate System Description The Hexagonal Efficient Coordinate System (HECS) is based on the idea of representing the hexagonal grid as a set of two rectangular arrays which can be individually indexed using familiar integer-value" https://en.wikipedia.org/wiki/Iodine%20in%20biology,"Iodine is an essential trace element in biological systems. It has the distinction of being the heaviest element commonly needed by living organisms as well as the second-heaviest known to be used by any form of life (only tungsten, a component of a few bacterial enzymes, has a higher atomic number and atomic weight). It is a component of biochemical pathways in organisms from all biological kingdoms, suggesting its fundamental significance throughout the evolutionary history of life. Iodine is critical to the proper functioning of the vertebrate endocrine system, and plays smaller roles in numerous other organs, including those of the digestive and reproductive systems. An adequate intake of iodine-containing compounds is important at all stages of development, especially during the fetal and neonatal periods, and diets deficient in iodine can present serious consequences for growth and metabolism. Vertebrate functions Thyroid In vertebrate biology, iodine's primary function is as a constituent of the thyroid hormones, thyroxine (T4) and triiodothyronine (T3). These molecules are made from addition-condensation products of the amino acid tyrosine, and are stored prior to release in an iodine-containing protein called thyroglobulin. T4 and T3 contain four and three atoms of iodine per molecule, respectively; iodine accounts for 65% of the molecular weight of T4 and 59% of T3. The thyroid gland actively absorbs iodine from the blood to produce and release these hormones into the blood, actions which are regulated by a second hormone, called thyroid-stimulating hormone (TSH), which is produced by the pituitary gland. Thyroid hormones are phylogenetically very old molecules which are synthesized by most multicellular organisms, and which even have some effect on unicellular organisms. Thyroid hormones play a fundamental role in biology, acting upon gene transcription mechanisms to regulate the basal metabolic rate. 
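A rough back-of-the-envelope check of the mass fractions quoted above (about 65% of T4 and 59% of T3 by weight), assuming the usual molecular formulas C15H11I4NO4 for thyroxine and C15H12I3NO4 for triiodothyronine together with standard atomic weights; small differences from the quoted figures come down to rounding.

# Approximate standard atomic weights in g/mol.
WEIGHTS = {'C': 12.011, 'H': 1.008, 'I': 126.904, 'N': 14.007, 'O': 15.999}

def iodine_fraction(formula):
    # Share of the molecular weight contributed by iodine.
    total = sum(WEIGHTS[element] * count for element, count in formula.items())
    return WEIGHTS['I'] * formula['I'] / total

T4 = {'C': 15, 'H': 11, 'I': 4, 'N': 1, 'O': 4}  # thyroxine (assumed formula)
T3 = {'C': 15, 'H': 12, 'I': 3, 'N': 1, 'O': 4}  # triiodothyronine (assumed formula)

for name, formula in (('T4', T4), ('T3', T3)):
    print(name, 'iodine mass fraction:', round(100 * iodine_fraction(formula), 1), '%')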
T3 acts on small intestine cells and adipocytes to" https://en.wikipedia.org/wiki/Long-slit%20spectroscopy,"In astronomy, long-slit spectroscopy involves observing a celestial object using a spectrograph in which the entrance aperture is an elongated, narrow slit. Light entering the slit is then refracted using a prism, diffraction grating, or grism. The dispersed light is typically recorded on a charge-coupled device detector. Velocity profiles This technique can be used to observe the rotation curve of a galaxy, as those stars moving towards the observer are blue-shifted, while stars moving away are red-shifted. Long-slit spectroscopy can also be used to observe the expansion of optically-thin nebulae. When the spectrographic slit extends over the diameter of a nebula, the lines of the velocity profile meet at the edges. In the middle of the nebula, the line splits in two, since one component is redshifted and one is blueshifted. The blueshifted component will appear brighter as it is on the ""near side"" of the nebula, and is as such subject to a smaller degree of attenuation as the light coming from the far side of the nebula. The tapered edges of the velocity profile stem from the fact that the material at the edge of the nebula is moving perpendicular to the line of sight and so its line of sight velocity will be zero relative to the rest of the nebula. Several effects can contribute to the transverse broadening of the velocity profile. Individual stars themselves rotate as they orbit, so the side approaching will be blueshifted and the side moving away will be redshifted. Stars also have random (as well as orbital) motion around the galaxy, meaning any individual star may depart significantly from the rest relative to its neighbours in the rotation curve. In spiral galaxies this random motion is small compared to the low-eccentricity orbital motion, but this is not true for an elliptical galaxy. Molecular-scale Doppler broadening will also contribute. Advantages Long-slit spectroscopy can ameliorate problems with contrast when observing structures near a very lu" https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann%20statistics,"In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible. The expected number of particles with energy for Maxwell–Boltzmann statistics is where: is the energy of the i-th energy level, is the average number of particles in the set of states with energy , is the degeneracy of energy level i, that is, the number of states with energy which may nevertheless be distinguished from each other by some other means, μ is the chemical potential, k is the Boltzmann constant, T is absolute temperature, N is the total number of particles: Z is the partition function: e is Euler's number Equivalently, the number of particles is sometimes expressed as where the index i now specifies a particular state rather than the set of all states with energy , and . History Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. 
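As a concrete illustration of the occupation numbers described above, the sketch below evaluates the standard Maxwell–Boltzmann form, in which the expected population of level i is N g_i exp(-e_i / kT) / Z with Z the partition function; the two-level system, degeneracies, particle number, and temperature are invented inputs, not values from the text.

import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def mb_populations(energies, degeneracies, n_total, temperature):
    # Expected particle numbers per energy level under Maxwell-Boltzmann statistics.
    factors = [g * math.exp(-e / (K_B * temperature))
               for e, g in zip(energies, degeneracies)]
    partition_function = sum(factors)  # Z = sum_i g_i exp(-e_i / kT)
    return [n_total * f / partition_function for f in factors]

# Hypothetical two-level system: ground state plus a level 0.02 eV higher, at 300 K.
energies = [0.0, 0.02 * 1.602e-19]  # joules
populations = mb_populations(energies, degeneracies=[1, 3], n_total=1.0e6, temperature=300.0)
print([round(p) for p in populations])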
The distribution can be derived on the ground that it maximizes the entropy of the system. Applicability Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in Maxwell–Jüttner distribution), and to other than three-dimensional spaces. Maxwell–Boltzmann statistics is often described as the statistics of ""distinguishable"" classical particles. In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in" https://en.wikipedia.org/wiki/Automated%20species%20identification,"Automated species identification is a method of making the expertise of taxonomists available to ecologists, parataxonomists and others via digital technology and artificial intelligence. Today, most automated identification systems rely on images depicting the species for the identification. Based on precisely identified images of a species, a classifier is trained. Once exposed to a sufficient amount of training data, this classifier can then identify the trained species on previously unseen images. Introduction The automated identification of biological objects such as insects (individuals) and/or groups (e.g., species, guilds, characters) has been a dream among systematists for centuries. The goal of some of the first multivariate biometric methods was to address the perennial problem of group discrimination and inter-group characterization. Despite much preliminary work in the 1950s and '60s, progress in designing and implementing practical systems for fully automated object biological identification has proven frustratingly slow. As recently as 2004 Dan Janzen updated the dream for a new audience:
The spaceship lands. He steps out. He points it around. It says 'friendly–unfriendly—edible–poisonous—safe–dangerous—living–inanimate'. On the next sweep it says 'Quercus oleoides—Homo sapiens—Spondias mombin—Solanum nigrum—Crotalus durissus—Morpho peleides—serpentine'. This has been in my head since reading science fiction in ninth grade half a century ago.
The species identification problem Janzen's preferred solution to this classic problem involved building machines to identify species from their DNA. However, recent developments in computer architectures, as well as innovations in software design, have placed the tools needed to realize Janzen's vision in the hands of the systematics and computer science community not in several years hence, but now; and not just for creating DNA barcodes, but also for identification based on " https://en.wikipedia.org/wiki/Pathology,"Pathology is the study of disease and injury. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of ""general pathology"", an area which includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, ""a pathology"" may also refer to the predicted or actual progression of particular diseases (as in the statement ""the many different forms of cancer have diverse pathologies"", in which case a more proper choice of word would be ""pathophysiologies""), and the affix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist. As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology). Pathology is a significant field in modern medical diagnosis and me" https://en.wikipedia.org/wiki/Hardware%20backdoor,"Hardware backdoors are backdoors in hardware, such as code inside hardware or firmware of computer chips. The backdoors may be directly implemented as hardware Trojans in the integrated circuit. Hardware backdoors are intended to undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods. They have also been considered for car hacking. Severity Hardware backdoors are considered to be highly problematic for several reasons. For instance, they cannot be removed by conventional means such as antivirus software. They can also circumvent other types of security, such as disk encryption. Lastly, they can also be injected during production where the user has no control. Examples Around 2008 the FBI reported that 3,500 counterfeit Cisco network components were discovered in the US with some of them having found their way into military and government facilities. 
In 2011 Jonathan Brossard demonstrated a proof-of-concept hardware backdoor called ""Rakshasa"" which can be installed by anyone with physical access to hardware. It uses coreboot to re-flash the BIOS with a SeaBIOS and iPXE benign bootkit built of legitimate, open-source tools and can fetch malware over the web at boot time. In 2012, Sergei Skorobogatov (from the University of Cambridge computer laboratory) and Woods controversially stated that they had found a backdoor in a military-grade FPGA device which could be exploited to access/modify sensitive information. It has been said that this was proven to be a software problem and not a deliberate attempt at sabotage that still brought to light the need for equipment manufacturers to ensure microchips operate as intended. In 2012 two mobile phones developed by Chinese device manufacturer ZTE were found to carry a backdoor to instantly gain root access via a password that had been hard-coded into the software. This was confirmed by security researcher Dmitri Alperovitch. U.S. sources have pointed the " https://en.wikipedia.org/wiki/Noise%20margin,"In electrical engineering, noise margin is the maximum voltage amplitude of extraneous signal that can be algebraically added to the noise-free worst-case input level without causing the output voltage to deviate from the allowable logic voltage level. It is commonly used in at least two contexts as follows: In communications system engineering, noise margin is the ratio by which the signal exceeds the minimum acceptable amount. It is normally measured in decibels. In a digital circuit, the noise margin is the amount by which the signal exceeds the threshold for a proper '0' (logic low) or '1' (logic high). For example, a digital circuit might be designed to swing between 0.0 and 1.2 volts, with anything below 0.2 volts considered a '0', and anything above 1.0 volts considered a '1'. Then the noise margin for a '0' would be the amount that a signal is below 0.2 volts, and the noise margin for a '1' would be the amount by which a signal exceeds 1.0 volt. In this case noise margins are measured as an absolute voltage, not a ratio. Noise margins for CMOS chips are usually much greater than those for TTL because the VOH min is closer to the power supply voltage and VOL max is closer to zero. Real digital inverters do not instantaneously switch from a logic high (1) to a logic low (0), there is some capacitance. While an inverter is transitioning from a logic high to low, there is an undefined region where the voltage cannot be considered high or low. This is considered a noise margin. There are two noise margins to consider: Noise margin high (NMH) and noise margin low (NML). NMH is the amount of voltage between an inverter transitioning from a logic high (1) to a logic low (0) and vice versa for NML. The equations are as follows: NMH ≡ VOH - VIH and NML ≡ VIL - VOL. Typically, in a CMOS inverter VOH will equal VDD and VOL will equal the ground potential, as mentioned above. VIH is defined as the highest input voltage at which the slope of the voltage transfer " https://en.wikipedia.org/wiki/Site%20reliability%20engineering,"Site reliability engineering (SRE) is a set of principles and practices that applies aspects of software engineering to IT infrastructure and operations. SRE claims to create highly reliable and scalable software systems. Although they are closely related, SRE is slightly different from DevOps. 
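The logic-level noise margins defined a little earlier (NMH = VOH - VIH and NML = VIL - VOL) reduce to two subtractions; the sketch below uses illustrative thresholds loosely modelled on the 0 V to 1.2 V example in the text, and the specific VOH, VOL, VIH, and VIL values are assumptions rather than data for any real logic family.

def noise_margins(v_oh, v_ol, v_ih, v_il):
    # NMH: how far a driven high level sits above the receiver's high threshold.
    # NML: how far a driven low level sits below the receiver's low threshold.
    return v_oh - v_ih, v_il - v_ol

# Illustrative values: outputs swing 0.0-1.2 V, inputs read below 0.2 V as low
# and above 1.0 V as high.
nmh, nml = noise_margins(v_oh=1.2, v_ol=0.0, v_ih=1.0, v_il=0.2)
print('NMH =', round(nmh, 2), 'V  NML =', round(nml, 2), 'V')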
History The field of site reliability engineering originated at Google with Ben Treynor Sloss, who founded a site reliability team after joining the company in 2003. In 2016, Google employed more than 1,000 site reliability engineers. After originating at Google in 2003, the concept spread into the broader software development industry, and other companies subsequently began to employ site reliability engineers. The position is more common at larger web companies, as small companies often do not operate at a scale that would require dedicated SREs. Organizations that have adopted the concept include Airbnb, Dropbox, IBM, LinkedIn, Netflix, and Wikimedia. According to a 2021 report by the DevOps Institute, 22% of organizations in a survey of 2,000 respondents had adopted the SRE model. Definition Site reliability engineering, as a job role, may be performed by individual contributors or organized in teams, responsible for a combination of the following within a broader engineering organization: System availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Site reliability engineers often have backgrounds in software engineering, system engineering, or system administration. Focuses of SRE include automation, system design, and improvements to system resilience. Site reliability engineering, as a set of principles and practices, can be performed by anyone. SRE is similar to security engineering in that everyone is expected to contribute to good security practices, but a company may decide to eventually hire staff specialists for the job. Conversely, for securing internet systems, companies may hire securit" https://en.wikipedia.org/wiki/Certified%20software%20development%20professional,"Certified Software Development Professional (CSDP) is a vendor-neutral professional certification in software engineering developed by the IEEE Computer Society for experienced software engineering professionals. This certification was offered globally since 2001 through Dec. 2014. The certification program constituted an element of the Computer Society's major efforts in the area of Software engineering professionalism, along with the IEEE-CS and ACM Software Engineering 2004 (SE2004) Undergraduate Curricula Recommendations, and The Guide to the Software Engineering Body of Knowledge (SWEBOK Guide 2004), completed two years later. As a further development of these elements, to facilitate the global portability of the software engineering certification, since 2005 through 2008 the International Standard ISO/IEC 24773:2008 ""Software engineering -- Certification of software engineering professionals -- Comparison framework"" has been developed. (Please, see an overview of this ISO/IEC JTC 1 and IEEE standardization effort in the article published by Stephen B. Seidman, CSDP. ) The standard was formulated in such a way, that it allowed to recognize the CSDP certification scheme as basically aligned with it, soon after the standard's release date, 2008-09-01. Several later revisions of the CSDP certification were undertaken with the aim of making the alignment more complete. In 2019, ISO/IEC 24773:2008 has been withdrawn and revised (by ISO/IEC 24773-1:2019 ). The certification was initially offered by the IEEE Computer Society to experienced software engineering and software development practitioners globally in 2001 in the course of the certification examination beta-testing. The CSDP certification program has been officially approved in 2002. 
After December 2014 this certification program has been discontinued, all issued certificates are recognized as valid forever. A number of new similar certifications were introduced by the IEEE Computer Society, includi" https://en.wikipedia.org/wiki/Triviality%20%28mathematics%29,"In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or an object which possesses a simple structure (e.g., groups, topological spaces). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which distinguishes from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. The judgement of whether a situation under consideration is trivial or not depends on who considers it since the situation is obviously true for someone who has sufficient knowledge or experience of it while to someone who has never seen this, it may be even hard to be understood so not trivial at all. And there can be an argument about how quickly and easily a problem should be recognized for the problem to be treated as trivial. So, triviality is not a universally agreed property in mathematics and logic. Trivial and nontrivial solutions In mathematics, the term ""trivial"" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others: Empty set: the set containing no or null members Trivial group: the mathematical group containing only the identity element Trivial ring: a ring defined on a singleton set ""Trivial"" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation where is a function whose derivative is . The trivial solution is the zero function while a nontrivial solution is the exponential function The differential equation with boundary conditions is important in mathematics and " https://en.wikipedia.org/wiki/List%20of%20the%20verified%20shortest%20people,"This list includes the shortest ever verified people in their lifetime or profession. The entries below are broken down into different categories which range from sex, to age group and occupations. Most of the sourcing is done by Guinness World Records which in the last decade has added new categories for ""mobile"" and ""non-mobile"" men and women. The world's shortest verified man is Chandra Bahadur Dangi, while for women Pauline Musters holds the record. Men Women Shortest pairs Shortest by age group This was Nisa's baby height, she later grew. This was Francis Joseph Flynn's shortest height, because he grew in height after age 16; he is not listed as one of the world's shortest men. Filed under ""Shortest woman to give birth"". Shortest by occupation Actors Artists and writers Athletes Politicians Others See also Dwarfism Pygmy peoples Caroline Crachami, a person about tall Little people (mythology) List of dwarfism organisations Dwarfs and pygmies in ancient Egypt List of tallest people" https://en.wikipedia.org/wiki/Psammon,"Psammon (from Greek ""psammos"", ""sand"") is a group of organisms inhabiting coastal sand moist — biota buried in sediments. 
Psammon is a part of water fauna, along with periphyton, plankton, nekton, and benthos. Psammon is also sometimes considered a part of benthos due to its near-bottom distribution. Psammon term is commonly used to refer to freshwater reservoirs such as lakes." https://en.wikipedia.org/wiki/Autocorrelation%20technique,"The autocorrelation technique is a method for estimating the dominating frequency in a complex signal, as well as its variance. Specifically, it calculates the first two moments of the power spectrum, namely the mean and variance. It is also known as the pulse-pair algorithm in radar theory. The algorithm is both computationally faster and significantly more accurate compared to the Fourier transform, since the resolution is not limited by the number of samples used. Derivation The autocorrelation of lag 1 can be expressed using the inverse Fourier transform of the power spectrum : If we model the power spectrum as a single frequency , this becomes: where it is apparent that the phase of equals the signal frequency. Implementation The mean frequency is calculated based on the autocorrelation with lag one, evaluated over a signal consisting of N samples: The spectral variance is calculated as follows: Applications Estimation of blood velocity and turbulence in color flow imaging used in medical ultrasonography. Estimation of target velocity in pulse-doppler radar External links A covariance approach to spectral moment estimation, Miller et al., IEEE Transactions on Information Theory. Doppler Radar Meteorological Observations Doppler Radar Theory. Autocorrelation technique described on p.2-11 Real-Time Two-Dimensional Blood Flow Imaging Using an Autocorrelation Technique, by Chihiro Kasai, Koroku Namekawa, Akira Koyano, and Ryozo Omoto, IEEE Transactions on Sonics and Ultrasonics, Vol. SU-32, No.3, May 1985. Radar theory Signal processing Autocorrelation" https://en.wikipedia.org/wiki/Gyrator%E2%80%93capacitor%20model,"The gyrator–capacitor model - sometimes also the capacitor-permeance model - is a lumped-element model for magnetic circuits, that can be used in place of the more common resistance–reluctance model. The model makes permeance elements analogous to electrical capacitance (see magnetic capacitance section) rather than electrical resistance (see magnetic reluctance). Windings are represented as gyrators, interfacing between the electrical circuit and the magnetic model. The primary advantage of the gyrator–capacitor model compared to the magnetic reluctance model is that the model preserves the correct values of energy flow, storage and dissipation. The gyrator–capacitor model is an example of a group of analogies that preserve energy flow across energy domains by making power conjugate pairs of variables in the various domains analogous. It fills the same role as the impedance analogy for the mechanical domain. Nomenclature Magnetic circuit may refer to either the physical magnetic circuit or the model magnetic circuit. Elements and dynamical variables that are part of the model magnetic circuit have names that start with the adjective magnetic, although this convention is not strictly followed. Elements or dynamical variables in the model magnetic circuit may not have a one to one correspondence with components in the physical magnetic circuit. Symbols for elements and variables that are part of the model magnetic circuit may be written with a subscript of M. For example, would be a magnetic capacitor in the model circuit. 
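To illustrate the pulse-pair idea described above, where the phase of the lag-one autocorrelation gives the dominant frequency, here is a small self-contained sketch; the complex test tone and sampling rate are invented for the example and are not taken from the text.

import cmath
import math

def pulse_pair_frequency(samples, sample_rate):
    # Mean-frequency estimate from the phase of the lag-1 autocorrelation R(1).
    r1 = sum(later * earlier.conjugate() for earlier, later in zip(samples[:-1], samples[1:]))
    return cmath.phase(r1) * sample_rate / (2.0 * math.pi)

# Synthetic complex tone at 55 Hz sampled at 1 kHz.
fs, f0, n = 1000.0, 55.0, 256
signal = [cmath.exp(2j * math.pi * f0 * k / fs) for k in range(n)]
print('estimated frequency:', round(pulse_pair_frequency(signal, fs), 2), 'Hz')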
Electrical elements in an associated electrical circuit may be brought into the magnetic model for ease of analysis. Model elements in the magnetic circuit that represent electrical elements are typically the electrical dual of the electrical elements. This is because transducers between the electrical and magnetic domains in this model are usually represented by gyrators. A gyrator will transform an element into its dual. For example, a magn" https://en.wikipedia.org/wiki/Outline%20of%20discrete%20mathematics,"Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying ""smoothly"", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics, therefore, excludes topics in ""continuous mathematics"" such as calculus and analysis. Included below are many of the standard terms used routinely in university-level courses and in research papers. This is not, however, intended as a complete list of mathematical terms; just a selection of typical terms of art that may be encountered. Subjects in discrete mathematics Logic – a study of reasoning Modal Logic: A type of logic for the study of necessity and probability Set theory – a study of collections of elements Number theory – study of integers and integer-valued functions Combinatorics – a study of Counting Finite mathematics – a course title Graph theory – a study of graphs Digital geometry and digital topology Algorithmics – a study of methods of calculation Information theory – a mathematical representation of the conditions and parameters affecting the transmission and processing of information Computability and complexity theories – deal with theoretical and practical limitations of algorithms Elementary probability theory and Markov chains Linear algebra – a study of related linear equations Functions – an expression, rule, or law that defines a relationship between one variable (the independent variable) and another variable (the dependent variable) Partially ordered set – Probability – concerns with numerical descriptions of the chances of occurrence of an event Proofs – Relation – a collection of ordered pairs containing one object from each set Discrete mathematical disciplines For further reading in discrete mathematics, beyond a basic level, see thes" https://en.wikipedia.org/wiki/Synchronous%20virtual%20pipe,"When realizing pipeline forwarding a predefined schedule for forwarding a pre-allocated amount of bytes during one or more time frames along a path of subsequent switches establishes a synchronous virtual pipe (SVP). The SVP capacity is determined by the total number of bits allocated in every time cycle for the SVP. For example, for a 10 ms time cycle, if 20,000 bits are allocated during each of 2 time frames, the SVP capacity is 4 Mbit/s. Pipeline forwarding guarantees that reserved traffic, i.e., traveling on an SVP, experiences: bounded end-to-end delay, delay jitter lower than two TFs, and no congestion and resulting losses. Two implementations of the pipeline forwarding were proposed: time-driven switching (TDS) and time-driven priority (TDP) and can be used to create pipeline forwarding parallel network in the future Internet." 
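A quick check of the capacity arithmetic in the passage above (20,000 bits in each of 2 time frames per 10 ms time cycle), written as a tiny helper that only reuses the figures already quoted in the text:

def svp_capacity_bps(bits_per_time_frame, time_frames_per_cycle, cycle_seconds):
    # Capacity of a synchronous virtual pipe: bits reserved per cycle / cycle length.
    return bits_per_time_frame * time_frames_per_cycle / cycle_seconds

capacity = svp_capacity_bps(20000, 2, 0.010)
print(capacity / 1e6, 'Mbit/s')  # 4.0 Mbit/s, matching the example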
https://en.wikipedia.org/wiki/List%20of%20chaotic%20maps,"In mathematics, a chaotic map is a map (namely, an evolution function) that exhibits some sort of chaotic behavior. Maps may be parameterized by a discrete-time or a continuous-time parameter. Discrete maps usually take the form of iterated functions. Chaotic maps often occur in the study of dynamical systems. Chaotic maps often generate fractals. Although a fractal may be constructed by an iterative procedure, some fractals are studied in and of themselves, as sets rather than in terms of the map that generates them. This is often because there are several different iterative procedures to generate the same fractal. List of chaotic maps List of fractals Cantor set de Rham curve Gravity set, or Mitchell-Green gravity set Julia set - derived from complex quadratic map Koch snowflake - special case of de Rham curve Lyapunov fractal Mandelbrot set - derived from complex quadratic map Menger sponge Newton fractal Nova fractal - derived from Newton fractal Quaternionic fractal - three dimensional complex quadratic map Sierpinski carpet Sierpinski triangle" https://en.wikipedia.org/wiki/Von%20Neumann%20architecture,"The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by John von Neumann, and by others, in the First Draft of a Report on the EDVAC. The document describes a design architecture for an electronic digital computer with these components: A processing unit with both an arithmetic logic unit and processor registers A control unit that includes an instruction register and a program counter Memory that stores data and instructions External mass storage Input and output mechanisms The term ""von Neumann architecture"" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system. The design of a von Neumann architecture machine is simpler than in a Harvard architecture machine—which is also a stored-program system, yet has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions. A stored-program computer uses the same underlying mechanism to encode both program instructions and data as opposed to designs which use a mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units. The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instru" https://en.wikipedia.org/wiki/Chirp,"A chirp is a signal in which the frequency increases (up-chirp) or decreases (down-chirp) with time. In some sources, the term chirp is used interchangeably with sweep signal. It is commonly applied to sonar, radar, and laser systems, and to other applications, such as in spread-spectrum communications (see chirp spread spectrum). 
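A brief numerical sketch of a linear up-chirp, assuming the usual quadratic-phase form s(t) = sin(2*pi*(f0*t + k*t^2/2)); the start frequency, end frequency, duration, and sampling rate below are invented example parameters, not values from the text.

import math

def linear_chirp(duration, sample_rate, f_start, f_end):
    # Unit-amplitude samples of a chirp whose instantaneous frequency sweeps
    # linearly from f_start to f_end over the given duration.
    k = (f_end - f_start) / duration  # sweep rate in Hz per second
    samples = []
    for i in range(int(duration * sample_rate)):
        t = i / sample_rate
        samples.append(math.sin(2.0 * math.pi * (f_start * t + 0.5 * k * t * t)))
    return samples

# Illustrative up-chirp: 100 Hz to 400 Hz over 0.5 s, sampled at 8 kHz.
s = linear_chirp(duration=0.5, sample_rate=8000, f_start=100.0, f_end=400.0)
print(len(s), 'samples; first few:', [round(v, 3) for v in s[:4]])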
This signal type is biologically inspired and occurs as a phenomenon due to dispersion (a non-linear dependence between frequency and the propagation speed of the wave components). It is usually compensated for by using a matched filter, which can be part of the propagation channel. Depending on the specific performance measure, however, there are better techniques both for radar and communication. Since it was used in radar and space, it has been adopted also for communication standards. For automotive radar applications, it is usually called linear frequency modulated waveform (LFMW). In spread-spectrum usage, surface acoustic wave (SAW) devices are often used to generate and demodulate the chirped signals. In optics, ultrashort laser pulses also exhibit chirp, which, in optical transmission systems, interacts with the dispersion properties of the materials, increasing or decreasing total pulse dispersion as the signal propagates. The name is a reference to the chirping sound made by birds; see bird vocalization. Definitions The basic definitions here translate as the common physics quantities location (phase), speed (angular velocity), acceleration (chirpyness). If a waveform is defined as: then the instantaneous angular frequency, ω, is defined as the phase rate as given by the first derivative of phase, with the instantaneous ordinary frequency, f, being its normalized version: Finally, the instantaneous angular chirpyness (symbol γ) is defined to be the second derivative of instantaneous phase or the first derivative of instantaneous angular frequency, Angular chirpyness has units of radians per square second (rad/s2); thus, i" https://en.wikipedia.org/wiki/Process,"A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic. Things called a process include: Business and management Business process, activities that produce a specific service or product for customers Business process modeling, activity of representing processes of an enterprise in order to deliver improvements Manufacturing process management, a collection of technologies and methods used to define how products are to be manufactured. 
Process architecture, structural design of processes, applies to fields such as computers, business processes, logistics, project management Process area, related processes within an area which together satisfies an important goal for improvements within that area Process costing, a cost allocation procedure of managerial accounting Process management (project management), a systematic series of activities directed towards planning, monitoring the performance and causing an end result in engineering activities, business process, manufacturing processes or project management Process-based management, is a management approach that views a business as a collection of processes Law Due process, the concept that governments must respect the rule of law Legal process, the proceedings and records of a legal case Service of process, the procedure of giving official notice of a legal proceeding Science and technology The general concept of the scientific process, see scientific method Process theory, the scientific study of processes Industrial processes, consists of the purposeful sequencing of tasks that combine resources to produce a desired output Biology and psychology Process (anatomy), a projection or outgrowth of tissue from a larger body Biological process, a process of a living organism Cognitive process, such as attention, memory, language use, reasoning, and problem solving Mental process, a function or processes of the mind Neuronal process, also neurite" https://en.wikipedia.org/wiki/Folk%20biology,"Folk biology (or folkbiology) is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into obvious species-like groups. The relationship between a folk taxonomy and a scientific classification can assist in understanding how evolutionary theory deals with the apparent constancy of ""common species"" and the organic processes centering on them. From the vantage of evolutionary psychology, such natural systems are arguably routine ""habits of mind"", a sort of heuristic used to make sense of the natural world." https://en.wikipedia.org/wiki/Vincent%27s%20theorem,"In mathematics, Vincent's theorem—named after Alexandre Joseph Hidulphe Vincent—is a theorem that isolates the real roots of polynomials with rational coefficients. Even though Vincent's theorem is the basis of the fastest method for the isolation of the real roots of polynomials, it was almost totally forgotten, having been overshadowed by Sturm's theorem; consequently, it does not appear in any of the classical books on the theory of equations (of the 20th century), except for Uspensky's book. Two variants of this theorem are presented, along with several (continued fractions and bisection) real root isolation methods derived from them. Sign variation Let c0, c1, c2, ... be a finite or infinite sequence of real numbers. Suppose l < r and the following conditions hold: If r = l+1 the numbers cl and cr have opposite signs. If r ≥ l+2 the numbers cl+1, ..., cr−1 are all zero and the numbers cl and cr have opposite signs. This is called a sign variation or sign change between the numbers cl and cr. When dealing with the polynomial p(x) in one variable, one defines the number of sign variations of p(x) as the number of sign variations in the sequence of its coefficients. Two versions of this theorem are presented: the continued fractions version due to Vincent, and the bisection version due to Alesina and Galuzzi. 
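The sign-variation count defined just above translates directly into code: walk the coefficient sequence, skip the zeros, and count adjacent pairs with opposite signs. The example polynomial below is made up for illustration.

def sign_variations(coefficients):
    # Number of sign changes in a coefficient sequence, ignoring zero entries.
    signs = [1 if c > 0 else -1 for c in coefficients if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Example: x^3 - 3x + 1 has coefficients (1, 0, -3, 1) and two sign variations.
print(sign_variations([1, 0, -3, 1]))  # -> 2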
Vincent's theorem: Continued fractions version (1834 and 1836) If in a polynomial equation with rational coefficients and without multiple roots, one makes successive transformations of the form where are any positive numbers greater than or equal to one, then after a number of such transformations, the resulting transformed equation either has zero sign variations or it has a single sign variation. In the first case there is no root, whereas in the second case there is a single positive real root. Furthermore, the corresponding root of the proposed equation is approximated by the finite continued fraction: Moreover, if infinitely many numb" https://en.wikipedia.org/wiki/Heterogeneous%20network,"In computer networking, a heterogeneous network is a network connecting computers and other devices where the operating systems and protocols have significant differences. For example, local area networks (LANs) that connect Microsoft Windows and Linux based personal computers with Apple Macintosh computers are heterogeneous. Heterogeneous network also describes wireless networks using different access technologies. For example, a wireless network that provides a service through a wireless LAN and is able to maintain the service when switching to a cellular network is called a wireless heterogeneous network. HetNet Reference to a HetNet often indicates the use of multiple types of access nodes in a wireless network. A Wide Area Network can use some combination of macrocells, picocells, and femtocells in order to offer wireless coverage in an environment with a wide variety of wireless coverage zones, ranging from an open outdoor environment to office buildings, homes, and underground areas. Mobile experts define a HetNet as a network with complex interoperation between macrocell, small cell, and in some cases WiFi network elements used together to provide a mosaic of coverage, with handoff capability between network elements. A study from ARCchart estimates that HetNets will help drive the mobile infrastructure market to account for nearly US$57 billion in spending globally by 2017. Small Cell Forum defines the HetNet as ‘multi-x environment – multi-technology, multi-domain, multi-spectrum, multi-operator and multi-vendor. It must be able to automate the reconfiguration of its operation to deliver assured service quality across the entire network, and flexible enough to accommodate changing user needs, business goals and subscriber behaviours.’ HetNet architecture From an architectural perspective, the HetNet can be viewed as encompassing conventional macro radio access network (RAN) functions, RAN transport capability, small cells, and Wi-Fi functionality, " https://en.wikipedia.org/wiki/Hohlraum,"In radiation thermodynamics, a hohlraum (a non-specific German word for a ""hollow space"" or ""cavity"") is a cavity whose walls are in radiative equilibrium with the radiant energy within the cavity. This idealized cavity can be approximated in practice by making a small perforation in the wall of a hollow container of any opaque material. The radiation escaping through such a perforation will be a good approximation to black-body radiation at the temperature of the interior of the container. Inertial confinement fusion The indirect drive approach to inertial confinement fusion is as follows: the fusion fuel capsule is held inside a cylindrical hohlraum. The hohlraum body is manufactured using a high-Z (high atomic number) element, usually gold or uranium. 
Inside the hohlraum is a fuel capsule containing deuterium and tritium (D-T) fuel. A frozen layer of D-T ice adheres inside the fuel capsule. The fuel capsule wall is synthesized using light elements such as plastic, beryllium, or high density carbon, i.e. diamond. The outer portion of the fuel capsule explodes outward when ablated by the x-rays produced by the hohlraum wall upon irradiation by lasers. Due to Newton's third law, the inner portion of the fuel capsule implodes, causing the D-T fuel to be supercompressed, activating a fusion reaction. The radiation source (e.g., laser) is pointed at the interior of the hohlraum rather than at the fuel capsule itself. The hohlraum absorbs and re-radiates the energy as X-rays, a process known as indirect drive. The advantage to this approach, compared to direct drive, is that high mode structures from the laser spot are smoothed out when the energy is re-radiated from the hohlraum walls. The disadvantage to this approach is that low mode asymmetries are harder to control. It is important to be able to control both high mode and low mode asymmetries to achieve a uniform implosion. The hohlraum walls must have surface roughness less than 1 micron, and hence accurate" https://en.wikipedia.org/wiki/MRB%20constant,"The MRB constant is a mathematical constant, with decimal expansion . The constant is named after its discoverer, Marvin Ray Burns, who published his discovery of the constant in 1999. Burns had initially called the constant ""rc"" for root constant but, at Simon Plouffe's suggestion, the constant was renamed the 'Marvin Ray Burns's Constant', or ""MRB constant"". The MRB constant is defined as the upper limit of the partial sums As grows to infinity, the sums have upper and lower limit points of −0.812140… and 0.187859…, separated by an interval of length 1. The constant can also be explicitly defined by the following infinite sums: The constant relates to the divergent series: There is no known closed-form expression of the MRB constant, nor is it known whether the MRB constant is algebraic, transcendental or even irrational." https://en.wikipedia.org/wiki/Square%20root%20of%205,"The square root of 5 is the positive real number that, when multiplied by itself, gives the prime number 5. It is more precisely called the principal square root of 5, to distinguish it from the negative number with the same property. This number appears in the fractional expression for the golden ratio. It can be denoted in surd form as: It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: . which can be rounded down to 2.236 to within 99.99% accuracy. The approximation (≈ 2.23611) for the square root of five can be used. Despite having a denominator of only 72, it differs from the correct value by less than (approx. ). As of January 2022, its numerical value in decimal has been computed to at least 2,250,000,000,000 digits. Rational approximations The square root of 5 can be expressed as the continued fraction The successive partial evaluations of the continued fraction, which are called its convergents, approach : Their numerators are 2, 9, 38, 161, … , and their denominators are 1, 4, 17, 72, … . Each of these is a best rational approximation of ; in other words, it is closer to than any rational number with a smaller denominator. 
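The convergents listed above (2/1, 9/4, 38/17, 161/72, ...) follow from the periodic continued fraction [2; 4, 4, 4, ...] via the standard convergent recurrence; the short sketch below reproduces them with exact fractions.

from fractions import Fraction

def sqrt5_convergents(count):
    # Convergents of [2; 4, 4, 4, ...]: h_n = 4*h_(n-1) + h_(n-2), same for k_n.
    h_prev, h = 1, 2   # numerators
    k_prev, k = 0, 1   # denominators
    results = [Fraction(h, k)]
    for _ in range(count - 1):
        h_prev, h = h, 4 * h + h_prev
        k_prev, k = k, 4 * k + k_prev
        results.append(Fraction(h, k))
    return results

for convergent in sqrt5_convergents(5):
    print(convergent, '=', float(convergent))  # 2, 9/4, 38/17, 161/72, 682/305, approaching 2.2360679...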
The convergents, expressed as , satisfy alternately the Pell's equations When is approximated with the Babylonian method, starting with and using , the th approximant is equal to the th convergent of the continued fraction: The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial . The Newton's method update, , is equal to when . The method therefore converges quadratically. Relation to the golden ratio and Fibonacci numbers The golden ratio is the arithmetic mean of 1 and . The algebraic relationship between , the golden ratio and the conjugate of the golden ratio () is expressed in the following formulae: (See the section below for their geometrical interpretation as decompositions of a rectangle.) then naturall" https://en.wikipedia.org/wiki/Orthogonal%20signal%20correction,Orthogonal Signal Correction (OSC) is a spectral preprocessing technique that removes variation from a data matrix X that is orthogonal to the response matrix Y. OSC was introduced by researchers at the University of Umea in 1998 and has since found applications in domains including metabolomics. https://en.wikipedia.org/wiki/Transdifferentiation,"Transdifferentiation, also known as lineage reprogramming, is the process in which one mature somatic cell is transformed into another mature somatic cell without undergoing an intermediate pluripotent state or progenitor cell type. It is a type of metaplasia, which includes all cell fate switches, including the interconversion of stem cells. Current uses of transdifferentiation include disease modeling and drug discovery and in the future may include gene therapy and regenerative medicine. The term 'transdifferentiation' was originally coined by Selman and Kafatos in 1974 to describe a change in cell properties as cuticle producing cells became salt-secreting cells in silk moths undergoing metamorphosis. Discovery Davis et al. 1987 reported the first instance (sight) of transdifferentiation where a cell changed from one adult cell type to another. Forcing mouse embryonic fibroblasts to express MyoD was found to be sufficient to turn those cells into myoblasts. Natural examples The only known instances where adult cells change directly from one lineage to another occurs in the species Turritopsis dohrnii (also known as the immortal jellyfish) and Turritopsis nutricula. In newts, when the eye lens is removed, pigmented epithelial cells de-differentiate and then redifferentiate into the lens cells. Vincenzo Colucci described this phenomenon in 1891 and Gustav Wolff described the same thing in 1894; the priority issue is examined in Holland (2021). In humans and mice, it has been demonstrated that alpha cells in the pancreas can spontaneously switch fate and transdifferentiate into beta cells. This has been demonstrated for both healthy and diabetic human and mouse pancreatic islets. While it was previously believed that oesophageal cells were developed from the transdifferentiation of smooth muscle cells, that has been shown to be false. Induced and therapeutic examples The first example of functional transdifferentiation has been provided by Ferber et al. by i" https://en.wikipedia.org/wiki/Fluctuation%20loss,"Fluctuation loss is an effect seen in radar systems as the target object moves or changes its orientation relative to the radar system. It was extensively studied during the 1950s by Peter Swerling, who introduced the Swerling models to allow the effect to be simulated. 
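The √5 excerpt above has its fractions and indices stripped. Using the convergents it lists (2, 9/4, 38/17, 161/72, …), the sketch below (values recomputed here, not copied from the article) runs the Babylonian iteration x ← (x + 5/x)/2 from x₀ = 2 and shows that each iterate lands exactly on a convergent, with the convergent index doubling each step, which is the quadratic-convergence behaviour the text alludes to.

```python
from fractions import Fraction

def convergents(n):
    """First n convergents of sqrt(5) = [2; 4, 4, 4, ...]."""
    h_prev, h = 1, 2          # numerators
    k_prev, k = 0, 1          # denominators
    out = [Fraction(h, k)]
    for _ in range(n - 1):
        h_prev, h = h, 4 * h + h_prev
        k_prev, k = k, 4 * k + k_prev
        out.append(Fraction(h, k))
    return out

def babylonian(steps, x0=Fraction(2)):
    x = x0
    out = [x]
    for _ in range(steps):
        x = (x + 5 / x) / 2
        out.append(x)
    return out

cs = convergents(8)
bs = babylonian(3)
print(cs[:4])          # 2, 9/4, 38/17, 161/72
print(bs)              # 2, 9/4, 161/72, 51841/23184
print(bs[3] == cs[7])  # True: the 3rd iterate is the 8th convergent (index 7 here),
                       # i.e. iterate k matches convergent number 2**k
```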
For this reason, it is sometimes known as Swerling loss or similar names. The effect occurs when the target's physical size is within a key range of values relative to the wavelength of the radar signal. As the signal reflects off various parts of the target, they may interfere as they return to the radar receiver. At any single distance from the station, this will cause the signal to be amplified or diminished compared to the baseline signal one calculates from the radar equation. As the target moves, these patterns change. This causes the signal to fluctuate in strength and may cause it to disappear entirely at certain times. The effect can be reduced or eliminated by operating on more than one frequency or using modulation techniques like pulse compression that change the frequency over the period of a pulse. In these cases, it is unlikely that the pattern of reflections from the target causes the same destructive interference at two different frequencies. Swerling modeled these effects in a famous 1954 paper introduced while working at RAND Corporation. Swerling's models considered the contribution of multiple small reflectors, or many small reflectors and a single large one. This offered the ability to model real-world objects like aircraft to understand the expected fluctuation loss effects. Fluctuation loss For basic considerations of the strength of a signal returned by a given target, the radar equation models the target as a single point in space with a given radar cross-section (RCS). The RCS is difficult to estimate except for the most basic cases, like a perpendicular surface or a sphere. Before the introduction of detailed computer modeling, the RCS for real-world objects was gener" https://en.wikipedia.org/wiki/Quantum%20state%20space,"In physics, a quantum state space is an abstract space in which different ""positions"" represent, not literal locations, but rather quantum states of some physical system. It is the quantum analog of the phase space of classical mechanics. Relative to Hilbert space In quantum mechanics a state space is a complex Hilbert space in which each unit vector represents a different state that could come out of a measurement. The number of dimensions in this Hilbert space depends on the system we choose to describe. Any state vectors in this space can be written as a linear combination of unit vectors. Having an nonzero component along multiple dimensions is called a superposition. In the formalism of quantum mechanics these state vectors are often written using Dirac's compact bra–ket notation. Examples The spin (physics) state of a silver atom in the Stern-Gerlach experiment can be represented in a two state space. The spin can be aligned with a measuring apparatus (arbitrarily called 'up') or oppositely ('down'). In Dirac's notation these two states can be written as . The space of a two spin system has four states, . The spin state is a discrete degree of freedom; quantum state spaces can have continuous degrees of freedom. For example, a particle in one space dimension has one degree of freedom ranging from to . In Dirac notation, the states in this space might be written as or . Relative to 3D space Even in the early days of quantum mechanics, the state space (or configurations as they were called at first) was understood to be essential for understanding simple QM problems. 
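The fluctuation-loss excerpt above attributes the effect to many small reflectors interfering as the target's aspect changes. A rough Monte-Carlo sketch (my own illustration, not an implementation of Swerling's paper) sums random-phase phasors from N point scatterers and shows that the echo power fluctuates around its mean with roughly an exponential distribution, the behaviour underlying the Swerling I/II cases, so deep fades are common.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scatterers, n_looks = 50, 100_000

# Each "look" at the target gives every scatterer an effectively random phase
# (the aspect has changed by more than a wavelength across the target).
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_looks, n_scatterers))
echo = np.exp(1j * phases).sum(axis=1)      # coherent sum of unit reflectors
power = np.abs(echo) ** 2

print("mean power:", power.mean())           # ~ n_scatterers
print("fraction of looks below half the mean:",
      np.mean(power < 0.5 * power.mean()))   # ~ 1 - exp(-0.5) ≈ 0.39
```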
In 1929, Nevill Mott showed that ""tendency to picture the wave as existing in ordinary three dimensional space, whereas we are really dealing with wave functions in multispace"" makes analysis of simple interaction problems more difficult. Mott analyzes -particle emission in a cloud chamber. The emission process is isotropic, a spherical wave in QM, but the tracks observed are linear. " https://en.wikipedia.org/wiki/MAX232,"The MAX232 is an integrated circuit by Maxim Integrated Products, now a subsidiary of Analog Devices, that converts signals from a TIA-232 (RS-232) serial port to signals suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual transmitter / dual receiver that typically is used to convert the RX, TX, CTS, RTS signals. The drivers provide TIA-232 voltage level outputs (about ±7.5 volts) from a single 5-volt supply by on-chip charge pumps and external capacitors. This makes it useful for implementing TIA-232 in devices that otherwise do not need any other voltages. The receivers translates the TIA-232 input voltages (up to ±25 volts, though MAX232 supports up to ±30 volts) down to standard 5 volt TTL levels. These receivers have a typical threshold of 1.3 volts and a typical hysteresis of 0.5 volts. The MAX232 replaced an older pair of chips MC1488 and MC1489 that performed similar RS-232 translation. The MC1488 quad transmitter chip required 12 volt and −12 volt power, and MC1489 quad receiver chip required 5 volt power. The main disadvantages of this older solution was the ±12 volt power requirement, only supported 5 volt digital logic, and two chips instead of one. History The MAX232 was proposed by Charlie Allen and designed by Dave Bingham. Maxim Integrated Products announced the MAX232 no later than 1986. Versions The later MAX232A is backward compatible with the original MAX232 but may operate at higher baud rates and can use smaller external capacitors 0.1 μF in place of the 1.0 μF capacitors used with the original device. The newer MAX3232 and MAX3232E are also backwards compatible, but operates at a broader voltage range, from 3 to 5.5 V. Pin-to-pin compatible versions from other manufacturers are ICL232, SP232, ST232, ADM232 and HIN232. Texas Instruments makes compatible chips, using MAX232 as the part number. Voltage levels The MAX232 translates a TTL logic 0 input to between +3 and +15 V, and changes TTL logic 1 input to bet" https://en.wikipedia.org/wiki/Spontaneous%20absolute%20asymmetric%20synthesis,"Spontaneous absolute asymmetric synthesis is a chemical phenomenon that stochastically generates chirality based on autocatalysis and small fluctuations in the ratio of enantiomers present in a racemic mixture. In certain reactions which initially do not contain chiral information, stochastically distributed enantiomeric excess can be observed. The phenomenon is different from chiral amplification, where enantiomeric excess is present from the beginning and not stochastically distributed. Hence, when the experiment is repeated many times, the average enantiomeric excess approaches 0%. The phenomenon has important implications concerning the origin of homochirality in nature." https://en.wikipedia.org/wiki/Microscopic%20scale,"The microscopic scale () is the scale of objects and events smaller than those that can easily be seen by the naked eye, requiring a lens or microscope to see them clearly. In physics, the microscopic scale is sometimes regarded as the scale between the macroscopic scale and the quantum scale. 
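As a small illustration of the finite-dimensional state spaces described in the quantum-state-space excerpt above (my own sketch; the basis labels are the conventional |up⟩ and |down⟩), the following builds the two-dimensional spin space, a normalized superposition, and the four-dimensional two-spin space via the tensor (Kronecker) product.

```python
import numpy as np

up   = np.array([1.0, 0.0])            # |up>
down = np.array([0.0, 1.0])            # |down>

# A normalized superposition a|up> + b|down>
a, b = 1.0, 1.0j
psi = a * up + b * down
psi = psi / np.linalg.norm(psi)
print(np.vdot(psi, psi).real)          # 1.0: a unit vector in the state space

# Two-spin system: the state space is the tensor product, dimension 2 x 2 = 4
up_up   = np.kron(up, up)
up_down = np.kron(up, down)
print(up_up, up_down)                  # two of the four basis vectors
```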
Microscopic units and measurements are used to classify and describe very small objects. One common microscopic length scale unit is the micrometre (also called a micron) (symbol: μm), which is one millionth of a metre. History Whilst compound microscopes were first developed in the 1590s, the significance of the microscopic scale was only truly established in the 1600s when Marcello Malphigi and Antonie van Leeuwenhoek microscopically observed frog lungs and microorganisms. As microbiology was established, the significance of making scientific observations at a microscopic level increased. Published in 1665, Robert Hooke’s book Micrographia details his microscopic observations including fossils insects, sponges, and plants, which was possible through his development of the compound microscope. During his studies of cork, he discovered plant cells and coined the term ‘cell’. Prior to the use of the micro- prefix, other terms were originally incorporated into the International metric system in 1795, such as centi- which represented a factor of 10^-2, and milli-, which represented a factor of 10^-3. Over time the importance of measurements made at the microscopic scale grew, and an instrument named the Millionometre was developed by watch-making company owner Antoine LeCoultre in 1844. This instrument had the ability to precisely measure objects to the nearest micrometre. The British Association for the Advancement of Science committee incorporated the micro- prefix into the newly established CGS system in 1873. The micro- prefix was finally added to the official SI system in 1960, acknowledging measurements that were made at an even smaller level, denoting a factor of 10" https://en.wikipedia.org/wiki/Electrophoretic%20color%20marker,"An electrophoretic color marker is a chemical used to monitor the progress of agarose gel electrophoresis and polyacrylamide gel electrophoresis (PAGE) since DNA, RNA, and most proteins are colourless. The color markers are made up of a mixture of dyes that migrate through the gel matrix alongside the sample of interest. They are typically designed to have different mobilities from the sample components and to generate colored bands that can be used to assess the migration and separation of sample components. Color markers are often used as molecular weight standards, loading dyes, tracking dyes, or staining solutions. Molecular weight ladders are used to estimate the size of DNA and protein fragments by comparing their migration distance to that of the colored bands. DNA and protein standards are available commercially in a wide range of sizes, and are often provided with pre-stained or color-coded bands for easy identification. Loading dyes are usually added to the sample buffer before loading the sample onto the gel, and they migrate through the gel along with the sample to help track its progress during electrophoresis. Tracking dyes are added to the electrophoresis buffer rather to provide a visual marker of the buffer front. Staining solutions are applied after electrophoresis to visualize the sample bands, and are available in a range of colors. Different types of electrophoretic color markers are available commercially, with varying numbers and types of dyes or pigments used in the mixture. Some markers generate a series of colored bands with known mobilities, while others produce a single band of a specific color that can be used as a reference point. They are widely used in research, clinical diagnostics, and forensic science. 
Progress markers Loading buffers often contain anionic dyes that are visible under the visible light spectrum, and are added to the gel before the nucleic acid. Tracking dyes should not be reactive so as not to alter the sample, " https://en.wikipedia.org/wiki/Free%20particle,"In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a ""field-free"" space. In quantum mechanics, it means the particle is in a region of uniform potential, usually set to zero in the region of interest since the potential can be arbitrarily set to zero at any point in space. Classical free particle The classical free particle is characterized by a fixed velocity v. The momentum is given by and the kinetic energy (equal to total energy) by where m is the mass of the particle and v is the vector velocity of the particle. Quantum free particle Mathematical description A free particle with mass in non-relativistic quantum mechanics is described by the free Schrödinger equation: where ψ is the wavefunction of the particle at position r and time t. The solution for a particle with momentum p or wave vector k, at angular frequency ω or energy E, is given by a complex plane wave: with amplitude A and has two different rules according to its mass: if the particle has mass : (or equivalent ). if the particle is a massless particle: . The eigenvalue spectrum is infinitely degenerate since for each eigenvalue E>0, there corresponds an infinite number of eigenfunctions corresponding to different directions of . The De Broglie relations: , apply. Since the potential energy is (stated to be) zero, the total energy E is equal to the kinetic energy, which has the same form as in classical physics: As for all quantum particles free or bound, the Heisenberg uncertainty principles apply. It is clear that since the plane wave has definite momentum (definite energy), the probability of finding the particle's location is uniform and negligible all over the space. In other words, the wave function is not normalizable in a Euclidean space, these stationary states can not correspond to physical realiz" https://en.wikipedia.org/wiki/Babel%20function,"The Babel function (also known as cumulative coherence) measures the maximum total coherence between a fixed atom and a collection of other atoms in a dictionary. The Babel function was conceived of in the context of signals for which there exists a sparse representation consisting of atoms or columns of a redundant dictionary matrix, A. Definition and formulation The Babel function of a dictionary with normalized columns is a real-valued function that is defined as where are the columns (atoms) of the dictionary . Special case When p=1, the babel function is the mutual coherence. Practical Applications Li and Lin have used the Babel function to aid in creating effective dictionaries for Machine Learning applications." https://en.wikipedia.org/wiki/List%20of%20mathematical%20logic%20topics,"This is a list of mathematical logic topics. For traditional syllogistic logic, see the list of topics in logic. See also the list of computability and complexity topics for more theory of algorithms. 
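The free-particle excerpt above refers to the free Schrödinger equation, its plane-wave solutions, and the de Broglie relations, but the displayed formulas did not survive extraction. The standard expressions (reconstructed from textbook convention, not copied from the article) are:

```latex
% Free Schrodinger equation and its plane-wave solution
i\hbar\,\partial_t \psi(\mathbf{r},t) = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi(\mathbf{r},t),
\qquad
\psi(\mathbf{r},t) = A\,e^{\,i(\mathbf{k}\cdot\mathbf{r}-\omega t)} .

% Dispersion: massive vs. massless particle
E = \hbar\omega = \frac{\hbar^{2}k^{2}}{2m} = \frac{p^{2}}{2m}
\quad\text{(massive)},
\qquad
\omega = c\,k \quad\text{(massless)} .

% de Broglie relations
\mathbf{p} = \hbar\mathbf{k}, \qquad E = \hbar\omega .
```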
Working foundations Peano axioms Giuseppe Peano Mathematical induction Structural induction Recursive definition Naive set theory Element (mathematics) Ur-element Singleton (mathematics) Simple theorems in the algebra of sets Algebra of sets Power set Empty set Non-empty set Empty function Universe (mathematics) Axiomatization Axiomatic system Axiom schema Axiomatic method Formal system Mathematical proof Direct proof Reductio ad absurdum Proof by exhaustion Constructive proof Nonconstructive proof Tautology Consistency proof Arithmetization of analysis Foundations of mathematics Formal language Principia Mathematica Hilbert's program Impredicative Definable real number Algebraic logic Boolean algebra (logic) Dialectica space categorical logic Model theory Finite model theory Descriptive complexity theory Model checking Trakhtenbrot's theorem Computable model theory Tarski's exponential function problem Undecidable problem Institutional model theory Institution (computer science) Non-standard analysis Non-standard calculus Hyperinteger Hyperreal number Transfer principle Overspill Elementary Calculus: An Infinitesimal Approach Criticism of non-standard analysis Standard part function Set theory Forcing (mathematics) Boolean-valued model Kripke semantics General frame Predicate logic First-order logic Infinitary logic Many-sorted logic Higher-order logic Lindström quantifier Second-order logic Soundness theorem Gödel's completeness theorem Original proof of Gödel's completeness theorem Compactness theorem Löwenheim–Skolem theorem Skolem's paradox Gödel's incompleteness theorems Structure (mathematical logic) Interpretation (logic) Substructure (mathematics) Elementary substructure Skolem hull Non-standard model Atomic model (mathematical logic) Prime model Saturate" https://en.wikipedia.org/wiki/Pointwise,"In mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value of some function An important class of pointwise concepts are the pointwise operations, that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise. Pointwise operations Formal definition A binary operation on a set can be lifted pointwise to an operation on the set of all functions from to as follows: Given two functions and , define the function by Commonly, o and O are denoted by the same symbol. A similar definition is used for unary operations o, and for operations of other arity. Examples where . See also pointwise product, and scalar. An example of an operation on functions which is not pointwise is convolution. Properties Pointwise operations inherit such properties as associativity, commutativity and distributivity from corresponding operations on the codomain. If is some algebraic structure, the set of all functions to the carrier set of can be turned into an algebraic structure of the same type in an analogous way. Componentwise operations Componentwise operations are usually defined on vectors, where vectors are elements of the set for some natural number and some field . If we denote the -th component of any vector as , then componentwise addition is . Componentwise operations can be defined on matrices. Matrix addition, where is a componentwise operation while matrix multiplication is not. A tuple can be regarded as a function, and a vector is a tuple. 
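A short sketch of the pointwise lifting described in the Pointwise excerpt above (the function names are mine): a binary operation on a codomain is lifted to functions by applying it at every point of the domain, and the componentwise vector operations mentioned afterwards are the same idea with the index set as domain.

```python
def pointwise(op):
    """Lift a binary operation on values to a binary operation on functions."""
    def lifted(f, g):
        return lambda x: op(f(x), g(x))
    return lifted

add = pointwise(lambda a, b: a + b)

f = lambda x: x * x
g = lambda x: 3 * x
h = add(f, g)                      # h(x) = f(x) + g(x), defined pointwise
print(h(2))                        # 10

# Componentwise addition of vectors is the same lifting, with the
# index set {0, 1, ..., n-1} playing the role of the domain.
v = (1, 2, 3)
w = (10, 20, 30)
print(tuple(a + b for a, b in zip(v, w)))   # (11, 22, 33)
```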
Therefore, any vector corresponds to the function such that , and any componentwise operation on vectors is the pointwise operation on functions corresponding to those vectors. Pointwise relations In order theory it is common to define a pointwise partial order on functions. With A, B posets, the set of functions A → B ca" https://en.wikipedia.org/wiki/Fourth%20dimension%20in%20art,"New possibilities opened up by the concept of four-dimensional space (and difficulties involved in trying to visualize it) helped inspire many modern artists in the first half of the twentieth century. Early Cubists, Surrealists, Futurists, and abstract artists took ideas from higher-dimensional mathematics and used them to radically advance their work. Early influence French mathematician Maurice Princet was known as ""le mathématicien du cubisme"" (""the mathematician of cubism""). An associate of the School of Paris—a group of avant-gardists including Pablo Picasso, Guillaume Apollinaire, Max Jacob, Jean Metzinger, and Marcel Duchamp—Princet is credited with introducing the work of Henri Poincaré and the concept of the ""fourth dimension"" to the cubists at the Bateau-Lavoir during the first decade of the 20th century. Princet introduced Picasso to Esprit Jouffret's Traité élémentaire de géométrie à quatre dimensions (Elementary Treatise on the Geometry of Four Dimensions, 1903), a popularization of Poincaré's Science and Hypothesis in which Jouffret described hypercubes and other complex polyhedra in four dimensions and projected them onto the two-dimensional page. Picasso's Portrait of Daniel-Henry Kahnweiler in 1910 was an important work for the artist, who spent many months shaping it. The portrait bears similarities to Jouffret's work and shows a distinct movement away from the Proto-Cubist fauvism displayed in Les Demoiselles d'Avignon, to a more considered analysis of space and form. Early cubist Max Weber wrote an article entitled ""In The Fourth Dimension from a Plastic Point of View"", for Alfred Stieglitz's July 1910 issue of Camera Work. In the piece, Weber states, ""In plastic art, I believe, there is a fourth dimension which may be described as the consciousness of a great and overwhelming sense of space-magnitude in all directions at one time, and is brought into existence through the three known measurements."" Another influence on the School of Paris" https://en.wikipedia.org/wiki/Real-time%20path%20planning,"Real-Time Path Planning is a term used in robotics that consists of motion planning methods that can adapt to real time changes in the environment. This includes everything from primitive algorithms that stop a robot when it approaches an obstacle to more complex algorithms that continuously takes in information from the surroundings and creates a plan to avoid obstacles. These methods are different from something like a Roomba robot vacuum as the Roomba may be able to adapt to dynamic obstacles but it does not have a set target. A better example would be Embark self-driving semi-trucks that have a set target location and can also adapt to changing environments. The targets of path planning algorithms are not limited to locations alone. Path planning methods can also create plans for stationary robots to change their poses. An example of this can be seen in various robotic arms, where path planning allows the robotic system to change its pose without colliding with itself. 
As a subset of motion planning, it is an important part of robotics as it allows robots to find the optimal path to a target. This ability to find an optimal path also plays an important role in other fields such as video games and gene sequencing. Concepts In order to create a path from a target point to a goal point, the various areas within the simulated environment must be classified. This allows a path to be created in a 2D or 3D space where the robot can avoid obstacles. Work Space The work space is an environment that contains the robot and various obstacles. This environment can be either 2-dimensional or 3-dimensional. Configuration Space The configuration of a robot is determined by its current position and pose. The configuration space is the set of all configurations of the robot. By containing all the possible configurations of the robot, it also represents all transformations that can be applied to the robot. Within the configuration sets there are additiona" https://en.wikipedia.org/wiki/Real-time%20clock,"A real-time clock (RTC) is an electronic device (most often in the form of an integrated circuit) that measures the passage of time. Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device which needs to keep accurate time of day. Terminology The term real-time clock is used to avoid confusion with ordinary hardware clocks which are only signals that govern digital electronics, and do not count time in human units. RTC should not be confused with real-time computing, which shares its three-letter acronym but does not directly relate to time of day. Purpose Although keeping time can be done without an RTC, using one has benefits: low power consumption (important when running from alternate power); freeing the main system for time-critical tasks; and sometimes greater accuracy than other methods. A GPS receiver can shorten its startup time by comparing the current time, according to its RTC, with the time at which it last had a valid signal. If it has been less than a few hours, then the previous ephemeris is still usable. Some motherboards are made without real-time clocks; the real-time clock is omitted out of the desire to save money. Power source RTCs often have an alternate source of power, so they can continue to keep time while the primary source of power is off or unavailable. This alternate source of power is normally a lithium battery in older systems, but some newer systems use a supercapacitor, because supercapacitors are rechargeable and can be soldered in place. The alternate power source can also supply power to battery backed RAM. Timing Most RTCs use a crystal oscillator, but some have the option of using the power line frequency. The crystal frequency is usually 32.768 kHz, the same frequency used in quartz clocks and watches. Being exactly 2¹⁵ cycles per second, it is a convenient rate to use with simple binary counter circuits. The low frequency saves power, while remain" https://en.wikipedia.org/wiki/Food%20processing,"Food processing is the transformation of agricultural products into food, or of one form of food into other forms. Food processing takes many forms, from grinding grain into raw flour and home cooking to the complex industrial methods used in the making of convenience foods. 
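Tying together the work-space and configuration-space ideas from the path-planning excerpt above, here is a deliberately small sketch (not any particular planner from the article): a 2-D work space represented as an occupancy grid and a breadth-first search that returns a shortest obstacle-free path. A real-time planner would re-run such a search, or repair its previous plan, whenever the grid changes.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (True = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None   # goal unreachable

grid = [[False, False, False],
        [True,  True,  False],
        [False, False, False]]
print(bfs_path(grid, (0, 0), (2, 0)))
# [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```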
Some food processing methods play important roles in reducing food waste and improving food preservation, thus reducing the total environmental impact of agriculture and improving food security. The Nova classification groups food according to different food processing techniques. Primary food processing is necessary to make most foods edible while secondary food processing turns ingredients into familiar foods, such as bread. Tertiary food processing results in ultra-processed foods and has been widely criticized for promoting overnutrition and obesity, containing too much sugar and salt, too little fiber, and otherwise being unhealthful in respect to dietary needs of humans and farm animals. Processing levels Primary food processing Primary food processing turns agricultural products, such as raw wheat kernels or livestock, into something that can eventually be eaten. This category includes ingredients that are produced by ancient processes such as drying, threshing, winnowing and milling grain, shelling nuts, and butchering animals for meat. It also includes deboning and cutting meat, freezing and smoking fish and meat, extracting and filtering oils, canning food, preserving food through food irradiation, and candling eggs, as well as homogenizing and pasteurizing milk. Contamination and spoilage problems in primary food processing can lead to significant public health threats, as the resulting foods are used so widely. However, many forms of processing contribute to improved food safety and longer shelf life before the food spoils. Commercial food processing uses control systems such as hazard analysis and critical control points (HACCP) and failure mode and effects analysis (FMEA) to " https://en.wikipedia.org/wiki/Type%20%28biology%29,"In biology, a type is a particular specimen (or in some cases a group of specimens) of an organism to which the scientific name of that organism is formally associated. In other words, a type is an example that serves to anchor or centralizes the defining features of that particular taxon. In older usage (pre-1900 in botany), a type was a taxon rather than a specimen. A taxon is a scientifically named grouping of organisms with other like organisms, a set that includes some organisms and excludes others, based on a detailed published description (for example a species description) and on the provision of type material, which is usually available to scientists for examination in a major museum research collection, or similar institution. Type specimen According to a precise set of rules laid down in the International Code of Zoological Nomenclature (ICZN) and the International Code of Nomenclature for algae, fungi, and plants (ICN), the scientific name of every taxon is almost always based on one particular specimen, or in some cases specimens. Types are of great significance to biologists, especially to taxonomists. Types are usually physical specimens that are kept in a museum or herbarium research collection, but failing that, an image of an individual of that taxon has sometimes been designated as a type. Describing species and appointing type specimens is part of scientific nomenclature and alpha taxonomy. When identifying material, a scientist attempts to apply a taxon name to a specimen or group of specimens based on their understanding of the relevant taxa, based on (at least) having read the type description(s), preferably also based on an examination of all the type material of all of the relevant taxa. 
If there is more than one named type that all appear to be the same taxon, then the oldest name takes precedence and is considered to be the correct name of the material in hand. If on the other hand, the taxon appears never to have been named at all, th" https://en.wikipedia.org/wiki/Perceived%20performance,"Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects. The amount of time an application takes to start up, or a file to download, is not made faster by showing a startup screen (see Splash screen) or a file progress dialog box. However, it satisfies some human needs: it appears faster to the user as well as providing a visual cue to let them know the system is handling their request. In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance at the cost of marginally decreasing real performance. For example, drawing and refreshing a progress bar while loading a file satisfies the user who is watching, but steals time from the process that is actually loading the file, but usually this is only a very small amount of time. All such techniques must exploit the inability of the user to accurately judge real performance, or they would be considered detrimental to performance. Techniques for improving perceived performance may include more than just decreasing the delay between the user's request and visual feedback. Sometimes an increase in delay can be perceived as a performance improvement, such as when a variable controlled by the user is set to a running average of the users input. This can give the impression of smoother motion, but the controlled variable always reaches the desired value a bit late. Since it smooths out hi-frequency jitter, when the user is attempting to hold the value constant, they may feel like they are succeeding more readily. This kind of compromise would be appropriate for control of a sniper rifle in a video game. Another example may be doing trivial computation ahead of time rather than after a user triggers an action, such as pre-sorting a large list of data before a user w" https://en.wikipedia.org/wiki/Food%20science,"Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology. Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example. Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may study more fundamental phenomena that are directly linked to the production of food products and its properties. 
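The perceived-performance excerpt above mentions setting a user-controlled variable to a running average of the input, trading a little latency for perceived smoothness. A minimal sketch of that idea (an exponential moving average; the smoothing factor is an arbitrary illustrative choice):

```python
def smoothed(samples, alpha=0.3):
    """Exponential moving average: smoother motion, but the output lags the input."""
    value = None
    for x in samples:
        value = x if value is None else alpha * x + (1 - alpha) * value
        yield value

raw = [0, 0, 10, 9, 11, 10, 30, 29, 31, 30]   # jittery user input
print([round(v, 1) for v in smoothed(raw)])
# the output rises toward each new level a few samples late:
# the jitter is suppressed at the cost of a small, usually imperceptible, delay
```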
Definition The Institute of Food Technologists defines food science as ""the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public"". The textbook Food Science defines food science in simpler terms as ""the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing"". Disciplines Some of the subdisciplines of food science are described below. Food chemistry Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk. It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This" https://en.wikipedia.org/wiki/Conserved%20name,"A conserved name or nomen conservandum (plural nomina conservanda, abbreviated as nom. cons.) is a scientific name that has specific nomenclatural protection. That is, the name is retained, even though it violates one or more rules which would otherwise prevent it from being legitimate. Nomen conservandum is a Latin term, meaning ""a name to be conserved"". The terms are often used interchangeably, such as by the International Code of Nomenclature for Algae, Fungi, and Plants (ICN), while the International Code of Zoological Nomenclature favours the term ""conserved name"". The process for conserving botanical names is different from that for zoological names. Under the botanical code, names may also be ""suppressed"", nomen rejiciendum (plural nomina rejicienda or nomina utique rejicienda, abbreviated as nom. rej.), or rejected in favour of a particular conserved name, and combinations based on a suppressed name are also listed as “nom. rej.”. Botany Conservation In botanical nomenclature, conservation is a nomenclatural procedure governed by Article 14 of the ICN. Its purpose is ""to avoid disadvantageous nomenclatural changes entailed by the strict application of the rules, and especially of the principle of priority [...]"" (Art. 14.1). Conservation is possible only for names at the rank of family, genus or species. It may effect a change in original spelling, type, or (most commonly) priority. Conserved spelling (orthographia conservanda, orth. cons.) allows spelling usage to be preserved even if the name was published with another spelling: Euonymus (not Evonymus), Guaiacum (not Guajacum), etc. (see orthographical variant). Conserved types (typus conservandus, typ. cons.) are often made when it is found that a type in fact belongs to a different taxon from the description, when a name has subsequently been generally misapplied to a different taxon, or when the type belongs to a small group separate from the monophyletic bulk of a taxon. Conservation of a nam" https://en.wikipedia.org/wiki/List%20of%20algebraic%20number%20theory%20topics,"This is a list of algebraic number theory topics. Basic topics These topics are basic to the field, either as prototypical examples, or as basic objects of study. 
Algebraic number field Gaussian integer, Gaussian rational Quadratic field Cyclotomic field Cubic field Biquadratic field Quadratic reciprocity Ideal class group Dirichlet's unit theorem Discriminant of an algebraic number field Ramification (mathematics) Root of unity Gaussian period Important problems Fermat's Last Theorem Class number problem for imaginary quadratic fields Stark–Heegner theorem Heegner number Langlands program General aspects Different ideal Dedekind domain Splitting of prime ideals in Galois extensions Decomposition group Inertia group Frobenius automorphism Chebotarev's density theorem Totally real field Local field p-adic number p-adic analysis Adele ring Idele group Idele class group Adelic algebraic group Global field Hasse principle Hasse–Minkowski theorem Galois module Galois cohomology Brauer group Class field theory Class field theory Abelian extension Kronecker–Weber theorem Hilbert class field Takagi existence theorem Hasse norm theorem Artin reciprocity Local class field theory Iwasawa theory Iwasawa theory Herbrand–Ribet theorem Vandiver's conjecture Stickelberger's theorem Euler system p-adic L-function Arithmetic geometry Arithmetic geometry Complex multiplication Abelian variety of CM-type Chowla–Selberg formula Hasse–Weil zeta function Mathematics-related lists" https://en.wikipedia.org/wiki/Cactus%20graph,"In graph theory, a cactus (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for nontrivial cacti) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle. Properties Cacti are outerplanar graphs. Every pseudotree is a cactus. A nontrivial graph is a cactus if and only if every block is either a simple cycle or a single edge. The family of graphs in which each component is a cactus is downwardly closed under graph minor operations. This graph family may be characterized by a single forbidden minor, the four-vertex diamond graph formed by removing an edge from the complete graph K4. Triangular cactus A triangular cactus is a special type of cactus graph such that each cycle has length three and each edge belongs to a cycle. For instance, the friendship graphs, graphs formed from a collection of triangles joined together at a single shared vertex, are triangular cacti. As well as being cactus graphs the triangular cacti are also block graphs and locally linear graphs. Triangular cactuses have the property that they remain connected if any matching is removed from them; for a given number of vertices, they have the fewest possible edges with this property. Every tree with an odd number of vertices may be augmented to a triangular cactus by adding edges to it, giving a minimal augmentation with the property of remaining connected after the removal of a matching. The largest triangular cactus in any graph may be found in polynomial time using an algorithm for the matroid parity problem. Since triangular cactus graphs are planar graphs, the largest triangular cactus can be used as an approximation to the largest planar subgraph, an important subproblem in planarization. 
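Following the cactus-graph excerpt above, which characterizes nontrivial cacti by their blocks (every block is a single edge or a simple cycle), here is a hedged sketch of that test. It assumes the networkx library and uses the fact that a biconnected block on k ≥ 3 vertices is a cycle exactly when it has k edges.

```python
import networkx as nx

def is_cactus(g):
    """Connected graph whose every block is an edge or a simple cycle."""
    if not nx.is_connected(g):
        return False
    for block in nx.biconnected_components(g):   # vertex sets of the blocks
        sub = g.subgraph(block)
        if len(block) >= 3 and sub.number_of_edges() != len(block):
            return False                          # block is denser than a cycle
    return True

# Two triangles sharing one vertex (a friendship graph) is a cactus ...
f2 = nx.Graph([(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)])
# ... while K4 (the diamond graph plus an edge) is not.
k4 = nx.complete_graph(4)
print(is_cactus(f2), is_cactus(k4))   # True False
```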
As an approximation algorithm, this method has approximation ratio 4/9, the best known for the maximum p" https://en.wikipedia.org/wiki/Hardware-based%20encryption,"Hardware-based encryption is the use of computer hardware to assist software, or sometimes replace software, in the process of data encryption. Typically, this is implemented as part of the processor's instruction set. For example, the AES encryption algorithm (a modern cipher) can be implemented using the AES instruction set on the ubiquitous x86 architecture. Such instructions also exist on the ARM architecture. However, more unusual systems exist where the cryptography module is separate from the central processor, instead being implemented as a coprocessor, in particular a secure cryptoprocessor or cryptographic accelerator, of which an example is the IBM 4758, or its successor, the IBM 4764. Hardware implementations can be faster and less prone to exploitation than traditional software implementations, and furthermore can be protected against tampering. History Prior to the use of computer hardware, cryptography could be performed through various mechanical or electro-mechanical means. An early example is the Scytale used by the Spartans. The Enigma machine was an electro-mechanical system cipher machine notably used by the Germans in World War II. After World War II, purely electronic systems were developed. In 1987 the ABYSS (A Basic Yorktown Security System) project was initiated. The aim of this project was to protect against software piracy. However, the application of computers to cryptography in general dates back to the 1940s and Bletchley Park, where the Colossus computer was used to break the encryption used by German High Command during World War II. The use of computers to encrypt, however, came later. In particular, until the development of the integrated circuit, of which the first was produced in 1960, computers were impractical for encryption, since, in comparison to the portable form factor of the Enigma machine, computers of the era took the space of an entire building. It was only with the development of the microcomputer that computer encr" https://en.wikipedia.org/wiki/Index%20of%20accounting%20articles,"This page is an index of accounting topics. 
A Accounting ethics - Accounting information system - Accounting research - Activity-Based Costing - Assets B Balance sheet - Big Four auditors - Bond - Bookkeeping - Book value C Cash-basis accounting - Cash-basis versus accrual-basis accounting - Cash flow statement - Certified General Accountant - Certified Management Accountants - Certified Public Accountant - Chartered accountant - Chart of accounts - Common stock - Comprehensive income - Construction accounting - Convention of conservatism - Convention of disclosure - Cost accounting - Cost of capital - Cost of goods sold - Creative accounting - Credit - Credit note - Current asset - Current liability D Debitcapital reserve - Debit note - Debt - Deficit (disambiguation) - Depreciation - Diluted earnings per share - Dividend - Double-entry bookkeeping system - Dual aspect E E-accounting - EBIT - EBITDA - Earnings per share - Engagement Letter - Entity concept - Environmental accounting - Expense - Equity - Equivalent Annual Cost F Financial Accounting Standards Board - Financial accountancy - Financial audit - Financial reports - Financial statements - Fixed assets - Fixed assets management - Forensic accounting - Fraud deterrence - Free cash flow - Fund accounting G Gain - General ledger - Generally Accepted Accounting Principles - Going concern - Goodwill - Governmental Accounting Standards Board H Historical cost - History of accounting I Income - Income statement - Institute of Chartered Accountants in England and Wales - Institute of Chartered Accountants of Scotland - Institute of Management Accountants - Intangible asset - Interest - Internal audit - International Accounting Standards Board - International Accounting Standards Committee - International Accounting Standards - International Federation of Accountants - International Financial Reporting Standards - Inventory - Investment - Invoices - Indian Accounting Standards J Job costing - Journal L " https://en.wikipedia.org/wiki/Scaffolding%20%28bioinformatics%29,"Scaffolding is a technique used in bioinformatics. It is defined as follows: Link together a non-contiguous series of genomic sequences into a scaffold, consisting of sequences separated by gaps of known length. The sequences that are linked are typically contiguous sequences corresponding to read overlaps.When creating a draft genome, individual reads of DNA are second assembled into contigs, which, by the nature of their assembly, have gaps between them. The next step is to then bridge the gaps between these contigs to create a scaffold. This can be done using either optical mapping or mate-pair sequencing. Assembly software The sequencing of the Haemophilus influenzae genome marked the advent of scaffolding. That project generated a total of 140 contigs, which were oriented and linked using paired end reads. The success of this strategy prompted the creation of the software, Grouper, which was included in genome assemblers. Until 2001, this was the only scaffolding software. After the Human Genome Project and Celera proved that it was possible to create a large draft genome, several other similar programs were created. Bambus was created in 2003 and was a rewrite of the original grouper software, but afforded researchers the ability to adjust scaffolding parameters. This software also allowed for optional use of other linking data, such as contig order in a reference genome. Algorithms used by assembly software are very diverse, and can be classified as based on iterative marker ordering, or graph based. 
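As a toy illustration of the scaffold structure described in the bioinformatics excerpt above (contiguous sequences separated by gaps of known length), the snippet below joins ordered contigs with runs of the conventional placeholder base 'N' sized to each estimated gap; the contig sequences and gap sizes are made up for the example.

```python
def build_scaffold(contigs, gap_lengths, gap_char="N"):
    """Join ordered contigs into one scaffold string, padding gaps with N's."""
    assert len(gap_lengths) == len(contigs) - 1
    pieces = [contigs[0]]
    for contig, gap in zip(contigs[1:], gap_lengths):
        pieces.append(gap_char * gap)
        pieces.append(contig)
    return "".join(pieces)

contigs = ["ATGCGT", "TTAGC", "CCGTA"]   # assembled contigs (toy data)
gaps = [4, 2]                            # gap sizes estimated from mate-pair distances
print(build_scaffold(contigs, gaps))     # ATGCGTNNNNTTAGCNNCCGTA
```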
Graph based applications have the capacity to order and orient over 10,000 markers, compared to the maximum 3000 markers capable of iterative marker applications. Algorithms can be further classified as greedy, non greedy, conservative, or non conservative. Bambus uses a greedy algorithm, defined as such because it joins together contigs with the most links first. The algorithm used by Bambus 2 removes repetitive contigs before orienting and ordering them in" https://en.wikipedia.org/wiki/Table%20of%20Newtonian%20series,"In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence written in the form where is the binomial coefficient and is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus. List The generalized binomial theorem gives A proof for this identity can be obtained by showing that it satisfies the differential equation The digamma function: The Stirling numbers of the second kind are given by the finite sum This formula is a special case of the kth forward difference of the monomial xn evaluated at x = 0: A related identity forms the basis of the Nörlund–Rice integral: where is the Gamma function and is the Beta function. The trigonometric functions have umbral identities: and The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial . The first few terms of the sin series are which can be recognized as resembling the Taylor series for sin x, with (s)n standing in the place of xn. In analytic number theory it is of interest to sum where B are the Bernoulli numbers. Employing the generating function its Borel sum can be evaluated as The general relation gives the Newton series where is the Hurwitz zeta function and the Bernoulli polynomial. The series does not converge, the identity holds formally. Another identity is which converges for . This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent) See also Binomial transform List of factorial and binomial topics Nörlund–Rice integral Carlson's theorem" https://en.wikipedia.org/wiki/Tuple,"In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An -tuple is a tuple of elements, where is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called respectively a singleton and an ordered pair. Tuple may be formally defined from ordered pairs by recurrence by starting from ordered pairs; indeed, a -tuple can be identified with the ordered pair of its first elements and its th element. Tuples are usually written by listing the elements within parentheses """", separated by a comma and a space; for example, denotes a 5-tuple. Sometimes other symbols are used to surround the elements, such as square brackets ""[ ]"" or angle brackets ""⟨ ⟩"". Braces ""{ }"" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term tuple can often occur when discussing other mathematical objects, such as vectors. In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. 
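The Table-of-Newtonian-series excerpt above has lost its displayed sums; one of them, the finite-difference formula for the Stirling numbers of the second kind, S(n, k) = (1/k!) Σ_{j=0..k} (−1)^{k−j} C(k, j) j^n, is easy to verify numerically. The sketch below (a standard identity; the implementation is mine) checks a couple of known values.

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind via the k-th forward difference of x^n at 0."""
    total = sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1))
    return total // factorial(k)

print(stirling2(4, 2), stirling2(5, 3))   # 7 25: the number of ways to partition
                                          # 4 items into 2 blocks, and 5 items into 3
```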
Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples. Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy. Etymology The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., ‑tuple, ..., where the prefixes are" https://en.wikipedia.org/wiki/List%20of%20wireless%20sensor%20nodes,"A sensor node, also known as a mote (chiefly in North America), is a node in a sensor network that is capable of performing some processing, gathering sensory information and communicating with other connected nodes in the network. A mote is a node but a node is not always a mote. List of Wireless Sensor Nodes See also Wireless sensor network Sensor node Mesh networking Sun SPOT Embedded computer Embedded system Mobile ad hoc network (MANETS) Smartdust Sensor Web" https://en.wikipedia.org/wiki/Quantum%20biology,"Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems. Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative. History Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an ""aperiodic crystal"" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by ""quantum leaps"". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbrück argu" https://en.wikipedia.org/wiki/Shared%20memory,"In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. 
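A tiny illustration of the tuple/record distinction drawn in the excerpt above (the type and field names are mine): a plain tuple is ordered and accessed by position, while a record-like NamedTuple adds field labels on top of the same ordered structure.

```python
from typing import NamedTuple

pair = (3, 4)                     # plain 2-tuple: access by position only
print(pair[0], pair[1])

class Point(NamedTuple):          # record-style: labelled fields, still an ordered tuple
    x: int
    y: int

p = Point(x=3, y=4)
print(p.x, p[1])                  # accessible by label and by position
x, y = p                          # destructuring works as for any tuple
print(x, y)
```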
Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors. Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory. In hardware In computer hardware, shared memory refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiprocessor computer system. Shared memory systems may use: uniform memory access (UMA): all the processors share the physical memory uniformly; non-uniform memory access (NUMA): memory access time depends on the memory location relative to a processor; cache-only memory architecture (COMA): the local memories for the processors at each node is used as cache instead of as actual main memory. A shared memory system is relatively easy to program since all processors share a single view of data and the communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications: access time degradation: when several processors try to access the same memory location it causes contention. Trying to access nearby memory locations may cause false sharing. Shared memory computers cannot scale very well. Most of them have ten or fewer processors; lack of data coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors, otherwise the different processors will be working with incoherent data. Such cache coherence protocols can, when they work well, provide extremely hig" https://en.wikipedia.org/wiki/Thermotolerance,"Thermotolerance is the ability of an organism to survive high temperatures. An organism's natural tolerance of heat is their basal thermotolerance. Meanwhile, acquired thermotolerance is defined as an enhanced level of thermotolerance after exposure to a heat stress. In plants Multiple factors contribute to thermotolerance including signaling molecules like abscisic acid, salicylic acid, and pathways like the ethylene signaling pathway and heat stress response pathway. The various heat stress response pathways enhance thermotolerance. The heat stress response in plants is mediated by heat shock transcription factors (HSF) and is well conserved across eukaryotes. HSFs are essential in plants’ ability to both sense and respond to stress. The HSFs, which are divided into three families (A, B, and C), encode the expression of heat shock proteins (HSP). Past studies have found that transcriptional activators HsfA1 and HsfB1 are the main positive regulators of heat stress response genes in Arabidopsis thaliana. The general pathway to thermotolerance is characterized by sensing of heat stress, activation of HSFs, upregulation of heat response, and return to the non-stressed state. In 2011, while studying heat stress A. thaliana, Ikeda et al. concluded that the early response is regulated by HsfA1 and the extended response is regulated by HsfA2. They used RT-PCR to analyze the expression of HS-inducible genes of mutant (ectopic and nonfunctional HsfB1) and wild type plants. Plants with mutant HsfB1 had lower acquired thermotolerance, based on both lower expression of heat stress genes and visibly altered phenotypes. 
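As a hedged sketch of the software side of the shared-memory excerpt above (this uses Python's multiprocessing.shared_memory module as one concrete API for the idea, not anything the article specifies): one process creates a block of RAM, and a second process attaches to it by name and reads the bytes the first one wrote, with no copy passed between them.

```python
from multiprocessing import Process, shared_memory

def reader(name):
    shm = shared_memory.SharedMemory(name=name)   # attach to the existing block
    print(bytes(shm.buf[:5]))                     # b'hello'
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"                        # write directly into shared RAM
    p = Process(target=reader, args=(shm.name,))
    p.start()
    p.join()
    shm.close()
    shm.unlink()                                  # release the block
```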
With these results they concluded that class A HSFs positively regulated the heat stress response while class B HSFs repressed the expression of HSF genes. Therefore, both were necessary for plants to return to non-stressed conditions and acquired thermotolerance. In animals" https://en.wikipedia.org/wiki/Eyeball%20network,"Eyeball network is a slang term used by network engineers and architects that refers to an access network whose primary users use the network to “look at things” (browse the Internet, read email, etc.) and consume content, as opposed to a network that may be used primarily to generate its own data, or “content networks/providers”. The term “eyeball network” is often overheard in conversations and seen in articles that discuss peering relationships between other networks, as well as net neutrality issues. An example of an eyeball network would be any given ISP that provides internet connectivity to end-users – The ISP may peer with Google (which is a content provider) where the end users consume content serviced/provided by Google, in this case the ISP is just an “eyeball network” providing a means for the end user to reach Google provided actual content. However, it is to be noted that not all ISPs are eyeball networks, they can be pure transit providers. With Tier 2 networks and lower, they can serve as both an eyeball network and a transit provider, depending on their business model. In the modern day ecosystem where peering is given priority, the lines are blurred between the different types of networks as ultimately any given network must be able to reach every other given network on the internet at large." https://en.wikipedia.org/wiki/Generalized%20signal%20averaging,"Within signal processing, in many cases only one image with noise is available, and averaging is then realized in a local neighbourhood. Results are acceptable if the noise is smaller in size than the smallest objects of interest in the image, but blurring of edges is a serious disadvantage. In the case of smoothing within a single image, one has to assume that there are no changes in the gray levels of the underlying image data. This assumption is clearly violated at locations of image edges, and edge blurring is a direct consequence of violating the assumption. Description Averaging is a special case of discrete convolution. For a 3 by 3 neighbourhood, the convolution mask M is: The significance of the central pixel may be increased, as it approximates the properties of noise with a Gaussian probability distribution: A suitable page for beginners about matrices is at: https://web.archive.org/web/20060819141930/http://www.gamedev.net/reference/programming/features/imageproc/page2.asp The whole article starts on page: https://web.archive.org/web/20061019072001/http://www.gamedev.net/reference/programming/features/imageproc/" https://en.wikipedia.org/wiki/Molluscivore,"A molluscivore is a carnivorous animal that specialises in feeding on molluscs such as gastropods, bivalves, brachiopods and cephalopods. Known molluscivores include numerous predatory (and often cannibalistic) molluscs, (e.g.octopuses, murexes, decollate snails and oyster drills), arthropods such as crabs and firefly larvae, and, vertebrates such as fish, birds and mammals. Molluscivory is performed in a variety ways with some animals highly adapted to this method of feeding behaviour. 
A similar behaviour, durophagy, describes the feeding of animals that consume hard-shelled or exoskeleton bearing organisms, such as corals, shelled molluscs, or crabs. Description Molluscivory can be performed in several ways: In some cases, the mollusc prey are simply swallowed entire, including the shell, whereupon the prey is killed through suffocation and or exposure to digestive enzymes. Only cannibalistic sea slugs, snail-eating cone shells of the taxon Coninae, and some sea anemones use this method. One method, used especially by vertebrate molluscivores, is to break the shell, either by exerting force on the shell until it breaks, often by biting the shell, like with oyster crackers, mosasaurs, and placodonts, or hammering at the shell, e.g. oystercatchers and crabs, or by simply dashing the mollusc on a rock (e.g. song thrushes, gulls, and sea otters). Another method is to remove the shell from the prey. Molluscs are attached to their shell by strong muscular ligaments, making the shell's removal difficult. Molluscivorous birds, such as oystercatchers and the Everglades snail kite, insert their elongate beak into the shell to sever these attachment ligaments, facilitating removal of the prey. The carnivorous terrestrial pulmonate snail known as the ""decollate snail"" (""decollate"" being a synonym for ""decapitate"") uses a similar method: it reaches into the opening of the prey's shell and bites through the muscles in the prey's neck, whereupon it immediately begins d" https://en.wikipedia.org/wiki/Digital%20room%20correction,"Digital room correction (or DRC) is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system. History The use of analog filters, such as equalizers, to normalize the frequency response of a playback system has a long history; however, analog filters are very limited in their ability to correct the distortion found in many rooms. Although digital implementations of the equalizers have been available for some time, digital room correction is usually used to refer to the construction of filters which attempt to invert the impulse response of the room and playback system, at least in part. Digital correction systems are able to use acausal filters, and are able to operate with optimal time resolution, optimal frequency resolution, or any desired compromise along the Gabor limit. Digital room correction is a fairly new area of study which has only recently been made possible by the computational power of modern CPUs and DSPs. Operation The configuration of a digital room correction system begins with measuring the impulse response of the room at a reference listening position, and sometimes at additional locations for each of the loudspeakers. Then, computer software is used to compute a FIR filter, which reverses the effects of the room and linear distortion in the loudspeakers. In low performance conditions, a few IIR peaking filters are used instead of FIR filters, which require convolution, a relatively computation-heavy operation. Finally, the calculated filter is loaded into a computer or other room correction device which applies the filter in real time. Because most room correction filters are acausal, there is some delay. 
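The core playback operation described here, applying a precomputed FIR correction filter by convolution, can be sketched as follows; the filter taps are a placeholder identity filter, not a real room measurement, and the sample rate is an assumption.

```python
# Sketch of the DRC playback step: convolve audio with a precomputed FIR filter.
# The taps below are a placeholder identity filter, not a measured correction.
import numpy as np

fs = 48_000                               # assumed sample rate in Hz
fir = np.zeros(1024)
fir[0] = 1.0                              # placeholder: pass-through filter
audio = np.random.randn(fs)               # one second of test signal

corrected = np.convolve(audio, fir)[: len(audio)]   # real systems do this blockwise
```

An acausal filter has its main peak away from tap 0, and the resulting latency of roughly (peak index) / fs seconds is the added delay referred to next.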
Most DRC systems allow the operator to control the added delay through" https://en.wikipedia.org/wiki/Code,"In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time. The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or/and Spanish. One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent. Theory In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings. Before giving a mathematically precise definition, this is a brief example. The mapping is a code, whose source alphabet is the set and whose target alphabet is the set . Using the extension of the code, the encoded string 0011001 can be grouped into codewords a" https://en.wikipedia.org/wiki/DeWitt%20notation,"Physics often deals with classical models where the dynamical variables are a collection of functions {φα}α over a d-dimensional space/spacetime manifold M where α is the ""flavor"" index. This involves functionals over the φs, functional derivatives, functional integrals, etc. From a functional point of view this is equivalent to working with an infinite-dimensional smooth manifold where its points are an assignment of a function for each α, and the procedure is in analogy with differential geometry where the coordinates for a point x of the manifold M are φα(x). In the DeWitt notation''' (named after theoretical physicist Bryce DeWitt), φα(x) is written as φi where i is now understood as an index covering both α and x. So, given a smooth functional A, A,i stands for the functional derivative as a functional of φ''. In other words, a ""1-form"" field over the infinite dimensional ""functional manifold"". In integrals, the Einstein summation convention is used. Alternatively," https://en.wikipedia.org/wiki/Food%20Valley,"Food Valley is a region in the Netherlands where international food companies, research institutes, and Wageningen University and Research Centre are concentrated. The Food Valley area is the home of a large number of food multinationals and within the Food Valley about 15,000 professionals are active in food related sciences and technological development. 
Far more are involved in the manufacturing of food products. Food Valley, with the city of Wageningen as its center, is intended to form a dynamic heart of knowledge for the international food industry. Within this region, Foodvalley NL is intended to create conditions so that food manufacturers and knowledge institutes can work together in developing new and innovating food concepts. Current research about the Food Valley The Food Valley as a region has been the subject of study by several human geographers. Even before the Food Valley was established as an organisation in 2004 and as a region in 2011 Frank Kraak and Frits Oevering made a SWOT analysis of the region using an Evolutionary economics framework and compared it with similar regions in Canada, Denmark, Italy and Sweden. A similar study was done by Floris Wieberdink. The study utilised Geomarketing concepts in the WERV, the predecessor of the Regio Food Valley. Geijer and Van der Velden studied the economic development of the Regio Food Valley using statistical data. Discussion The research performed in the Food Valley has generated some discussion about the influence of culture on economic growth. Wieberdink argued that culture and habitat are not spatially bounded, but historically. More recently a study about the Food Valley argued that culture and habitat are in fact spatially bounded. Both studies, however, recommend the Regio Food Valley to promote its distinct culture. See also" https://en.wikipedia.org/wiki/Lemniscate,"In algebraic geometry, a lemniscate is any of several figure-eight or -shaped curves. The word comes from the Latin meaning ""decorated with ribbons"", from the Greek meaning ""ribbon"", or which alternatively may refer to the wool from which the ribbons were made. Curves that have been called a lemniscate include three quartic plane curves: the hippopede or lemniscate of Booth, the lemniscate of Bernoulli, and the lemniscate of Gerono. The study of lemniscates (and in particular the hippopede) dates to ancient Greek mathematics, but the term ""lemniscate"" for curves of this type comes from the work of Jacob Bernoulli in the late 17th century. History and examples Lemniscate of Booth The consideration of curves with a figure-eight shape can be traced back to Proclus, a Greek Neoplatonist philosopher and mathematician who lived in the 5th century AD. Proclus considered the cross-sections of a torus by a plane parallel to the axis of the torus. As he observed, for most such sections the cross section consists of either one or two ovals; however, when the plane is tangent to the inner surface of the torus, the cross-section takes on a figure-eight shape, which Proclus called a horse fetter (a device for holding two feet of a horse together), or ""hippopede"" in Greek. The name ""lemniscate of Booth"" for this curve dates to its study by the 19th-century mathematician James Booth. The lemniscate may be defined as an algebraic curve, the zero set of the quartic polynomial when the parameter d is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of d one instead obtains the oval of Booth. Lemniscate of Bernoulli In 1680, Cassini studied a family of curves, now called the Cassini oval, defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci, is a constant. 
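Written out in the usual coordinates (a standard formulation, not quoted from the text), with foci at (±a, 0) and constant b², the Cassini condition and its figure-eight special case read:

```latex
% Cassini oval: the product of the two focal distances equals b^2
\bigl((x-a)^2 + y^2\bigr)\,\bigl((x+a)^2 + y^2\bigr) = b^4
% Special case b = a: the lemniscate of Bernoulli
(x^2 + y^2)^2 = 2a^2\,(x^2 - y^2)
```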
Under very particular circumstances (when the half-distance between the points is " https://en.wikipedia.org/wiki/Facultative,"Facultative means ""optional"" or ""discretionary"" (antonym obligate), used mainly in biology in phrases such as: Facultative (FAC), facultative wetland (FACW), or facultative upland (FACU): wetland indicator statuses for plants Facultative anaerobe, an organism that can use oxygen but also has anaerobic methods of energy production. It can survive in either environment Facultative biotroph, an organism, often a fungus, that can live as a saprotroph but also form mutualisms with other organisms at different times of its life cycle. Facultative biped, an animal that is capable of walking or running on two legs as well as walking or running on four limbs or more, as appropriate Facultative carnivore, a carnivore that does not depend solely on animal flesh for food but also can subsist on non-animal food. Compare this with the term omnivore Facultative heterochromatin, tightly packed but non-repetitive DNA in the form of Heterochromatin, but which can lose its condensed structure and become transcriptionally active Facultative lagoon, a type of stabilization pond used in biological treatment of industrial and domestic wastewater Facultative parasite, a parasite that can complete its life cycle without depending on a host Facultative photoperiodic plant, a plant that will eventually flower regardless of night length but is more likely to flower under appropriate light conditions. Facultative saprophyte, lives on dying, rather than dead, plant material facultative virus See also (antonym) Obligate Opportunism (Biology) Biology terminology" https://en.wikipedia.org/wiki/Trust%20on%20first%20use,"Trust on first use (TOFU), or trust upon first use (TUFU), is an authentication scheme used by client software which needs to establish a trust relationship with an unknown or not-yet-trusted endpoint. In a TOFU model, the client will try to look up the endpoint's identifier, usually either the public identity key of the endpoint, or the fingerprint of said identity key, in its local trust database. If no identifier exists yet for the endpoint, the client software will either prompt the user to confirm they have verified the purported identifier is authentic, or if manual verification is not assumed to be possible in the protocol, the client will simply trust the identifier which was given and record the trust relationship into its trust database. If in a subsequent connection a different identifier is received from the opposing endpoint, the client software will consider it to be untrusted. TOFU implementations In the SSH protocol, most client software (though not all) will, upon connecting to a not-yet-trusted server, display the server's public key fingerprint, and prompt the user to verify they have indeed authenticated it using an authenticated channel. The client will then record the trust relationship into its trust database. New identifier will cause a blocking warning that requires manual removal of the currently stored identifier. The XMPP client Conversations uses Blind Trust Before Verification, where all identifiers are blindly trusted until the user demonstrates will and ability to authenticate endpoints by scanning the QR-code representation of the identifier. After the first identifier has been scanned, the client will display a shield symbol for messages from authenticated endpoints, and red background for others. 
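The decision procedure just described can be summarised in a short sketch; the JSON trust store, the function names, and the SHA-256 fingerprinting below are illustrative assumptions, not the behaviour of any particular client.

```python
# Sketch of a TOFU check. The trust store is a plain JSON file mapping endpoint
# names to key fingerprints; names, storage format, and hashing are illustrative.
import hashlib
import json
import pathlib

TRUST_DB = pathlib.Path("known_endpoints.json")

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def check_endpoint(name: str, public_key: bytes) -> str:
    db = json.loads(TRUST_DB.read_text()) if TRUST_DB.exists() else {}
    fp = fingerprint(public_key)
    if name not in db:                      # first use: record and trust
        db[name] = fp
        TRUST_DB.write_text(json.dumps(db))
        return "trusted (first use)"
    if db[name] == fp:                      # matches the stored identifier
        return "trusted"
    return "untrusted: identifier changed"  # warn; possible MITM or key rotation
```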
In Signal the endpoints initially blindly trust the identifier and display non-blocking warnings when it changes. The identifier can be verified either by scanning a QR-code, or by exchanging the decimal representation of the identifie" https://en.wikipedia.org/wiki/Square%20root%20of%202,"The square root of 2 (approximately 1.4142) is a positive real number that, when multiplied by itself, equals the number 2. It may be written in mathematics as or . It is an algebraic number, and therefore not a transcendental number. Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property. Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length; this follows from the Pythagorean theorem. It was probably the first number known to be irrational. The fraction (≈ 1.4142857) is sometimes used as a good rational approximation with a reasonably small denominator. Sequence in the On-Line Encyclopedia of Integer Sequences consists of the digits in the decimal expansion of the square root of 2, here truncated to 65 decimal places: History The Babylonian clay tablet YBC 7289 (–1600 BC) gives an approximation of in four sexagesimal figures, , which is accurate to about six decimal digits, and is the closest possible three-place sexagesimal representation of : Another early approximation is given in ancient Indian mathematical texts, the Sulbasutras (–200 BC), as follows: Increase the length [of the side] by its third and this third by its own fourth less the thirty-fourth part of that fourth. That is, This approximation is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers, which can be derived from the continued fraction expansion of . Despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation. Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official s" https://en.wikipedia.org/wiki/Minimal%20counterexample,"In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the ideas of proof by induction and proof by contradiction. More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true. If the form of the contradiction is that we can derive a further counterexample D, that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent. 
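A standard instance of the technique (supplied here for illustration, not quoted from the text) is the descent proof that the square root of 2 is irrational:

```latex
% Suppose \sqrt{2} = p/q with positive integers p, q and q minimal. Then
p^2 = 2q^2 \;\Rightarrow\; p = 2r \;\Rightarrow\; 4r^2 = 2q^2 \;\Rightarrow\; q^2 = 2r^2 ,
% so \sqrt{2} = q/r with r = p/2 < q, a smaller counterexample. This contradicts
% the minimality of q, so no counterexample exists and \sqrt{2} is irrational.
```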
In which case, there may be multiple and more complex ways to structure the argument of the proof. The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind. Examples The minimal counterexample method has been much used in the classification of finite simple groups. The Feit–Thompson theorem, that finite simple groups that are not cyclic groups have even order, was based on the hypothesis of some, and therefore some minimal, simple group G of odd order. Every proper subgroup of G can be assumed a solvable group, meaning that m" https://en.wikipedia.org/wiki/Set-builder%20notation,"In set theory and its applications to logic, mathematics, and computer science, set-builder notation is a mathematical notation for describing a set by enumerating its elements, or stating the properties that its members must satisfy. Defining sets by properties is also known as set comprehension, set abstraction or as defining a set's intension. Sets defined by enumeration A set can be described directly by enumerating all of its elements between curly brackets, as in the following two examples: is the set containing the four numbers 3, 7, 15, and 31, and nothing else. is the set containing , , and , and nothing else (there is no order among the elements of a set). This is sometimes called the ""roster method"" for specifying a set. When it is desired to denote a set that contains elements from a regular sequence, an ellipsis notation may be employed, as shown in the next examples: is the set of integers between 1 and 100 inclusive. is the set of natural numbers. is the set of all integers. There is no order among the elements of a set (this explains and validates the equality of the last example), but with the ellipses notation, we use an ordered sequence before (or after) the ellipsis as a convenient notational vehicle for explaining which elements are in a set. The first few elements of the sequence are shown, then the ellipses indicate that the simplest interpretation should be applied for continuing the sequence. Should no terminating value appear to the right of the ellipses, then the sequence is considered to be unbounded. In general, denotes the set of all natural numbers such that . Another notation for is the bracket notation . A subtle special case is , in which is equal to the empty set . Similarly, denotes the set of all for . In each preceding example, each set is described by enumerating its elements. Not all sets can be described in this way, or if they can, their enumeration may be too long or too complicated to be useful. " https://en.wikipedia.org/wiki/Intel%20HEX,"Intel hexadecimal object file format, Intel hex format or Intellec Hex is a file format that conveys binary information in ASCII text form, making it possible to store on non-binary media such as paper tape, punch cards, etc., to display on text terminals or be printed on line-oriented printers. The format is commonly used for programming microcontrollers, EPROMs, and other types of programmable logic devices and hardware emulators. In a typical application, a compiler or assembler converts a program's source code (such as in C or assembly language) to machine code and outputs it into a HEX file. Some also use it as a container format holding packets of stream data. 
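One concrete property of the format, stated here as background rather than quoted from the text: each record ends in a checksum byte equal to the two's complement of the low byte of the sum of all preceding record bytes. A small sketch, using an illustrative record:

```python
# Verify the trailing checksum of an Intel HEX record (two's complement of the
# low byte of the sum of the other bytes). The sample record is illustrative.
def hex_checksum(record: str) -> int:
    body = bytes.fromhex(record[1:-2])   # skip the leading ':' and the checksum itself
    return (-sum(body)) & 0xFF

record = ":0300300002337A1E"
assert hex_checksum(record) == int(record[-2:], 16)   # 0x1E for this record
```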
Common file extensions used for the resulting files are .HEX or .H86. The HEX file is then read by a programmer to write the machine code into a PROM or is transferred to the target system for loading and execution. History The Intel hex format was originally designed for Intel's Intellec Microcomputer Development Systems (MDS) in 1973 in order to load and execute programs from paper tape. It was also used to specify memory contents to Intel for ROM production, which previously had to be encoded in the much less efficient BNPF (Begin-Negative-Positive-Finish) format. In 1973, Intel's ""software group"" consisted only of Bill Byerly and Ken Burget, and Gary Kildall as an external consultant doing business as Microcomputer Applications Associates (MAA) and founding Digital Research in 1974. Beginning in 1975, the format was utilized by Intellec Series II ISIS-II systems supporting diskette drives, with files using the file extension HEX. Many PROM and EPROM programming devices accept this format. Format Intel HEX consists of lines of ASCII text that are separated by line feed or carriage return characters or both. Each text line contains uppercase hexadecimal characters that encode multiple binary numbers. The binary numbers may represent data, memory addresses, or other values, depending on their position" https://en.wikipedia.org/wiki/Substrate%20coupling,"In an integrated circuit, a signal can couple from one node to another via the substrate. This phenomenon is referred to as substrate coupling or substrate noise coupling. The push for reduced cost, more compact circuit boards, and added customer features has provided incentives for the inclusion of analog functions on primarily digital MOS integrated circuits (ICs) forming mixed-signal ICs. In these systems, the speed of digital circuits is constantly increasing, chips are becoming more densely packed, interconnect layers are added, and analog resolution is increased. In addition, recent increase in wireless applications and its growing market are introducing a new set of aggressive design goals for realizing mixed-signal systems. Here, the designer integrates radio frequency (RF) analog and base band digital circuitry on a single chip. The goal is to make single-chip radio frequency integrated circuits (RFICs) on silicon, where all the blocks are fabricated on the same chip. One of the advantages of this integration is low power dissipation for portability due to a reduction in the number of package pins and associated bond wire capacitance. Another reason that an integrated solution offers lower power consumption is that routing high-frequency signals off-chip often requires a 50Ω impedance match, which can result in higher power dissipation. Other advantages include improved high-frequency performance due to reduced package interconnect parasitics, higher system reliability, smaller package count, and higher integration of RF components with VLSI-compatible digital circuits. In fact, the single-chip transceiver is now a reality. The design of such systems, however, is a complicated task. There are two main challenges in realizing mixed-signal ICs. The first challenging task, specific to RFICs, is to fabricate good on-chip passive elements such as high-Q inductors. The second challenging task, applicable to any mixed-signal IC and the subject of this chap" https://en.wikipedia.org/wiki/Background%20debug%20mode%20interface,"Background debug mode (BDM) interface is an electronic interface that allows debugging of embedded systems. 
Specifically, it provides in-circuit debugging functionality in microcontrollers. It requires a single wire and specialized electronics in the system being debugged. It appears in many Freescale Semiconductor products. The interface allows a Host to manage and query a target. Specialized hardware is required in the target device. No special hardware is required in the host; a simple bidirectional I/O pin is sufficient. I/O signals The signals used by BDM to communicate data to and from the target are initiated by the host processor. The host negates the transmission line, and then either Asserts the line sooner, to output a 1, Asserts the line later, to output a 0, Tri-states its output, allowing the target to drive the line. The host can sense a 1 or 0 as an input value. At the start of the next bit time, the host negates the transmission line, and the process repeats. Each bit is communicated in this manner. In other words, the increasing complexity of today's software and hardware designs is leading to some fresh approaches to debugging. Silicon manufacturers offer more and more on-chip debugging features for emulation of new processors. This capability, implemented in various processors under such names as background debug mode (BDM), JTAG and on-chip in-circuit emulation, puts basic debugging functions on the chip itself. With a BDM (1 wire interface) or JTAG (standard JTAG) debug port, you control and monitor the microcontroller solely through the stable on-chip debugging services. This debugging mode runs even when the target system crashes and enables developers to continue investigating the cause of the crash. Microcontroller application development A good development tool environment is important to reduce total development time and cost. Users want to debug their application program under conditions that imitate the actual setup of the" https://en.wikipedia.org/wiki/List%20of%20vector%20spaces%20in%20mathematics,"This is a list of vector spaces in abstract mathematics, by Wikipedia page. Banach space Besov space Bochner space Dual space Euclidean space Fock space Fréchet space Hardy space Hilbert space Hölder space LF-space Lp space Minkowski space Montel space Morrey–Campanato space Orlicz space Riesz space Schwartz space Sobolev space Tsirelson space Linear algebra Mathematics-related lists" https://en.wikipedia.org/wiki/QED%20vacuum,"The QED vacuum or quantum electrodynamic vacuum is the field-theoretic vacuum of quantum electrodynamics. It is the lowest energy state (the ground state) of the electromagnetic field when the fields are quantized. When Planck's constant is hypothetically allowed to approach zero, QED vacuum is converted to classical vacuum, which is to say, the vacuum of classical electromagnetism. Another field-theoretic vacuum is the QCD vacuum of the Standard Model. Fluctuations The QED vacuum is subject to fluctuations about a dormant zero average-field condition; Here is a description of the quantum vacuum: Virtual particles It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle: (where and are energy and time variations, and the Planck constant divided by 2) arguing along the lines that the short lifetime of virtual particles allows the ""borrowing"" of large energies from the vacuum and thus permits particle generation for short times. This interpretation of the energy-time uncertainty relation is not universally accepted, however. 
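In standard notation, the relation being invoked in this passage is:

```latex
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2}
% \Delta E and \Delta t are the energy and time variations; \hbar is the
% Planck constant divided by 2\pi.
```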
One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty determines a ""budget"" for borrowing energy . Another issue is the meaning of ""time"" in this relation, because energy and time (unlike position and momentum , for example) do not satisfy a canonical commutation relation (such as ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are a continuing subject of study. Quantization of the fields The Heisenberg uncertainty principle does not allow a particle to exist in a state in which the particle is simultaneously at a fixed location, say the origin of coordinates, and has also zero momentum. Instead the particle has a " https://en.wikipedia.org/wiki/ESD%20simulator,"An ESD simulator, also known as an ESD gun, is a handheld unit used to test the immunity of devices to electrostatic discharge (ESD). These simulators are used in special electromagnetic compatibility (EMC) laboratories. ESD pulses are fast, high-voltage pulses created when two objects with different electrical charges come into close proximity or contact. Recreating them in a test environment helps to verify that the device under test is immune to static electricity discharges. ESD testing is necessary to receive a CE mark, and for most suppliers of components for motor vehicles as part of required electromagnetic compatibility testing. It is often useful to automate these tests to eliminate the human factor. There are three distinct test models for electrostatic discharge: human-body, machine, and charged-devices models. The human-body model emulates the action of a human body discharging static electricity, the machine model simulates static discharge from a machine, and the charged-device model simulates the charging and discharging events that occur in production processes and equipment. Many ESD guns have interchangeable modules containing different discharge Networks or RC Modules (Specific resistance and capacitance values) to simulate different discharges. These modules typically slide into the handle of the pistol portion of the ESD simulator, much like loading some handguns. They change the characteristics of the waveshape discharged from the pistol and are called out in general standards like IEC 61000-4-2, SAE J113 and industry specific standards like ISO 10605. Resistance is referred to in ohms (Ω), capacitance is referred to in picofarad (pF or ""puff""). The most commonly used discharge network is for IEC 61000-4-2 and ISO 10605, expressed as 150pF/330Ω. There are over 50 combinations of resistance and capacitance depending on the standards and the applicable electronics. Test standards Standards that require ESD testing include: ISO 10605 Ford " https://en.wikipedia.org/wiki/Beyond%20CMOS,"Beyond CMOS refers to the possible future digital logic technologies beyond the CMOS scaling limits which limits device density and speeds due to heating effects. Beyond CMOS is the name of one of the 7 focus groups in ITRS 2.0 (2013) and in its successor, the International Roadmap for Devices and Systems. CPUs using CMOS were released from 1986 (e.g. 12 MHz Intel 80386). As CMOS transistor dimensions were shrunk the clock speeds also increased. Since about 2004 CMOS CPU clock speeds have leveled off at about 3.5 GHz. 
CMOS devices sizes continue to shrink – see Intel tick–tock and ITRS : 22 nanometer Ivy Bridge in 2012 first 14 nanometer processors shipped in Q4 2014. In May 2015, Samsung Electronics showed a 300 mm wafer of 10 nanometer FinFET chips. It is not yet clear if CMOS transistors will still work below 3 nm. See 3 nanometer. Comparisons of technology About 2010 the Nanoelectronic Research Initiative (NRI) studied various circuits in various technologies. Nikonov benchmarked (theoretically) many technologies in 2012, and updated it in 2014. The 2014 benchmarking included 11 electronic, 8 spintronic, 3 orbitronic, 2 ferroelectric, and 1 straintronics technology. The 2015 ITRS 2.0 report included a detailed chapter on Beyond CMOS, covering RAM and logic gates. Some areas of investigation Magneto-Electric Spin-Orbit logic tunnel junction devices, eg Tunnel field-effect transistor indium antimonide transistors carbon nanotube FET, eg CNT Tunnel field-effect transistor graphene nanoribbons molecular electronics spintronics — many variants future low-energy electronics technologies, ultra-low dissipation conduction paths, including topological materials exciton superfluids photonics and optical computing superconducting computing rapid single-flux quantum (RSFQ) Superconducting computing and RSFQ Superconducting computing includes several beyond-CMOS technologies that use superconducting devices, namely Josephson junctions, for electronic " https://en.wikipedia.org/wiki/Biologist,"A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer). Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans. In modern times, most biologists have one or more academic degrees such as a bachelor's degree plus an advanced degree like a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government. History Francesco Redi, the founder of biology, is recognized to be one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, suggesting plant structure's resemblance to honeycomb cells. Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, which was described in detail in Darwin's book On the Origin of Species, which was published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated in the principles of inheritance in 1866, which became the basis of modern genetics. In 1953, James D. 
Watson and Francis " https://en.wikipedia.org/wiki/OpenVNet,"OpenVNet adds a network virtualization layer on top of the existing physical network and enables data center network administrators to simplify the creation and operation of multi-tenant networks. It is based on an edge overlay network architecture and provides all the necessary components for network virtualization, such as an SDN controller, virtual switch, virtual router, and APIs. The OpenVNet project started in April 2013. Much of the implementation had already been done in the Wakame-vdc project by the beginning of 2012. See also Open vSwitch" https://en.wikipedia.org/wiki/Beetle%20%28ASIC%29,"The Beetle ASIC is an analog readout chip. It was developed for the LHCb experiment at CERN. Overview The chip integrates 128 channels with low-noise charge-sensitive pre-amplifiers and shapers. The pulse shape can be chosen such that it complies with LHCb specifications: a peaking time of 25 ns with a remainder of the peak voltage after 25 ns of less than 30%. A comparator per channel with configurable polarity provides a binary signal. Four adjacent comparator channels are ORed together and brought off chip via LVDS drivers. Either the shaper or comparator output is sampled with the LHC bunch-crossing frequency of 40 MHz into an analog pipeline. This ring buffer has a programmable latency of a maximum of 160 sampling intervals and an integrated derandomising buffer of 16 stages. For analogue readout, data is multiplexed at up to 40 MHz onto one or four ports. A binary readout mode operates at up to 80 MHz output rate on two ports. Current drivers bring the serialised data off chip. The chip can accept trigger rates of up to 1.1 MHz to perform a dead-timeless readout within 900 ns per trigger. For testability and calibration purposes, a charge injector with adjustable pulse height is implemented. The bias settings and various other parameters can be controlled via a standard I²C-interface. The chip is radiation hardened to an accumulated dose of more than 100 Mrad. Robustness against single event upset is achieved by redundant logic. External links Beetle - a readout chip for LHCb The Large Hadron Collider beauty experiment Application-specific integrated circuits CERN" https://en.wikipedia.org/wiki/Tiger-BASIC,"Tiger-BASIC is a high-speed multitasking BASIC dialect (List of BASIC dialects) for programming microcontrollers of the BASIC-Tiger family. Tiger-BASIC and the integrated development environment that accompanies it were developed by Wilke-Technology (Aachen, Germany). External links Wilke-Technology BASIC programming language Embedded systems" https://en.wikipedia.org/wiki/System%20console,"One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, the init system, and the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel.
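As a small illustration of the abstract, stream-based interaction just mentioned (Python is used here purely as an example):

```python
# Console I/O through the standard streams: ordinary output on stdout,
# diagnostics on stderr, operator input on stdin.
import sys

sys.stdout.write("normal program output\n")
sys.stderr.write("warning: message for the console/system log\n")
reply = sys.stdin.readline()   # blocks until the operator types a line
```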
Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display. History Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels, while the very first programmable computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions—the CRT displaying memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM. Some early operating systems supported either a single keyboard/printer or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses and other console messages. However, in the late 1960s it became common for operating systems to support far more than three consoles, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on. On early minicomputers, the console was a seri" https://en.wikipedia.org/wiki/Syntrophy,"In biology, syntrophy, synthrophy, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the phenomenon of one species feeding on the metabolic products of another species to cope with energy limitations through electron transfer. In this type of biological interaction, metabolite transfer happens between two or more metabolically diverse microbial species that live in close proximity to each other. The growth of one partner depends on the nutrients, growth factors, or substrates provided by the other partner. Thus, syntrophism can be considered an obligatory interdependency and a mutualistic metabolism between two different bacterial species. Microbial syntrophy Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that a syntrophic relationship is primarily based on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tracts of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium. Mechanism of microbial syntrophy The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation" https://en.wikipedia.org/wiki/Emery%27s%20rule,"In 1909, the entomologist Carlo Emery noted that social parasites among insects (e.g., kleptoparasites) tend to be parasites of species or genera to which they are closely related. 
Over time, this pattern has been recognized in many additional cases, and generalized to what is now known as Emery's rule. The pattern is best known for various taxa of Hymenoptera. For example, the social wasp Dolichovespula adulterina parasitizes other members of its genus such as Dolichovespula norwegica and Dolichovespula arenaria. Emery's rule is also applicable to members of other kingdoms such as fungi, red algae, and mistletoe. The significance and general relevance of this pattern are still a matter of some debate, as a great many exceptions exist, though a common explanation for the phenomenon when it occurs is that the parasites may have started as facultative parasites within the host species itself (such forms of intraspecific parasitism are well-known, even in some species of bees), but later became reproductively isolated and split off from the ancestral species, a form of sympatric speciation. When a parasitic species is a sister taxon to its host in a phylogenetic sense, the relationship is considered to be in ""strict"" adherence to Emery's rule. When the parasite is a close relative of the host but not its sister species, the relationship is in ""loose"" adherence to the rule." https://en.wikipedia.org/wiki/Living%20systems,"Living systems are open self-organizing life forms that interact with their environment. These systems are maintained by flows of information, energy and matter. Multiple theories of living systems have been proposed. Such theories attempt to map general principles for how all living systems work. Context Some scientists have proposed in the last few decades that a general theory of living systems is required to explain the nature of life. Such a general theory would arise out of the ecological and biological sciences and attempt to map general principles for how all living systems work. Instead of examining phenomena by attempting to break things down into components, a general living systems theory explores phenomena in terms of dynamic patterns of the relationships of organisms with their environment. Theories Miller's open systems James Grier Miller's living systems theory is a general theory about the existence of all living systems, their structure, interaction, behavior and development, intended to formalize the concept of life. According to Miller's 1978 book Living Systems, such a system must contain each of twenty ""critical subsystems"" defined by their functions. Miller considers living systems as a type of system. Below the level of living systems, he defines space and time, matter and energy, information and entropy, levels of organization, and physical and conceptual factors, and above living systems ecological, planetary and solar systems, galaxies, etc. Miller's central thesis is that the multiple levels of living systems (cells, organs, organisms, groups, organizations, societies, supranational systems) are open systems composed of critical and mutually-dependent subsystems that process inputs, throughputs, and outputs of energy and information. Seppänen (1998) says that Miller applied general systems theory on a broad scale to describe all aspects of living systems. Bailey states that Miller's theory is perhaps the ""most integrative"" social s" https://en.wikipedia.org/wiki/Generalized%20pencil-of-function%20method,"Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. 
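In the conventional matrix pencil formulation (the notation below is the standard one and is assumed rather than quoted), the sampled signal is modelled as a sum of damped complex exponentials:

```latex
y[k] \;=\; x[k] + n[k] \;\approx\; \sum_{i=1}^{M} R_i\, z_i^{\,k} + n[k],
\qquad z_i = e^{(-\alpha_i + j\omega_i)\,T_s}
% R_i: complex residues, z_i: poles, \alpha_i: damping factors,
% \omega_i: angular frequencies, T_s: sampling period.
```

Estimating the residues and poles from the noisy samples is the problem the method described below addresses.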
Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency. The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems by its transient response, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory. Method Mathematical basis A transient electromagnetic signal can be represented as: where is the observed time-domain signal, is the signal noise, is the actual signal, are the residues (), are the poles of the system, defined as , by the identities of Z-transform, are the damping factors and are the angular frequencies. The same sequence, sampled by a period of , can be written as the following: , Generalized pencil-of-function estimates the optimal and 's. Noise-free analysis For the noiseless case, two matrices, and , are produced: where is defined as the pencil parameter. and can be decomposed into the following matrices: where and are diagonal matrices with sequentially-placed and values, respectively. If , the generalized eigenvalues of the matrix pencil yield the poles of the system, which are . Then, the generalized eigenvectors can be obtained by the following identities:           where the denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse. Noise filtering If noise is present in the system, and are combined in a general data matrix, : where is the noisy data. For efficient fil" https://en.wikipedia.org/wiki/Ordinal%20notation,"In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. A recursive ordinal notation must satisfy the following two additional properties: the subset of natural numbers is a recursive set the induced well-ordering on the subset of natural numbers is a recursive relation There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (called ordinal diagrams), Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but it is not as well behaved as the other systems described here. Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. In many systems, such as Veblen's well known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. 
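A standard single-argument example of such a normal function (illustrative, not taken from the text):

```latex
f(\alpha) = \omega^{\alpha}
% strictly increasing, and continuous at limit ordinals:
f(\lambda) = \sup_{\beta < \lambda} \omega^{\beta} \quad \text{for every limit ordinal } \lambda .
```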
Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contra" https://en.wikipedia.org/wiki/List%20of%20refractive%20indices,"Many materials have a well-characterized refractive index, but these indices often depend strongly upon the frequency of light, causing optical dispersion. Standard refractive index measurements are taken at the ""yellow doublet"" sodium D line, with a wavelength (λ) of 589 nanometers. There are also weaker dependencies on temperature, pressure/stress, etc., as well on precise material compositions (presence of dopants, etc.); for many materials and typical conditions, however, these variations are at the percent level or less. Thus, it is especially important to cite the source for an index measurement if precision is required. In general, an index of refraction is a complex number with both a real and imaginary part, where the latter indicates the strength of absorption loss at a particular wavelength—thus, the imaginary part is sometimes called the extinction coefficient . Such losses become particularly significant, for example, in metals at short (e.g. visible) wavelengths, and must be included in any description of the refractive index. List See also Sellmeier equation Corrective lens#Ophthalmic material property tables Optical properties of water and ice" https://en.wikipedia.org/wiki/Cellular%20architecture,"Cellular architecture is a type of computer architecture prominent in parallel computing. Cellular architectures are relatively new, with IBM's Cell microprocessor being the first one to reach the market. Cellular architecture takes multi-core architecture design to its logical conclusion, by giving the programmer the ability to run large numbers of concurrent threads within a single processor. Each 'cell' is a compute node containing thread units, memory, and communication. Speed-up is achieved by exploiting thread-level parallelism inherent in many applications. Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM. Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software. See also Cellular automaton External links Cellular architecture builds next generation supercomputers ORNL, IBM, and the Blue Gene Project Energy, IBM are partners in biological supercomputing project Cell-based Architecture Parallel computing Computer architecture Classes of computers" https://en.wikipedia.org/wiki/Hardware%20reset,"A hardware reset or hard reset of a computer system is a hardware operation that re-initializes the core hardware components of the system, thus ending all current software operations in the system. This is typically, but not always, followed by booting of the system into firmware that re-initializes the rest of the system, and restarts the operating system. 
Hardware resets are an essential part of the power-on process, but may also be triggered without power cycling the system by direct user intervention via a physical reset button, watchdog timers, or by software intervention that, as its last action, activates the hardware reset line (e.g, in a fatal error where the computer crashes). User initiated hard resets can be used to reset the device if the software hangs, crashes, or is otherwise unresponsive. However, data may become corrupted if this occurs. Generally, a hard reset is initiated by pressing a dedicated reset button, or holding a combination of buttons on some mobile devices. Devices may not have a dedicated Reset button, but have the user hold the power button to cut power, which the user can then turn the computer back on. On some systems (e.g, the PlayStation 2 video game console), pressing and releasing the power button initiates a hard reset, and holding the button turns the system off. Hardware reset in 80x86 IBM PC The 8086 microprocessors provide RESET pin that is used to do the hardware reset. When a HIGH is applied to the pin, the CPU immediately stops, and sets the major registers to these values: The CPU uses the values of CS and IP registers to find the location of the next instruction to execute. Location of next instruction is calculated using this simple equation: Location of next instruction = (CS<<4) + (IP) This implies that after the hardware reset, the CPU will start execution at the physical address 0xFFFF0. In IBM PC compatible computers, This address maps to BIOS ROM. The memory word at 0xFFFF0 usually contains a JMP ins" https://en.wikipedia.org/wiki/Feng%27s%20classification,"Tse-yun Feng suggested the use of degree of parallelism to classify various computer architecture. It is based on sequential and parallel operations at a bit and word level. About degree of parallelism Maximum degree of parallelism The maximum number of binary digits that can be processed within a unit time by a computer system is called the maximum parallelism degree P. If a processor is processing P bits in unit time, then P is called the maximum degree of parallelism. Average degree of parallelism Let i = 1, 2, 3, ..., T be the different timing instants and P1, P2, ..., PT be the corresponding bits processed. Then, Processor utilization Processor utilization is defined as The maximum degree of parallelism depends on the structure of the arithmetic and logic unit. Higher degree of parallelism indicates a highly parallel ALU or processing element. Average parallelism depends on both the hardware and the software. Higher average parallelism can be achieved through concurrent programs. Types of classification According to Feng's classification, computer architecture can be classified into four. The classification is based on the way contents stored in memory are processed. The contents can be either data or instructions. Word serial bit serial (WSBS) Word serial bit parallel (WSBP) Word parallel bit serial (WPBS) Word parallel bit parallel (WPBP) Word serial bit serial (WSBS) One bit of one selected word is processed at a time. This represents serial processing and needs maximum processing time. Word serial bit parallel (WSBP) It is found in most existing computers and has been called ""word slice"" processing because one word of one bit is processed at a time. All bits of a selected word are processed at a time. Bit parallel means all bits of a word. 
Word parallel bit serial (WPBS) It has been called bit slice processing because m-bit slice is processed at a time. Word parallel signifies selection of all words. It can be considered as one bit " https://en.wikipedia.org/wiki/Peripheral%20DMA%20controller,"A peripheral DMA controller (PDC) is a feature found in modern microcontrollers. This is typically a FIFO with automated control features for driving implicitly included modules in a microcontroller such as UARTs. This takes a large burden from the operating system and reduces the number of interrupts required to service and control these type of functions. See also Direct memory access (DMA) Autonomous peripheral operation" https://en.wikipedia.org/wiki/Flexible%20electronics,"Flexible electronics, also known as flex circuits, is a technology for assembling electronic circuits by mounting electronic devices on flexible plastic substrates, such as polyimide, PEEK or transparent conductive polyester film. Additionally, flex circuits can be screen printed silver circuits on polyester. Flexible electronic assemblies may be manufactured using identical components used for rigid printed circuit boards, allowing the board to conform to a desired shape, or to flex during its use. Manufacturing Flexible printed circuits (FPC) are made with a photolithographic technology. An alternative way of making flexible foil circuits or flexible flat cables (FFCs) is laminating very thin (0.07 mm) copper strips in between two layers of PET. These PET layers, typically 0.05 mm thick, are coated with an adhesive which is thermosetting, and will be activated during the lamination process. FPCs and FFCs have several advantages in many applications: Tightly assembled electronic packages, where electrical connections are required in 3 axes, such as cameras (static application). Electrical connections where the assembly is required to flex during its normal use, such as folding cell phones (dynamic application). Electrical connections between sub-assemblies to replace wire harnesses, which are heavier and bulkier, such as in cars, rockets and satellites. Electrical connections where board thickness or space constraints are driving factors. Advantage of FPCs Potential to replace multiple rigid boards or connectors Single-sided circuits are ideal for dynamic or high-flex applications Stacked FPCs in various configurations Disadvantages of FPCs Cost increase over rigid PCBs Increased risk of damage during handling or use More difficult assembly process Repair and rework is difficult or impossible Generally worse panel utilization resulting in increased cost Applications Flex circuits are often used as connectors in various applications where flexibility" https://en.wikipedia.org/wiki/Quantitative%20biology,"Quantitative biology is an umbrella term encompassing the use of mathematical, statistical or computational techniques to study life and living organisms. The central theme and goal of quantitative biology is the creation of predictive models based on fundamental principles governing living systems. The subfields of biology that employ quantitative approaches include: Mathematical and theoretical biology Computational biology Bioinformatics Biostatistics Systems biology Population biology Synthetic biology Epidemiology" https://en.wikipedia.org/wiki/Tiller%20%28botany%29,"A tiller is a shoot that arises from the base of a grass plant. The term refers to all shoots that grow after the initial parent shoot grows from a seed. 
Tillers are segmented, each segment possessing its own two-part leaf. They are involved in vegetative propagation and, in some cases, also seed production. ""Tillering"" refers to the production of side shoots and is a property possessed by many species in the grass family. This enables them to produce multiple stems (tillers) starting from the initial single seedling. This ensures the formation of dense tufts and multiple seed heads. Tillering rates are heavily influenced by soil water quantity. When soil moisture is low, grasses tend to develop sparser and deeper root systems (as opposed to dense, lateral systems). Thus, in dry soils, tillering is inhibited: the lateral nature of tillering is not supported by lateral root growth. See also Crown (botany)" https://en.wikipedia.org/wiki/Blackman%27s%20theorem,"Blackman's theorem is a general procedure for calculating the change in an impedance due to feedback in a circuit. It was published by Ralph Beebe Blackman in 1943, was connected to signal-flow analysis by John Choma, and was made popular in the extra element theorem by R. D. Middlebrook and the asymptotic gain model of Solomon Rosenstark. Blackman's approach leads to the formula for the impedance Z between two selected terminals of a negative feedback amplifier as Blackman's formula: Z = ZD (1 + TSC) / (1 + TOC), where ZD = impedance with the feedback disabled, TSC = loop transmission with a small-signal short across the selected terminal pair, and TOC = loop transmission with an open circuit across the terminal pair. The loop transmission also is referred to as the return ratio. Blackman's formula can be compared with Middlebrook's result for the input impedance Zin of a circuit based upon the extra-element theorem: where: is the impedance of the extra element; is the input impedance with removed (or made infinite); is the impedance seen by the extra element with the input shorted (or made zero); is the impedance seen by the extra element with the input open (or made infinite). Blackman's formula also can be compared with Choma's signal-flow result: where is the value of under the condition that a selected parameter P is set to zero, return ratio is evaluated with zero excitation and is for the case of short-circuited source resistance. As with the extra-element result, differences are in the perspective leading to the formula. See also Mason's gain formula Further reading" https://en.wikipedia.org/wiki/Supergolden%20ratio,"In mathematics, two quantities are in the supergolden ratio if their quotient equals the unique real solution to the equation x³ = x² + 1. This solution is commonly denoted ψ. The name supergolden ratio results from an analogy with the golden ratio φ, which is the positive root of the equation x² = x + 1. Using formulas for the cubic equation, one can show that ψ = (1 + ∛((29 + 3√93)/2) + ∛((29 − 3√93)/2))/3 or, using the hyperbolic cosine, ψ = (1 + 2 cosh((1/3) arccosh(29/2)))/3. The decimal expansion of this number begins as 1.465571231876768026656731... Properties Many properties of the supergolden ratio are closely related to those of the golden ratio φ. For example, while the golden ratio satisfies 1/φ = φ − 1, the inverse square of the supergolden ratio obeys 1/ψ² = ψ − 1. Additionally, the supergolden ratio can be expressed in terms of itself as the infinite geometric series ψ = 1 + 1/ψ³ + 1/ψ⁶ + 1/ψ⁹ + ⋯, in comparison to the golden ratio identity φ = 1 + 1/φ² + 1/φ⁴ + 1/φ⁶ + ⋯. The supergolden ratio is also the fourth smallest Pisot number; as such, its algebraic conjugates are both smaller than 1 in absolute value. 
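A short numerical check in Python (a sketch added for illustration, not part of the article) of the defining cubic x³ = x² + 1 and of the inverse-square identity quoted above:

    # Solve x**3 = x**2 + 1 for its unique real root (the supergolden ratio)
    # by simple bisection; no third-party libraries needed.
    def supergolden(tol=1e-15):
        lo, hi = 1.0, 2.0          # the real root lies between 1 and 2
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if mid**3 - mid**2 - 1 < 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    psi = supergolden()
    print(psi)                      # ~1.465571231876768, matching the expansion quoted above

    # Check the property 1/psi**2 == psi - 1 (up to floating-point error).
    print(abs(1 / psi**2 - (psi - 1)) < 1e-12)   # True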
Supergolden sequence The supergolden sequence, also known as the Narayana's cows sequence, is a sequence where the ratio between consecutive terms approaches the supergolden ratio. The first three terms are each one, and each term after that is calculated by adding the previous term and the term two places before that; that is, , with . The first values are 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88, 129, 189, 277, 406, 595… (). Supergolden rectangle A supergolden rectangle is a rectangle whose side lengths are in a ratio. When a square with the same side length as the shorter side of the rectangle is removed from one side of the rectangle, the sides of the resulting rectangle will be in a ratio. This rectangle can be divided into two more supergolden rectangles with opposite orientations and areas in a ratio. The larger rectangle has a diagonal of length times the short side of the original rectangle, and which is perpendicular to the diagonal of the original rectangle. In addition, if the line segment that separates the" https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20constant,"In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of continued fractions. In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators qn of the convergents of the continued fraction expansions of almost all real numbers satisfy Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely The term ""Lévy's constant"" is sometimes used to refer to (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss-Kuzmin distribution. In particular, the ratio has the asymptotic density function for and zero otherwise. This gives Lévy's constant as . The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem. See also Khinchin's constant" https://en.wikipedia.org/wiki/Network%20block%20device,"On Linux, network block device (NBD) is a network protocol that can be used to forward a block device (typically a hard disk or partition) from one machine to a second machine. As an example, a local machine can access a hard disk drive that is attached to another computer. The protocol was originally developed for Linux 2.1.55 and released in 1997. In 2011 the protocol was revised, formally documented, and is now developed as a collaborative open standard. There are several interoperable clients and servers. There are Linux-compatible NBD implementations for FreeBSD and other operating systems. The term 'network block device' is sometimes also used generically. Technically, a network block device is realized by three components: the server part, the client part, and the network between them. On the client machine, on which is the device node, a kernel driver controls the device. Whenever a program tries to access the device, the kernel driver forwards the request (if the client part is not fully implemented in the kernel it can be done with help of a userspace program) to the server machine, on which the data resides physically. On the server machine, requests from the client are handled by a userspace program. 
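The client/server split described above can be illustrated with a deliberately simplified sketch in Python. This is not the real NBD wire protocol — the actual protocol has a negotiation phase and its own message framing — just a toy userspace server that answers (offset, length) read requests against a hypothetical backing file, in the spirit of an NBD-style export; port 10809 appears only because it is the port registered for NBD.

    import socket
    import struct

    BACKING_FILE = "disk.img"   # hypothetical backing store for the exported "block device"

    def serve(host="127.0.0.1", port=10809):
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        with open(BACKING_FILE, "rb") as disk:
            while True:
                conn, _ = srv.accept()
                with conn:
                    # Toy request framing: 8-byte offset + 4-byte length, big-endian.
                    hdr = conn.recv(12)
                    if len(hdr) < 12:
                        continue
                    offset, length = struct.unpack(">QI", hdr)
                    disk.seek(offset)
                    conn.sendall(disk.read(length))

    if __name__ == "__main__":
        serve()

A real client would be the in-kernel driver described above; here any TCP client sending the 12-byte header would do.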
Network block device servers are typically implemented as a userspace program running on a general-purpose computer. All of the function specific to network block device servers can reside in a userspace process because the process communicates with the client via conventional sockets and accesses the storage via a conventional file system interface. The network block device client module is available on Unix-like operating systems, including Linux and Bitrig. Since the server is a userspace program, it can potentially run on every Unix-like platform; for example, NBD's server part has been ported to Solaris. Alternative protocols iSCSI: The ""target-utils"" iscsi package on many Linux distributions. NVMe-oF: an equivalent mechanism, exposing b" https://en.wikipedia.org/wiki/Modified%20Harvard%20architecture,"A modified Harvard architecture is a variation of the Harvard computer architecture that, unlike the pure Harvard architecture, allows memory that contains instructions to be accessed as data. Most modern computers that are documented as Harvard architecture are, in fact, modified Harvard architecture. Harvard architecture The original Harvard architecture computer, the Harvard Mark I, employed entirely separate memory systems to store instructions and data. The CPU fetched the next instruction and loaded or stored data simultaneously and independently. This is in contrast to a von Neumann architecture computer, in which both instructions and data are stored in the same memory system and (without the complexity of a CPU cache) must be accessed in turn. The physical separation of instruction and data memory is sometimes held to be the distinguishing feature of modern Harvard architecture computers. With microcontrollers (entire computer systems integrated onto single chips), the use of different memory technologies for instructions (e.g. flash memory) and data (typically read/write memory) in von Neumann machines is becoming popular. The true distinction of a Harvard machine is that instruction and data memory occupy different address spaces. In other words, a memory address does not uniquely identify a storage location (as it does in a von Neumann machine); it is also necessary to know the memory space (instruction or data) to which the address belongs. Von Neumann architecture A computer with a von Neumann architecture has the advantage over Harvard machines as described above in that code can also be accessed and treated the same as data, and vice versa. This allows, for example, data to be read from disk storage into memory and then executed as code, or self-optimizing software systems using technologies such as just-in-time compilation to write machine code into their own memory and then later execute it. Another example is self-modifying code, which all" https://en.wikipedia.org/wiki/BlueHat,"BlueHat (or Blue Hat or Blue-Hat) is a term used to refer to outside computer security consulting firms that are employed to bug test a system prior to its launch, looking for exploits so they can be closed. In particular, Microsoft uses the term to refer to the computer security professionals they invited to find the vulnerability of their products, such as Windows. Blue Hat Microsoft Hacker Conference The Blue Hat Microsoft Hacker Conference is an invitation-only conference created by Window Snyder that is intended to open communication between Microsoft engineers and hackers. The event has led to both mutual understanding and the occasional confrontation. 
Microsoft's developers were visibly uncomfortable when Metasploit was demonstrated. See also Hacker culture Hacker ethic Black hat hacker" https://en.wikipedia.org/wiki/List%20of%20Wenninger%20polyhedron%20models,"This is an indexed list of the uniform and stellated polyhedra from the book Polyhedron Models, by Magnus Wenninger. The book was written as a guide book to building polyhedra as physical models. It includes templates of face elements for construction and helpful hints in building, and also brief descriptions on the theory behind these shapes. It contains the 75 nonprismatic uniform polyhedra, as well as 44 stellated forms of the convex regular and quasiregular polyhedra. Models listed here can be cited as ""Wenninger Model Number N"", or WN for brevity. The polyhedra are grouped in 5 tables: Regular (1–5), Semiregular (6–18), regular star polyhedra (20–22,41), Stellations and compounds (19–66), and uniform star polyhedra (67–119). The four regular star polyhedra are listed twice because they belong to both the uniform polyhedra and stellation groupings. Platonic solids (regular convex polyhedra) W1 to W5 Archimedean solids (Semiregular) W6 to W18 Kepler–Poinsot polyhedra (Regular star polyhedra) W20, W21, W22 and W41 Stellations: models W19 to W66 Stellations of octahedron Stellations of dodecahedron Stellations of icosahedron Stellations of cuboctahedron Stellations of icosidodecahedron Uniform nonconvex solids W67 to W119 See also List of uniform polyhedra The fifty nine icosahedra List of polyhedral stellations" https://en.wikipedia.org/wiki/Refined%20grains,"Refined grains have been significantly modified from their natural composition, in contrast to whole grains. The modification process generally involves the mechanical removal of bran and germ, either through grinding or selective sifting. Overview A refined grain is defined as having undergone a process that removes the bran, germ and husk of the grain and leaves the endosperm, or starchy interior. Examples of refined grains include white bread, white flour, corn grits and white rice. Refined grains are milled which gives a finer texture and improved shelf life. Because the outer parts of the grain are removed and used for animal feed and non-food use, refined grains have been described as less sustainable than whole grains. After refinement of grains became prevalent in the early 20th-century, nutritional deficiencies (iron, thiamin, riboflavin and niacin) became more common in the United States. To correct this, the Congress passed the U.S. Enrichment Act of 1942 which requires that iron, niacin, thiamin and riboflavin have to be added to all refined grain products before they are sold. Folate (folic acid) was added in 1996. Refining grain includes mixing, bleaching, and brominating; additionally, folate, thiamin, riboflavin, niacin, and iron are added back in to nutritionally enrich the product. Enriched grains are refined grains that have been fortified with additional nutrients. Whole grains contain more dietary fiber than refined grains. After processing, fiber is not added back to enriched grains. Enriched grains are nutritionally comparable to whole grains but only in regard to their added nutrients. Whole grains contain higher amounts of minerals including chromium, magnesium, selenium, and zinc and vitamins such as Vitamin B6 and Vitamin E. Whole grains also provide phytochemicals which enriched grains lack. 
In the case of maize, the process of nixtamalization (a chemical form of refinement) yields a considerable improvement in the bioavailability of" https://en.wikipedia.org/wiki/3%20nm%20process,"In semiconductor manufacturing, the 3 nm process is the next die shrink after the 5 nanometer MOSFET (metal–oxide–semiconductor field-effect transistor) technology node. South Korean chipmaker Samsung started shipping its 3 nm gate all around (GAA) process, named 3GAA, in mid-2022. On December 29, 2022, Taiwanese chip manufacturer TSMC announced that volume production using its 3 nm semiconductor node termed N3 is under way with good yields. An enhanced 3 nm chip process called N3E may start production in 2023. American manufacturer Intel plans to start 3 nm production in 2023. Samsung's 3 nm process is based on GAAFET (gate-all-around field-effect transistor) technology, a type of multi-gate MOSFET technology, while TSMC's 3 nm process still uses FinFET (fin field-effect transistor) technology, despite TSMC developing GAAFET transistors. Specifically, Samsung plans to use its own variant of GAAFET called MBCFET (multi-bridge channel field-effect transistor). Intel's process dubbed ""Intel 3"" without the ""nm"" suffix will use a refined, enhanced and optimized version of FinFET technology compared to its previous process nodes in terms of performance gained per watt, use of EUV lithography, and power and area improvement. The term ""3 nanometer"" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a 3 nm node is expected to have a contacted gate pitch of 48 nanometers and a tightest metal pitch of 24 nanometers. However, in real world commercial practice, ""3 nm"" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption. T" https://en.wikipedia.org/wiki/Stratification%20%28mathematics%29,"Stratification has several usages in mathematics. In mathematical logic In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form is stratified if and only if there is a stratification assignment S that fulfills the following conditions: If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short . If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be greater than the stratification number of Q, in short . The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, that is obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories. 
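A small sketch in Python of the two stratification conditions above (the predicates, rules and stratum numbers are made up for illustration): each rule is recorded as its head predicate together with the predicates occurring positively and negatively in its body, and a candidate assignment S is checked against both conditions.

    # Each rule: (head, positive body predicates, negated body predicates).
    rules = [
        ("reachable", {"edge"}, set()),
        ("reachable", {"reachable", "edge"}, set()),
        ("unreachable", {"node"}, {"reachable"}),   # uses negation
    ]

    # Candidate stratification assignment S (predicate -> stratum number).
    S = {"edge": 0, "node": 0, "reachable": 0, "unreachable": 1}

    def is_stratified(rules, S):
        for head, pos, neg in rules:
            # Positive dependency: S(head) must be >= S(body predicate).
            if any(S[head] < S[q] for q in pos):
                return False
            # Negative dependency: S(head) must be strictly > S(body predicate).
            if any(S[head] <= S[q] for q in neg):
                return False
        return True

    print(is_stratified(rules, S))   # True: negation only looks "down" a stratum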
In a specific set theory In New Foundations (NF) and related set theories, a formula in the language of first-order logic with equality and membership is said to be stratified if and only if there is a function which sends each variable appearing in (considered as an item of syntax) to a natural number (this works equally well if all integers are used) in such a way that any atomic formula appearing in satisfies and any atomic formula appearing in satisfies . It turns out that it is sufficient to require that these conditions be satisfied only when both variables in an atomic formula are bound in the set abstract under consideration. A set abstract satisfying this weaker condition is said to be " https://en.wikipedia.org/wiki/Web%20container,"A web container (also known as a servlet container; and compare ""webcontainer"") is the component of a web server that interacts with Jakarta Servlets. A web container is responsible for managing the lifecycle of servlets, mapping a URL to a particular servlet and ensuring that the URL requester has the correct access-rights. A web container handles requests to servlets, Jakarta Server Pages (JSP) files, and other types of files that include server-side code. The Web container creates servlet instances, loads and unloads servlets, creates and manages request and response objects, and performs other servlet-management tasks. A web container implements the web component contract of the Jakarta EE architecture. This architecture specifies a runtime environment for additional web components, including security, concurrency, lifecycle management, transaction, deployment, and other services. List of Servlet containers The following is a list of applications which implement the Jakarta Servlet specification from Eclipse Foundation, divided depending on whether they are directly sold or not. Open source Web containers Apache Tomcat (formerly Jakarta Tomcat) is an open source web container available under the Apache Software License. Apache Tomcat 6 and above are operable as general application container (prior versions were web containers only) Apache Geronimo is a full Java EE 6 implementation by Apache Software Foundation. Enhydra, from Lutris Technologies. GlassFish from Eclipse Foundation (an application server, but includes a web container). Jaminid contains a higher abstraction than servlets. Jetty, from the Eclipse Foundation. Also supports SPDY and WebSocket protocols. Payara is another application server, derived from Glassfish. Winstone supports specification v2.5 as of 0.9, has a focus on minimal configuration and the ability to strip the container down to only what you need. Tiny Java Web Server (TJWS) 2.5 , small footprint, modular design. Virgo f" https://en.wikipedia.org/wiki/DirectPlay,"DirectPlay is part of Microsoft's DirectX API. It is a network communication library intended for computer game development, although it can be used for other purposes. DirectPlay is a high-level software interface between applications and communication services that allows games to be connected over the Internet, a modem link, or a network. It features a set of tools that allow players to find game sessions and sites to manage the flow of information between hosts and players. It provides a way for applications to communicate with each other, regardless of the underlying online service or protocol. It also resolves many connectivity issues, such as Network Address Translation (NAT). 
Like the rest of DirectX, DirectPlay runs in COM and is accessed through component object model (COM) interfaces. By default, DirectPlay uses multi-threaded programming techniques and requires careful thought to avoid the usual threading issues. Since DirectX version 9, this issue can be alleviated at the expense of efficiency. Networking model Under the hood, DirectPlay is built on the User Datagram Protocol (UDP) to allow it speedy communication with other DirectPlay applications. It uses TCP and UDP ports 2300 to 2400 and 47624. DirectPlay sits on layers 4 and 5 of the OSI model. On layer 4, DirectPlay can handle the following tasks if requested by the application: Message ordering, which ensures that data arrives in the same order it was sent. Message reliability, which ensures that data is guaranteed to arrive. Message flow control, which ensures that data is only sent at the rate the receiver can receive it. On layer 5, DirectPlay always handles the following tasks: Connection initiation and termination. Interfaces The primary interfaces (methods of access) for DirectPlay are: IDirectPlay8Server, which allows access to server functionality IDirectPlay8Client, which allows access to client functionality IDirectPlay8Peer, which allows access to peer-to-peer functionality Seco" https://en.wikipedia.org/wiki/Complex%20programmable%20logic%20device,"A complex programmable logic device (CPLD) is a programmable logic device with complexity between that of PALs and FPGAs, and architectural features of both. The main building block of the CPLD is a macrocell, which contains logic implementing disjunctive normal form expressions and more specialized logic operations. Features Some of the CPLD features are in common with PALs: Non-volatile configuration memory. Unlike many FPGAs, an external configuration ROM isn't required, and the CPLD can function immediately on system start-up. For many legacy CPLD devices, routing constrains most logic blocks to have input and output signals connected to external pins, reducing opportunities for internal state storage and deeply layered logic. This is usually not a factor for larger CPLDs and newer CPLD product families. Other features are in common with FPGAs: Large number of gates available. CPLDs typically have the equivalent of thousands to tens of thousands of logic gates, allowing implementation of moderately complicated data processing devices. PALs typically have a few hundred gate equivalents at most, while FPGAs typically range from tens of thousands to several million. Some provisions for logic more flexible than sum-of-product expressions, including complicated feedback paths between macro cells, and specialized logic for implementing various commonly used functions, such as integer arithmetic. The most noticeable difference between a large CPLD and a small FPGA is the presence of on-chip non-volatile memory in the CPLD, which allows CPLDs to be used for ""boot loader"" functions, before handing over control to other devices not having their own permanent program storage. A good example is where a CPLD is used to load configuration data for an FPGA from non-volatile memory. Distinctions CPLDs were an evolutionary step from even smaller devices that preceded them, PLAs (first shipped by Signetics), and PALs. 
These in turn were preceded by standard logic products" https://en.wikipedia.org/wiki/Signal-flow%20graph,"A signal-flow graph or signal-flowgraph (SFG), invented by Claude Shannon, but often called a Mason graph after Samuel Jefferson Mason who coined the term, is a specialized flow graph, a directed graph in which nodes represent system variables, and branches (edges, arcs, or arrows) represent functional connections between pairs of nodes. Thus, signal-flow graph theory builds on that of directed graphs (also called digraphs), which includes as well that of oriented graphs. This mathematical theory of digraphs exists, of course, quite apart from its applications. SFGs are most commonly used to represent signal flow in a physical system and its controller(s), forming a cyber-physical system. Among their other uses are the representation of signal flow in various electronic networks and amplifiers, digital filters, state-variable filters and some other types of analog filters. In nearly all literature, a signal-flow graph is associated with a set of linear equations. History Wai-Kai Chen wrote: ""The concept of a signal-flow graph was originally worked out by Shannon [1942] in dealing with analog computers. The greatest credit for the formulation of signal-flow graphs is normally extended to Mason [1953], [1956]. He showed how to use the signal-flow graph technique to solve some difficult electronic problems in a relatively simple manner. The term signal flow graph was used because of its original application to electronic problems and the association with electronic signals and flowcharts of the systems under study."" Lorens wrote: ""Previous to Mason's work, C. E. Shannon worked out a number of the properties of what are now known as flow graphs. Unfortunately, the paper originally had a restricted classification and very few people had access to the material."" ""The rules for the evaluation of the graph determinant of a Mason Graph were first given and proven by Shannon [1942] using mathematical induction. His work remained essentially unknown even after Mason p" https://en.wikipedia.org/wiki/Quinarian%20system,"The quinarian system was a method of zoological classification which was popular in the mid 19th century, especially among British naturalists. It was largely developed by the entomologist William Sharp Macleay in 1819. The system was further promoted in the works of Nicholas Aylward Vigors, William John Swainson and Johann Jakob Kaup. Swainson's work on ornithology gave wide publicity to the idea. The system had opponents even before the publication of Charles Darwin's On the Origin of Species (1859), which paved the way for evolutionary trees. Classification approach Quinarianism gets its name from the emphasis on the number five: it proposed that all taxa are divisible into five subgroups, and if fewer than five subgroups were known, quinarians believed that a missing subgroup remained to be found. Presumably this arose as a chance observation of some accidental analogies between different groups, but it was erected into a guiding principle by the quinarians. It became increasingly elaborate, proposing that each group of five classes could be arranged in a circle, with those closer together having greater affinities. Typically they were depicted with relatively advanced groups at the top, and supposedly degenerate forms towards the bottom. Each circle could touch or overlap with adjacent circles; the equivalent overlapping of actual groups in nature was called osculation. 
Another aspect of the system was the identification of analogies across groups: Quinarianism was not widely popular outside the United Kingdom (some followers like William Hincks persisted in Canada); it became unfashionable by the 1840s, during which time more complex ""maps"" were made by Hugh Edwin Strickland and Alfred Russel Wallace. Strickland and others specifically rejected the use of relations of ""analogy"" in constructing natural classifications. These systems were eventually discarded in favour of principles of genuinely natural classification, namely based on evolutionary relations" https://en.wikipedia.org/wiki/Bookmark%20manager,"A bookmark manager is any software program or feature designed to store, organize, and display web bookmarks. The bookmarks feature included in each major web browser is a rudimentary bookmark manager. More capable bookmark managers are available online as web apps, mobile apps, or browser extensions, and may display bookmarks as text links or graphical tiles (often depicting icons). Social bookmarking websites are bookmark managers. Start page browser extensions, new tab page browser extensions, and some browser start pages, also have bookmark presentation and organization features, which are typically tile-based. Some more general programs, such as certain note taking apps, have bookmark management functionality built-in. See also Bookmark destinations Deep links Home pages Types of bookmark management Enterprise bookmarking Comparison of enterprise bookmarking platforms Social bookmarking List of social bookmarking websites Other weblink-based systems Search engine Comparison of search engines with social bookmarking systems Search engine results page Web directory Lists of websites" https://en.wikipedia.org/wiki/List%20of%20people%20with%20the%20most%20children,"This is a list of mothers said to have given birth to 20 or more children and men said to have fathered more than 25 children. Mothers and couples This section lists mothers who gave birth to at least 20 children. Numbers in bold and italics are likely to be legendary or inexact, some of them having been recorded before the 19th century. Due to the fact that women bear the children and therefore cannot reproduce as often as men, their records are often shared with or exceeded by their partners. {| class=""wikitable sortable"" |- ! style=""text-align:center;width:4%;""|Total children birthed ! style=""width:20%;""|Mother or couple (if known) ! style=""width:8%;""|Approximate year of last birth ! class=""unsortable""|Notes |- !69 |Valentina and Feodor Vassilyev |1765 |A Russian woman named Valentina Vassilyeva and her husband Feodor Vassilyev are alleged to hold the record for the most children a couple has produced. She gave birth to a total of 69 children – sixteen pairs of twins, seven sets of triplets and four sets of quadruplets – between 1725 and 1765, a total of 27 births. 67 of the 69 children were said to have survived infancy. Allegedly Vassilyev also had six sets of twins and two sets of triplets with a second wife, for another 18 children in eight births; he fathered a total of 87 children. The claim is disputed as records at this time were not well kept. |- !57|Mr and Ms Kirillov |1755 |The first wife of peasant Yakov Kirillov from the village of Vvedensky, Russia, gave birth to 57 children in a total of 21 births. She had four sets of quadruplets, seven sets of triplets and ten sets of twins. 
All of the children were alive in 1755, when Kirillov, aged 60, was presented at court. As with the Vassilyev case, the truth of these claims has not been established, and is highly improbable. |- !53|Barbara and Adam Stratzmann |1498 |It is claimed that Barbara Stratzmann (c. 1448–1503) of Bönnigheim, Germany, gave birth to 53 children (38 sons and 15 daughters) in a total" https://en.wikipedia.org/wiki/Large%20numbers,"Large numbers are numbers significantly larger than those typically used in everyday life (for instance in simple counting or in monetary transactions), appearing frequently in fields such as mathematics, cosmology, cryptography, and statistical mechanics. They are typically large positive integers, or more generally, large positive real numbers, but may also be other numbers in other contexts. Googology is the study of nomenclature and properties of large numbers. In the everyday world Scientific notation was created to handle the wide range of values that occur in scientific study. 1.0 × 109, for example, means one billion, or a 1 followed by nine zeros: 1 000 000 000. The reciprocal, 1.0 × 10−9, means one billionth, or 0.000 000 001. Writing 109 instead of nine zeros saves readers the effort and hazard of counting a long series of zeros to see how large the number is. In addition to scientific (powers of 10) notation, the following examples include (short scale) systematic nomenclature of large numbers. Examples of large numbers describing everyday real-world objects include: The number of cells in the human body (estimated at 3.72 × 1013), or 37.2 trillion The number of bits on a computer hard disk (, typically about 1013, 1–2 TB), or 10 trillion The number of neuronal connections in the human brain (estimated at 1014), or 100 trillion The Avogadro constant is the number of “elementary entities” (usually atoms or molecules) in one mole; the number of atoms in 12 grams of carbon-12 approximately , or 602.2 sextillion. The total number of DNA base pairs within the entire biomass on Earth, as a possible approximation of global biodiversity, is estimated at (5.3 ± 3.6) × 1037, or 53±36 undecillion The mass of Earth consists of about 4 × 1051, or 4 sexdecillion, nucleons The estimated number of atoms in the observable universe (1080), or 100 quinvigintillion The lower bound on the game-tree complexity of chess, also known as the “Shannon number” (estim" https://en.wikipedia.org/wiki/Multiscale%20geometric%20analysis,"Multiscale geometric analysis or geometric multiscale analysis is an emerging area of high-dimensional signal processing and data analysis. See also Wavelet Scale space Multi-scale approaches Multiresolution analysis Singular value decomposition Compressed sensing Further reading Signal processing Spatial analysis" https://en.wikipedia.org/wiki/List%20of%20self-intersecting%20polygons,"Self-intersecting polygons, crossed polygons, or self-crossing polygons are polygons some of whose edges cross each other. They contrast with simple polygons, whose edges never cross. 
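As a concrete illustration of the definition (a sketch added here, not part of the article), the following Python snippet tests whether any two non-adjacent edges of a polygon cross, using a standard orientation-based segment test; a regular pentagon passes as simple, while the pentagram — which also appears in the list of star polygons that follows — does not.

    import math
    from itertools import combinations

    def _orient(a, b, c):
        # Sign of the cross product (b - a) x (c - a).
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    def _segments_cross(p1, p2, q1, q2):
        # Proper crossing: each segment's endpoints lie strictly on opposite
        # sides of the other segment.
        return (_orient(p1, p2, q1) * _orient(p1, p2, q2) < 0 and
                _orient(q1, q2, p1) * _orient(q1, q2, p2) < 0)

    def is_self_intersecting(vertices):
        n = len(vertices)
        edges = [(vertices[i], vertices[(i+1) % n]) for i in range(n)]
        for (i, e1), (j, e2) in combinations(enumerate(edges), 2):
            if j in ((i+1) % n, (i-1) % n):
                continue                  # skip adjacent edges (they share a vertex)
            if _segments_cross(*e1, *e2):
                return True
        return False

    # A pentagram: connect every second vertex of a regular pentagon.
    pentagon = [(math.cos(2*math.pi*k/5), math.sin(2*math.pi*k/5)) for k in range(5)]
    pentagram = [pentagon[(2*k) % 5] for k in range(5)]

    print(is_self_intersecting(pentagon))    # False - a simple polygon
    print(is_self_intersecting(pentagram))   # True  - its edges cross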
Some types of self-intersecting polygons are: the crossed quadrilateral, with four edges the antiparallelogram, a crossed quadrilateral with alternate edges of equal length the crossed rectangle, an antiparallelogram whose edges are two opposite sides and the two diagonals of a rectangle, hence having two edges parallel Star polygons pentagram, with five edges hexagram, with six edges heptagram, with seven edges octagram, with eight edges enneagram or nonagram, with nine edges decagram, with ten edges hendecagram, with eleven edges dodecagram, with twelve edges icositetragram, with twenty-four edges 257-gram, with two hundred and fifty-seven edges See also Complex polygon Geometric shapes Mathematics-related lists" https://en.wikipedia.org/wiki/Mobile%20phone,"A mobile phone (or cellphone) is a portable telephone that can make and receive calls over a radio frequency link while the user is moving within a telephone service area, as opposed to a fixed-location phone (landline phone). The radio frequency link establishes a connection to the switching systems of a mobile phone operator, which provides access to the public switched telephone network (PSTN). Modern mobile telephone services use a cellular network architecture and therefore mobile telephones are called cellphones (or ""cell phones"") in North America. In addition to telephony, digital mobile phones support a variety of other services, such as text messaging, multimedia messaging, email, Internet access (via LTE, 5G NR or Wi-Fi), short-range wireless communications (infrared, Bluetooth), satellite access (navigation, messaging connectivity), business applications, video games and digital photography. Mobile phones offering only basic capabilities are known as feature phones; mobile phones which offer greatly advanced computing capabilities are referred to as smartphones. The first handheld mobile phone was demonstrated by Martin Cooper of Motorola in New York City on 3 April 1973, using a handset weighing c. 2 kilograms (4.4 lbs). In 1979, Nippon Telegraph and Telephone (NTT) launched the world's first cellular network in Japan. In 1983, the DynaTAC 8000x was the first commercially available handheld mobile phone. From 1983 to 2014, worldwide mobile phone subscriptions grew to over seven billion; enough to provide one for every person on Earth. In the first quarter of 2016, the top smartphone developers worldwide were Samsung, Apple and Huawei; smartphone sales represented 78 percent of total mobile phone sales. For feature phones (slang: ""dumbphones""), the top-selling brands were Samsung, Nokia and Alcatel. Mobile phones are considered an important human invention, as they have been among the most widely used and sold pieces of consumer technology. The growth in " https://en.wikipedia.org/wiki/Tektronix%20hex%20format,"Tektronix hex format (TEK HEX) and Extended Tektronix hex format (EXT TEK HEX or XTEK) / Extended Tektronix Object Format are ASCII-based hexadecimal file formats, created by Tektronix, for conveying binary information for applications like programming microcontrollers, EPROMs, and other kinds of chips. Each line of a Tektronix hex file starts with a slash (/) character, whereas extended Tektronix hex files start with a percent (%) character. Tektronix hex format A line consists of four parts, excluding the initial '/' character: Address — 4 character (2 byte) field containing the address where the data is to be loaded into memory. This limits the address to a maximum value of FFFF (hexadecimal). 
Byte count — 2 character (1 byte) field containing the length of the data fields. Prefix checksum — 2 character (1 byte) field containing the checksum of the prefix. The prefix checksum is the 8-bit sum of the four-bit hexadecimal value of the six digits that make up the address and byte count. Data -- contains the data to be transferred, followed by a 2 character (1 byte) checksum. The data checksum is the 8-bit sum, modulo 256, of the 4-bit hexadecimal values of the digits that make up the data bytes. Extended Tektronix hex format A line consists of five parts, excluding the initial '%' character: Record Length — 2 character (1 byte) field that specifies the number of characters (not bytes) in the record, excluding the percent sign. Type — 1 character field, specifies whether the record is data (6) or termination (8). (6 record contains data, placed at the address specified. 8 termination record: The address field may optionally contain the address of the instruction to which control is passed ; there is no data field.) Checksum — 2 hex digits (1 byte, represents the sum of all the nibbles on the line, excluding the checksum itself. Address — 2 to N character field. The first character is how many characters are to follow for this field. The remaining characters contain" https://en.wikipedia.org/wiki/Multicast%20router%20discovery,"Multicast router discovery (MRD) provides a general mechanism for the discovery of multicast routers on an IP network. For IPv4, the mechanism is based on IGMP. For IPv6 the mechanism is based on MLD. Multicast router discovery is defined by RFC 4286. Computer networking Internet Protocol" https://en.wikipedia.org/wiki/Nigel%20Scrutton,"Nigel Shaun Scrutton (born 2 April 1964) is a British biochemist and biotechnology innovator known for his work on enzyme catalysis, biophysics and synthetic biology. He is Director of the UK Future Biomanufacturing Research Hub, Director of the Fine and Speciality Chemicals Synthetic Biology Research Centre (SYNBIOCHEM), and Co-founder, Director and Chief Scientific Officer of the 'fuels-from-biology' company C3 Biotechnologies Ltd. He is Professor of Enzymology and Biophysical Chemistry in the Department of Chemistry at the University of Manchester. He is former Director of the Manchester Institute of Biotechnology (MIB) (2010 to 2020). Early life and education Scrutton was born in Batley, West Riding of Yorkshire and was brought up in Cleckheaton where he went to Whitcliffe Mount School. Scrutton graduated from King's College London with a first class Bachelor of Science degree in Biochemistry in 1985. He was a Benefactors' Scholar at St John's College, Cambridge where he completed his doctoral research (PhD) in 1988 supervised by Richard Perham. He was a Research Fellow of St John's College, Cambridge (1989–92) and a Fellow / Director of Studies at Churchill College, Cambridge (1992–95). He was awarded a Doctor of Science (ScD) degree in 2003 by the University of Cambridge. Career and research Following his PhD, Scrutton was appointed as Lecturer (1995), then Reader (1997) and Professor (1999) at the University of Leicester before being appointed Professor at the University of Manchester in 2005. 
He has held successive research fellowships over 29 years from the Royal Commission for the Exhibition of 1851 (1851 Research Fellowship), St John's College, Cambridge, the Royal Society (Royal Society University Research Fellow and Royal Society Wolfson Research Merit Award), the Lister Institute of Preventive Medicine, the Biotechnology and Biological Sciences Research Council (BBSRC) and the Engineering and Physical Sciences Research Council (EPSRC). He has been V" https://en.wikipedia.org/wiki/Multidimensional%20signal%20processing,"In signal processing, multidimensional signal processing covers all signal processing done using multidimensional signals and systems. While multidimensional signal processing is a subset of signal processing, it is unique in the sense that it deals specifically with data that can only be adequately detailed using more than one dimension. In m-D digital signal processing, useful data is sampled in more than one dimension. Examples of this are image processing and multi-sensor radar detection. Both of these examples use multiple sensors to sample signals and form images based on the manipulation of these multiple signals. Processing in multi-dimension (m-D) requires more complex algorithms, compared to the 1-D case, to handle calculations such as the fast Fourier transform due to more degrees of freedom. In some cases, m-D signals and systems can be simplified into single dimension signal processing methods, if the considered systems are separable. Typically, multidimensional signal processing is directly associated with digital signal processing because its complexity warrants the use of computer modelling and computation. A multidimensional signal is similar to a single dimensional signal as far as manipulations that can be performed, such as sampling, Fourier analysis, and filtering. The actual computations of these manipulations grow with the number of dimensions. Sampling Multidimensional sampling requires different analysis than typical 1-D sampling. Single dimension sampling is executed by selecting points along a continuous line and storing the values of this data stream. In the case of multidimensional sampling, the data is selected utilizing a lattice, which is a ""pattern"" based on the sampling vectors of the m-D data set. These vectors can be single dimensional or multidimensional depending on the data and the application. Multidimensional sampling is similar to classical sampling as it must adhere to the Nyquist–Shannon sampling theorem. It is affect" https://en.wikipedia.org/wiki/List%20of%20contributors%20to%20general%20relativity,"This is a dynamic list of persons who have made major contributions to the (mainstream) development of general relativity, as acknowledged by standard texts on the subject. Some related lists are mentioned at the bottom of the page. A Peter C. Aichelburg (Aichelburg–Sexl ultraboost, generalized symmetries), Miguel Alcubierre (numerical relativity, Alcubierre drives), Richard L. Arnowitt (ADM formalism), Abhay Ashtekar (Ashtekar variables, dynamical horizons) B Robert M L Baker, Jr. (high-frequency gravitational waves), James M. Bardeen (Bardeen vacuum, black hole mechanics, gauge-invariant linear perturbations of Friedmann-Lemaître cosmologies), Barry Barish (LIGO builder, gravitational-waves observation), Robert Bartnik (existence of ADM mass for asymptotically flat vacuums, quasilocal mass), Jacob Bekenstein (black hole entropy), Vladimir A. 
Belinsky (BKL conjecture, inverse scattering transform solution generating methods), Peter G. Bergmann (constrained Hamiltonian dynamics), Bruno Bertotti (Bertotti–Robinson electrovacuum), Jiří Bičák (exact solutions of Einstein field equations), Heinz Billing (prototype of laser interferometric gravitational-wave detector), George David Birkhoff (Birkhoff's theorem), Hermann Bondi (gravitational radiation, Bondi radiation chart, Bondi mass–energy–momentum, LTB dust, maverick models), William B. Bonnor (Bonnor beam solution), Robert H. Boyer (Boyer–Lindquist coordinates), Vladimir Braginsky (gravitational-wave detector, quantum nondemolition (QND) measurement) Carl H. Brans (Brans–Dicke theory), Hubert Bray (Riemannian Penrose inequality), Hans Adolph Buchdahl (Buchdahl fluid, Buchdahl theorem), Claudio Bunster (BTZ black hole, Surface terms in Hamiltonian formulation), William L. Burke (Burke potential, textbook) C Bernard Carr (self-similarity hypothesis, primordial black holes), Brandon Carter (no-hair theorem, Carter constant, black-hole mechanics, variational principle for Ernst vacuums), " https://en.wikipedia.org/wiki/ScreenOS,"ScreenOS is a real-time embedded operating system for the NetScreen range of hardware firewall devices from Juniper Networks. Features Beside transport level security ScreenOS also integrates these flow management applications: IP gateway VPN management – ICSA-certified IPSec IP packet inspection (low level) for protection against TCP/IP attacks Virtualization for network segmentation Possible NSA backdoor and 2015 ""Unauthorized Code"" incident In December 2015, Juniper Networks announced that it had found unauthorized code in ScreenOS that had been there since August 2012. The two backdoors it created would allow sophisticated hackers to control the firewall of un-patched Juniper Netscreen products and decrypt network traffic. At least one of the backdoors appeared likely to have been the effort of a governmental interest. There was speculation in the security field about whether it was the NSA. Many in the security industry praised Juniper for being transparent about the breach. WIRED speculated that the lack of details that were disclosed and the intentional use of a random number generator with known security flaws could suggest that it was planted intentionally. NSA and GCHQ A 2011 leaked NSA document says that GCHQ had current exploit capability against the following ScreenOS devices: NS5gt, N25, NS50, NS500, NS204, NS208, NS5200, NS5000, SSG5, SSG20, SSG140, ISG 1000, ISG 2000. The exploit capabilities seem consistent with the program codenamed FEEDTROUGH. Versions" https://en.wikipedia.org/wiki/Carleman%20linearization,"In mathematics, Carleman linearization (or Carleman embedding) is a technique to transform a finite-dimensional nonlinear dynamical system into an infinite-dimensional linear system. It was introduced by the Swedish mathematician Torsten Carleman in 1932. Carleman linearization is related to composition operator and has been widely used in the study of dynamical systems. It also been used in many applied fields, such as in control theory and in quantum computing. Procedure Consider the following autonomous nonlinear system: where denotes the system state vector. Also, and 's are known analytic vector functions, and is the element of an unknown disturbance to the system. 
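A minimal sketch in Python with NumPy, illustrating the Carleman idea on a scalar toy system (the system dx/dt = -x + x**2, the truncation order and the step size are all choices made for this illustration, not taken from the article): the monomials y_k = x**k obey dy_k/dt = -k*y_k + k*y_{k+1}, an infinite linear system that is truncated to a finite matrix.

    import numpy as np

    # Toy nonlinear system: dx/dt = -x + x**2.
    # Carleman embedding: with y_k = x**k, dy_k/dt = -k*y_k + k*y_{k+1},
    # an infinite *linear* system; truncating at order N gives a finite approximation.
    N = 8                                  # truncation order (an assumption for this sketch)
    A = np.zeros((N, N))
    for k in range(1, N + 1):
        A[k-1, k-1] = -k
        if k < N:
            A[k-1, k] = k

    x0 = 0.3                               # small initial condition so the truncation behaves
    y = np.array([x0**k for k in range(1, N + 1)])

    # Integrate dy/dt = A y with a simple explicit Euler scheme and compare the
    # first component y_1 ~ x(t) against a direct simulation of the nonlinear ODE.
    dt, steps = 1e-3, 2000
    x = x0
    for _ in range(steps):
        y = y + dt * (A @ y)
        x = x + dt * (-x + x**2)

    print("Carleman (truncated, linear):", y[0])
    print("direct nonlinear simulation :", x)

For small initial conditions the truncated linear model tracks the direct nonlinear simulation closely; the approximation degrades as the state grows.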
At the desired nominal point, the nonlinear functions in the above system can be approximated by Taylor expansion where is the partial derivative of with respect to at and denotes the Kronecker product. Without loss of generality, we assume that is at the origin. Applying Taylor approximation to the system, we obtain where and . Consequently, the following linear system for higher orders of the original states are obtained: where , and similarly . Employing Kronecker product operator, the approximated system is presented in the following form where , and and matrices are defined in (Hashemian and Armaou 2015). See also Carleman matrix Composition operator" https://en.wikipedia.org/wiki/Water%20activity,"Water activity (aw) is the partial vapor pressure of water in a solution divided by the standard state partial vapor pressure of water. In the field of food science, the standard state is most often defined as pure water at the same temperature. Using this particular definition, pure distilled water has a water activity of exactly one. Water activity is the thermodynamic activity of water as solvent and the relative humidity of the surrounding air after equilibration. As temperature increases, aw typically increases, except in some products with crystalline salt or sugar. Water migrates from areas of high aw to areas of low aw. For example, if honey (aw ≈ 0.6) is exposed to humid air (aw ≈ 0.7), the honey absorbs water from the air. If salami (aw ≈ 0.87) is exposed to dry air (aw ≈ 0.5), the salami dries out, which could preserve it or spoil it. Lower aw substances tend to support fewer microorganisms since these get desiccated by the water migration. Formula The definition of is where is the partial water vapor pressure in equilibrium with the solution, and is the (partial) vapor pressure of pure water at the same temperature. An alternate definition can be where is the activity coefficient of water and is the mole fraction of water in the aqueous fraction. Relationship to relative humidity: The relative humidity (RH) of air in equilibrium with a sample is also called the Equilibrium Relative Humidity (ERH) and is usually given as a percentage. It is equal to water activity according to The estimated mold-free shelf life (MFSL) in days at 21 °C depends on water activity according to Uses Water activity is an important characteristic for food product design and food safety. Food product design Food designers use water activity to formulate shelf-stable food. If a product is kept below a certain water activity, then mold growth is inhibited. This results in a longer shelf life. Water activity values can also help limit moisture migration within a food " https://en.wikipedia.org/wiki/A%20New%20Kind%20of%20Science,"A New Kind of Science is a book by Stephen Wolfram, published by his company Wolfram Research under the imprint Wolfram Media in 2002. It contains an empirical and systematic study of computational systems such as cellular automata. Wolfram calls these systems simple programs and argues that the scientific philosophy and methods appropriate for the study of simple programs are relevant to other fields of science. Contents Computation and its implications The thesis of A New Kind of Science (NKS) is twofold: that the nature of computation must be explored experimentally, and that the results of these experiments have great relevance to understanding the physical world. 
Simple programs The basic subject of Wolfram's ""new kind of science"" is the study of simple abstract rules—essentially, elementary computer programs. In almost any class of a computational system, one very quickly finds instances of great complexity among its simplest cases (after a time series of multiple iterative loops, applying the same simple set of rules on itself, similar to a self-reinforcing cycle using a set of rules). This seems to be true regardless of the components of the system and the details of its setup. Systems explored in the book include, amongst others, cellular automata in one, two, and three dimensions; mobile automata; Turing machines in 1 and 2 dimensions; several varieties of substitution and network systems; recursive functions; nested recursive functions; combinators; tag systems; register machines; reversal-addition. For a program to qualify as simple, there are several requirements: Its operation can be completely explained by a simple graphical illustration. It can be completely explained in a few sentences of human language. It can be implemented in a computer language using just a few lines of code. The number of its possible variations is small enough so that all of them can be computed. Generally, simple programs tend to have a very simple abstract framework." https://en.wikipedia.org/wiki/Wavefront%20coding,"In optics and signal processing, wavefront coding refers to the use of a phase modulating element in conjunction with deconvolution to extend the depth of field of a digital imaging system such as a video camera. Wavefront coding falls under the broad category of computational photography as a technique to enhance the depth of field. Encoding The wavefront of a light wave passing through the camera system is modulated using optical elements that introduce a spatially varying optical path length. The modulating elements must be placed at or near the plane of the aperture stop or pupil so that the same modulation is introduced for all field angles across the field-of-view. This modulation corresponds to a change in complex argument of the pupil function of such an imaging device, and it can be engineered with different goals in mind: e.g. extending the depth of focus. Linear phase mask Wavefront coding with linear phase masks works by creating an optical transfer function that encodes distance information. Cubic phase mask Wavefront Coding with cubic phase masks works to blur the image uniformly using a cubic shaped waveplate so that the intermediate image, the optical transfer function, is out of focus by a constant amount. Digital image processing then removes the blur and introduces noise depending upon the physical characteristics of the processor. Dynamic range is sacrificed to extend the depth of field depending upon the type of filter used. It can also correct optical aberration. The mask was developed by using the ambiguity function and the stationary phase method History The technique was pioneered by radar engineer Edward Dowski and his thesis adviser Thomas Cathey at the University of Colorado in the United States in the 1990s. The University filed a patent on the invention. Cathey, Dowski and Merc Mercure founded a company to commercialize the method called CDM-Optics, and licensed the invention from the University. The company was acquired in " https://en.wikipedia.org/wiki/Taxonomic%20rank,"In biology, taxonomic rank is the relative level of a group of organisms (a taxon) in an ancestral or hereditary hierarchy. 
A common system of biological classification (taxonomy) consists of species, genus, family, order, class, phylum, kingdom, and domain. While older approaches to taxonomic classification were phenomenological, forming groups on the basis of similarities in appearance, organic structure and behaviour, methods based on genetic analysis have opened the road to cladistics. A given rank subsumes less general categories under it, that is, more specific descriptions of life forms. Above it, each rank is classified within more general categories of organisms and groups of organisms related to each other through inheritance of traits or features from common ancestors. The rank of any species and the description of its genus is basic; which means that to identify a particular organism, it is usually not necessary to specify ranks other than these first two. Consider a particular species, the red fox, Vulpes vulpes: the specific name or specific epithet vulpes (small v) identifies a particular species in the genus Vulpes (capital V) which comprises all the ""true"" foxes. Their close relatives are all in the family Canidae, which includes dogs, wolves, jackals, and all foxes; the next higher major rank, the order Carnivora, includes caniforms (bears, seals, weasels, skunks, raccoons and all those mentioned above), and feliforms (cats, civets, hyenas, mongooses). Carnivorans are one group of the hairy, warm-blooded, nursing members of the class Mammalia, which are classified among animals with backbones in the phylum Chordata, and with them among all animals in the kingdom Animalia. Finally, at the highest rank all of these are grouped together with all other organisms possessing cell nuclei in the domain Eukarya. The International Code of Zoological Nomenclature defines rank as: ""The level, for nomenclatural purposes, of a taxon in a taxonomic hierarchy (" https://en.wikipedia.org/wiki/Virtual%20instrumentation,"Virtual instrumentation is the use of customizable software and modular measurement hardware to create user-defined measurement systems, called virtual instruments. Traditional hardware instrumentation systems are made up of fixed hardware components, such as digital multimeters and oscilloscopes that are completely specific to their stimulus, analysis, or measurement function. Because of their hard-coded function, these systems are more limited in their versatility than virtual instrumentation systems. The primary difference between hardware instrumentation and virtual instrumentation is that software is used to replace a large amount of hardware. The software enables complex and expensive hardware to be replaced by already purchased computer hardware; e. g. analog-to-digital converter can act as a hardware complement of a virtual oscilloscope, a potentiostat enables frequency response acquisition and analysis in electrochemical impedance spectroscopy with virtual instrumentation. The concept of a synthetic instrument is a subset of the virtual instrument concept. A synthetic instrument is a kind of virtual instrument that is purely software defined. A synthetic instrument performs a specific synthesis, analysis, or measurement function on completely generic, measurement agnostic hardware. Virtual instruments can still have measurement specific hardware, and tend to emphasize modular hardware approaches that facilitate this specificity. Hardware supporting synthetic instruments is by definition not specific to the measurement, nor is it necessarily (or usually) modular. 
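The red fox example above can be captured as a simple nested record; the mapping below merely restates the ranks listed in the text (an illustration, not a nomenclatural standard).

# Major taxonomic ranks for the red fox, most general first, as listed above.
red_fox = {
    "domain":  "Eukarya",
    "kingdom": "Animalia",
    "phylum":  "Chordata",
    "class":   "Mammalia",
    "order":   "Carnivora",
    "family":  "Canidae",
    "genus":   "Vulpes",
    "species": "Vulpes vulpes",   # genus plus specific epithet (binomial)
}
for rank, name in red_fox.items():
    print(f"{rank:>8}: {name}")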
Leveraging commercially available technologies, such as the PC and the analog-to-digital converter, virtual instrumentation has grown significantly since its inception in the late 1970s. Additionally, software packages like National Instruments' LabVIEW and other graphical programming languages helped grow adoption by making it easier for non-programmers to develop systems. The newly updated " https://en.wikipedia.org/wiki/Superorganism,"A superorganism or supraorganism is a group of synergetically interacting organisms of the same species. A community of synergetically interacting organisms of different species is called a holobiont. Concept The term superorganism is used most often to describe a social unit of eusocial animals, where division of labour is highly specialised and where individuals are not able to survive by themselves for extended periods. Ants are the best-known example of such a superorganism. A superorganism can be defined as ""a collection of agents which can act in concert to produce phenomena governed by the collective"", phenomena being any activity ""the hive wants"" such as ants collecting food and avoiding predators, or bees choosing a new nest site. In challenging environments, micro organisms collaborate and evolve together to process unlikely sources of nutrients such as methane. This process called syntrophy (""eating together"") might be linked to the evolution of eukaryote cells and involved in the emergence or maintenance of life forms in challenging environments on Earth and possibly other planets. Superorganisms tend to exhibit homeostasis, power law scaling, persistent disequilibrium and emergent behaviours. The term was coined in 1789 by James Hutton, the ""father of geology"", to refer to Earth in the context of geophysiology. The Gaia hypothesis of James Lovelock, and Lynn Margulis as well as the work of Hutton, Vladimir Vernadsky and Guy Murchie, have suggested that the biosphere itself can be considered a superorganism, although this has been disputed. This view relates to systems theory and the dynamics of a complex system. The concept of a superorganism raises the question of what is to be considered an individual. Toby Tyrrell's critique of the Gaia hypothesis argues that Earth's climate system does not resemble an animal's physiological system. Planetary biospheres are not tightly regulated in the same way that animal bodies are: ""planets, unlike animals, " https://en.wikipedia.org/wiki/Notation%20in%20probability%20and%20statistics,"Probability theory and statistics have some commonly used conventions, in addition to standard mathematical notation and mathematical symbols. Probability theory Random variables are usually written in upper case roman letters: , , etc. Particular realizations of a random variable are written in corresponding lower case letters. For example, could be a sample corresponding to the random variable . A cumulative probability is formally written to differentiate the random variable from its realization. The probability is sometimes written to distinguish it from other functions and measure P so as to avoid having to define ""P is a probability"" and is short for , where is the event space and is a random variable. notation is used alternatively. or indicates the probability that events A and B both occur. The joint probability distribution of random variables X and Y is denoted as , while joint probability mass function or probability density function as and joint cumulative distribution function as . 
or indicates the probability of either event A or event B occurring (""or"" in this case means one or the other or both). σ-algebras are usually written with uppercase calligraphic (e.g. for the set of sets on which we define the probability P) Probability density functions (pdfs) and probability mass functions are denoted by lowercase letters, e.g. , or . Cumulative distribution functions (cdfs) are denoted by uppercase letters, e.g. , or . Survival functions or complementary cumulative distribution functions are often denoted by placing an overbar over the symbol for the cumulative:, or denoted as , In particular, the pdf of the standard normal distribution is denoted by , and its cdf by . Some common operators: : expected value of X : variance of X : covariance of " https://en.wikipedia.org/wiki/Code%20of%20the%20Quipu,"Code of the Quipu is a book on the Inca system of recording numbers and other information by means of a quipu, a system of knotted strings. It was written by mathematician Marcia Ascher and anthropologist Robert Ascher, and published as Code of the Quipu: A Study in Media, Mathematics, and Culture by the University of Michigan Press in 1981. Dover Books republished it with corrections in 1997 as Mathematics of the Incas: Code of the Quipu. The Basic Library List Committee of the Mathematical Association of America has recommended its inclusion in undergraduate mathematics libraries. Topics The book describes (necessarily by inference, as there is no written record beyond the quipu the themselves) the uses of the quipu, for instance in accounting and taxation. Although 400 quipu are known to survive, the book's study is based on a selection of 191 of them, described in a companion databook. It analyzes the mathematical principles behind the use of the quipu, including a decimal form of positional notation, the concept of zero, rational numbers, and arithmetic, and the way the spatial relations between the strings of a quipu recorded hierarchical and categorical information. It argues that beyond its use in recording numbers, the quipu acted as a method for planning for future events, and as a writing system for the Inca, and that it provides a tangible representation of ""insistence"", the thematic concerns in Inca culture for symmetry and spatial and hierarchical connections. The initial chapters of the book provide an introduction to Inca society and the physical organization of a quipu (involving the colors, size, direction, and hierarchy of its strings), and discussions of repeated themes in Inca society and of the place of the quipu and its makers in that society. Later chapters discuss the mathematical structure of the quipu and of the information it stores, with reference to similarly-structured data in modern society and exercises that ask students to constr" https://en.wikipedia.org/wiki/Postglacial%20vegetation,"Postglacial vegetation refers to plants that colonize the newly exposed substrate after a glacial retreat. The term ""postglacial"" typically refers to processes and events that occur after the departure of glacial ice or glacial climates. Climate Influence Climate change is the main force behind changes in species distribution and abundance. Repeated changes in climate throughout the Quaternary Period are thought to have had a significant impact on the current vegetation species diversity present today. 
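A brief illustration of the distributional notation conventions above, using SciPy's standard normal distribution (assuming SciPy is available): the lowercase pdf phi, the uppercase cdf Phi, the survival function, and the expectation and variance operators.

# pdf (lowercase), cdf (uppercase), survival function, E[X] and Var(X)
# for the standard normal distribution.
from scipy.stats import norm

x = 1.0
print("phi(1)      =", norm.pdf(x))    # density, written with a lowercase letter
print("Phi(1)      =", norm.cdf(x))    # cumulative distribution, uppercase
print("survival(1) =", norm.sf(x))     # complementary cdf, 1 - Phi(1)
print("E[X]   =", norm.mean())         # expected value operator
print("Var(X) =", norm.var())          # variance operator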
Functional and phylogenetic diversity are considered to be closely related to changing climatic conditions, this indicates that trait differences are extremely important in long term responses to climate change. During the transition from the last glaciation of the Pleistocene to the Holocene period, climate warming resulted in the expansion of taller plants and larger seed bearing plants which resulted in lower proportions of vegetation regeneration. Hence, low temperatures can be strong environmental filters that prevent tall and large-seeded plants from establishing in postglacial environments. Throughout Europe vegetation dynamics within the first half of the Holocene appear to have been influenced mainly by climate and the reorganization of atmospheric circulation associated with the disappearance of the North American ice sheet. This is evident in the rapid increase of forestation and changing biomes during the postglacial period between 11500ka and 8000ka before the present. Vegetation development periods of post-glacial land forms on Ellesmere Island, Northern Canada, is assumed to have been at least ca. 20,000 years in duration. This slow progression is mostly due to climatic restrictions such as an estimated annual rainfall amount of only 64mm and a mean annual temperature of -19.7 degrees Celsius. The length in time of vegetation development observed on Ellesmere Island is evidence that post glacial vegetation development is much more restricted in the Ar" https://en.wikipedia.org/wiki/Music%20and%20mathematics,"Music theory analyzes the pitch, timing, and structure of music. It uses mathematics to study elements of music such as tempo, chord progression, form, and meter. The attempt to structure and communicate new ways of composing and hearing music has led to musical applications of set theory, abstract algebra and number theory. While music theory has no axiomatic foundation in modern mathematics, the basis of musical sound can be described mathematically (using acoustics) and exhibits ""a remarkable array of number properties"". History Though ancient Chinese, Indians, Egyptians and Mesopotamians are known to have studied the mathematical principles of sound, the Pythagoreans (in particular Philolaus and Archytas) of ancient Greece were the first researchers known to have investigated the expression of musical scales in terms of numerical ratios, particularly the ratios of small integers. Their central doctrine was that ""all nature consists of harmony arising out of numbers"". From the time of Plato, harmony was considered a fundamental branch of physics, now known as musical acoustics. Early Indian and Chinese theorists show similar approaches: all sought to show that the mathematical laws of harmonics and rhythms were fundamental not only to our understanding of the world but to human well-being. Confucius, like Pythagoras, regarded the small numbers 1,2,3,4 as the source of all perfection. Time, rhythm, and meter Without the boundaries of rhythmic structure – a fundamental equal and regular arrangement of pulse repetition, accent, phrase and duration – music would not be possible. Modern musical use of terms like meter and measure also reflects the historical importance of music, along with astronomy, in the development of counting, arithmetic and the exact measurement of time and periodicity that is fundamental to physics. The elements of musical form often build strict proportions or hypermetric structures (powers of the numbers 2 and 3). 
Musical form Musical" https://en.wikipedia.org/wiki/Mills%27%20constant,"In number theory, Mills' constant is defined as the smallest positive real number A such that the floor function of the double exponential function is a prime number for all positive natural numbers n. This constant is named after William Harold Mills who proved in 1947 the existence of A based on results of Guido Hoheisel and Albert Ingham on the prime gaps. Its value is unproven, but if the Riemann hypothesis is true, it is approximately 1.3063778838630806904686144926... . Mills primes The primes generated by Mills' constant are known as Mills primes; if the Riemann hypothesis is true, the sequence begins . If ai denotes the i th prime in this sequence, then ai can be calculated as the smallest prime number larger than . In order to ensure that rounding , for n = 1, 2, 3, …, produces this sequence of primes, it must be the case that . The Hoheisel–Ingham results guarantee that there exists a prime between any two sufficiently large cube numbers, which is sufficient to prove this inequality if we start from a sufficiently large first prime . The Riemann hypothesis implies that there exists a prime between any two consecutive cubes, allowing the sufficiently large condition to be removed, and allowing the sequence of Mills primes to begin at a1 = 2. For all a > , there is at least one prime between and . This upper bound is much too large to be practical, as it is infeasible to check every number below that figure. However, the value of Mills' constant can be verified by calculating the first prime in the sequence that is greater than that figure. As of April 2017, the 11th number in the sequence is the largest one that has been proved prime. It is and has 20562 digits. , the largest known Mills probable prime (under the Riemann hypothesis) is , which is 555,154 digits long. Numerical calculation By calculating the sequence of Mills primes, one can approximate Mills' constant as Caldwell and Cheng used this method to compute 6850 base 10 digits of Mills" https://en.wikipedia.org/wiki/List%20of%204000-series%20integrated%20circuits,"The following is a list of CMOS 4000-series digital logic integrated circuits. In 1968, the original 4000-series was introduced by RCA. Although more recent parts are considerably faster, the 4000 devices operate over a wide power supply range (3V to 18V recommended range for ""B"" series) and are well suited to unregulated battery powered applications and interfacing with sensitive analogue electronics, where the slower operation may be an EMC advantage. The earlier datasheets included the internal schematics of the gate architectures and a number of novel designs are able to 'mis-use' this additional information to provide semi-analog functions for timing skew and linear signal amplification. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 4000 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers, and not all devices are available from all sources or in all package sizes. Overview Non-exhaustive list of manufacturers which make or have made these kind of ICs. 
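The Mills-prime recurrence described above can be checked numerically: with the approximate value of Mills' constant quoted in the text, floor(A^(3^n)) is computed below for n = 1..4 (beyond that the quoted digits of A no longer determine the integer part, so the loop stops). The trial-division primality test is only adequate for these small terms.

# floor(A ** (3 ** n)) for the first few n, using the value quoted above.
from decimal import Decimal, getcontext

getcontext().prec = 60
A = Decimal("1.3063778838630806904686144926")   # approximate Mills' constant

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

for n in range(1, 5):                  # the quoted precision of A limits n to 4
    candidate = int(A ** (3 ** n))     # floor of the double exponential
    print(n, candidate, is_prime(candidate))   # 2, 11, 1361, 2521008887 - all prime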
Current manufacturers of these ICs: Nexperia (spinoff from NXP) ON Semiconductor (acquired Motorola & Fairchild Semiconductor) Texas Instruments (acquired National Semiconductor) Former manufacturers of these ICs: Hitachi NXP (acquired Philips Semiconductors) RCA (defunct; first introduced this 4000-series family in 1968) Renesas Electronics (acquired Intersil) ST Microelectronics Toshiba Semiconductor VEB Kombinat Mikroelektronik (defunct; was active in the 1980s) Tesla Piešťany, s.p. (defunct; was active in the 1980s and 1990s) various manufacturers in the former Soviet Union (e.g. Angstrem, Mikron Group, Exiton, Splav, NZPP in Russia; Mezon in Moldavia; Integral in Byelorussia; Oktyabr in Ukraine; Billur in Azerbaijan) Logic gates Since there are numerous 4000-series parts, this section groups related combinational logic pa" https://en.wikipedia.org/wiki/Examples%20of%20Markov%20chains,"This article contains examples of Markov chains and Markov processes in action. All examples are in the countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space. Discrete-time Board games played with dice A game of snakes and ladders or any other game whose moves are determined entirely by dice is a Markov chain, indeed, an absorbing Markov chain. This is in contrast to card games such as blackjack, where the cards represent a 'memory' of the past moves. To see the difference, consider the probability for a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state, and the next roll of the dice. It doesn't depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown (and hence which cards are no longer in the deck), so the next state (or hand) of the game is not independent of the past states. Random walk Markov chains A center-biased random walk Consider a random walk on the number line where, at each step, the position (call it x) may change by +1 (to the right) or −1 (to the left) with probabilities: (where c is a constant greater than 0) For example, if the constant, c, equals 1, the probabilities of a move to the left at positions x = −2,−1,0,1,2 are given by respectively. The random walk has a centering effect that weakens as c increases. Since the probabilities depend only on the current position (value of x) and not on any prior positions, this biased random walk satisfies the definition of a Markov chain. Gambling Suppose that you start with $10, and you wager $1 on an unending, fair, coin toss indefinitely, or until you lose all of your money. If represents the number of dollars you have after n tosses, with , then the sequence is a Markov p" https://en.wikipedia.org/wiki/Multidimensional%20empirical%20mode%20decomposition,"In signal processing, multidimensional empirical mode decomposition (multidimensional EMD) is an extension of the one-dimensional (1-D) EMD algorithm to a signal encompassing multiple dimensions. The Hilbert–Huang empirical mode decomposition (EMD) process decomposes a signal into intrinsic mode functions combined with the Hilbert spectral analysis, known as the Hilbert–Huang transform (HHT). The multidimensional EMD extends the 1-D EMD algorithm into multiple-dimensional signals. 
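The gambling example in the Markov chain article above is straightforward to simulate; the sketch below plays the fair one-dollar coin-toss game starting from $10 and records the successive dollar amounts, whose next value depends only on the current one (the Markov property).

# Fair coin-toss gambling chain: start with $10, wager $1 per toss,
# stop on ruin or after a cap on the number of tosses.
import random

def play(start=10, max_tosses=100_000, seed=0):
    rng = random.Random(seed)
    x, history = start, [start]
    while x > 0 and len(history) <= max_tosses:
        x += 1 if rng.random() < 0.5 else -1     # +$1 or -$1 with equal chance
        history.append(x)
    return history

path = play()
print("first states:", path[:10])
if path[-1] == 0:
    print("ruined after", len(path) - 1, "tosses")
else:
    print("still solvent after", len(path) - 1, "tosses")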
This decomposition can be applied to image processing, audio signal processing, and various other multidimensional signals. Motivation Multidimensional empirical mode decomposition is a popular method because of its applications in many fields, such as texture analysis, financial applications, image processing, ocean engineering, seismic research, etc. Several methods of Empirical Mode Decomposition have been used to analyze characterization of multidimensional signals. Introduction to empirical mode decomposition (EMD) The empirical mode decomposition (EMD) method can extract global structure and deal with fractal-like signals. The EMD method was developed so that data can be examined in an adaptive time–frequency–amplitude space for nonlinear and non-stationary signals. The EMD method decomposes the input signal into several intrinsic mode functions (IMF) and a residue. The given equation will be as follows: where is the multi-component signal. is the intrinsic mode function, and represents the residue corresponding to intrinsic modes. Ensemble empirical mode decomposition The ensemble mean is an approach to improving the accuracy of measurements. Data is collected by separate observations, each of which contains different noise over an ensemble of universes. To generalize this ensemble idea, noise is introduced to the single data set, , as if separate observations were indeed being made as an analogue to a physical experiment that could be repeated many times. The added w" https://en.wikipedia.org/wiki/Reference%20designator,"A reference designator unambiguously identifies the location of a component within an electrical schematic or on a printed circuit board. The reference designator usually consists of one or two letters followed by a number, e.g. R13, C1002. The number is sometimes followed by a letter, indicating that components are grouped or matched with each other, e.g. R17A, R17B. IEEE 315 contains a list of Class Designation Letters to use for electrical and electronic assemblies. For example, the letter R is a reference prefix for the resistors of an assembly, C for capacitors, K for relays. History IEEE 200-1975 or ""Standard Reference Designations for Electrical and Electronics Parts and Equipments"" is a standard that was used to define referencing naming systems for collections of electronic equipment. IEEE 200 was ratified in 1975. The IEEE renewed the standard in the 1990s, but withdrew it from active support shortly thereafter. This document also has an ANSI document number, ANSI Y32.16-1975. This standard codified information from, among other sources, a United States military standard MIL-STD-16 which dates back to at least the 1950s in American industry. To replace IEEE 200–1975, ASME, a standards body for mechanical engineers, initiated the new standard ASME Y14.44-2008. This standard, along with IEEE 315–1975, provide the electrical designer with guidance on how to properly reference and annotate everything from a single circuit board to a collection of complete enclosures. Definition ASME Y14.44-2008 and IEEE 315-1975 define how to reference and annotate components of electronic devices. It breaks down a system into units, and then any number of sub-assemblies. The unit is the highest level of demarcation in a system and is always a numeral. Subsequent demarcation are called assemblies and always have the Class Letter ""A"" as a prefix following by a sequential number starting with 1. 
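The ensemble EMD idea described above can be sketched in a few lines. The routine emd below is a hypothetical stand-in for any one-dimensional EMD implementation (for example from the PyEMD package); the ensemble version adds independent white-noise realisations and averages the resulting IMFs, assuming for simplicity that each trial yields the same number of IMFs (real implementations may not).

# Conceptual ensemble EMD: average IMFs obtained from noise-perturbed copies.
# `emd(signal)` is a hypothetical helper returning an array of IMFs.
import numpy as np

def ensemble_emd(signal, emd, n_trials=100, noise_std=0.2, seed=0):
    rng = np.random.default_rng(seed)
    total = None
    for _ in range(n_trials):
        noisy = signal + noise_std * rng.standard_normal(len(signal))
        imfs = np.asarray(emd(noisy))   # shape (n_imfs, len(signal)), assumed fixed
        total = imfs if total is None else total + imfs
    return total / n_trials             # ensemble-averaged IMFs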
Any number of sub-assemblies may be defined until finally reaching the co" https://en.wikipedia.org/wiki/IPSANET,"IPSANET was a packet switching network written by I. P. Sharp Associates (IPSA). Operation began in May 1976. It initially used the IBM 3705 Communications Controller and Computer Automation LSI-2 computers as nodes. An Intel 80286 based-node was added in 1987. It was called the Beta node. The original purpose was to connect low-speed dumb terminals to a central time sharing host in Toronto. It was soon modified to allow a terminal to connect to an alternate host running the SHARP APL software under license. Terminals were initially either 2741-type machines based on the 14.8 characters/s IBM Selectric typewriter or 30 character/s ASCII machines. Link speed was limited to 9600 bit/s until about 1984. Other services including 2780/3780 Bisync support, remote printing, X.25 gateway and SDLC pipe lines were added in the 1978 to 1984 era. There was no general purpose data transport facility until the introduction of Network Shared Variable Processor (NSVP) in 1984. This allowed APL programs running on different hosts to communicate via Shared Variables. The Beta node improved performance and provided new services not tied to APL. An X.25 interface was the most important of these. It allowed connection to a host which was not running SHARP APL. IPSANET allowed for the development of an early yet advanced e-mail service, 666 BOX, which also became a major product for some time, originally hosted on IPSA's system, and later sold to end users to run on their own machines. NSVP allowed these remote e-mail systems to exchange traffic. The network reached its maximum size of about 300 nodes before it was shut down in 1993. External links IPSANET Archives Computer networking Packets (information technology)" https://en.wikipedia.org/wiki/Zero-crossing%20rate,"The zero-crossing rate (ZCR) is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive. Its value has been widely used in both speech recognition and music information retrieval, being a key feature to classify percussive sounds. ZCR is defined formally as where is a signal of length and is an indicator function. In some cases only the ""positive-going"" or ""negative-going"" crossings are counted, rather than all the crossings, since between a pair of adjacent positive zero-crossings there must be a single negative zero-crossing. For monophonic tonal signals, the zero-crossing rate can be used as a primitive pitch detection algorithm. Zero crossing rates are also used for Voice activity detection (VAD), which determines whether human speech is present in an audio segment or not. See also Zero crossing Digital signal processing" https://en.wikipedia.org/wiki/Hachimoji%20DNA,"Hachimoji DNA (from Japanese hachimoji, ""eight letters"") is a synthetic nucleic acid analog that uses four synthetic nucleotides in addition to the four present in the natural nucleic acids, DNA and RNA. This leads to four allowed base pairs: two unnatural base pairs formed by the synthetic nucleobases in addition to the two normal pairs. Hachimoji bases have been demonstrated in both DNA and RNA analogs, using deoxyribose and ribose respectively as the backbone sugar. Benefits of such a nucleic acid system may include an enhanced ability to store data, as well as insights into what may be possible in the search for extraterrestrial life. 
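The zero-crossing rate defined above translates directly into code; the NumPy sketch below counts sign changes between consecutive samples and divides by the number of sample pairs (one common normalisation; as noted above, some definitions count only positive-going crossings).

# Zero-crossing rate: fraction of consecutive sample pairs whose signs differ.
import numpy as np

def zero_crossing_rate(x):
    x = np.asarray(x, dtype=float)
    return np.mean(np.signbit(x[1:]) != np.signbit(x[:-1]))

t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)       # 440 Hz tone sampled at 8 kHz
print(zero_crossing_rate(tone))          # about 2 * 440 / 8000 = 0.11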
The hachimoji DNA system produced one type of catalytic RNA (ribozyme or aptamer) in vitro. Description Natural DNA is a molecule carrying the genetic instructions used in the growth, development, functioning, and reproduction of all known living organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life. DNA is a polynucleotide as it is composed of simpler monomeric units called nucleotides; when double-stranded, the two chains coil around each other to form a double helix. In natural DNA, each nucleotide is composed of one of four nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound to each other with hydrogen bonds, according to base pairing rules (A with T and C with G), to make double-stranded DNA. Hachimoji DNA is similar to natural DNA but differs in the number, and type, of nucleobases. Unn" https://en.wikipedia.org/wiki/Addition,"Addition (usually signified by the plus symbol ) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or sum of those values combined. The example in the adjacent image shows two columns of three apples and two apples each, totaling at five apples. This observation is equivalent to the mathematical expression (that is, ""3 plus 2 is equal to 5""). Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, subspaces and subgroups. Addition has several important properties. It is commutative, meaning that the order of the operands does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of is the same as counting (see Successor function). Addition of does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks to do. Addition of very small numbers is accessible to toddlers; the most basic task, , can be performed by infants as young as five months, and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day. 
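The natural base-pairing rules quoted in the hachimoji DNA description above (A with T, C with G) are easy to illustrate with a small helper that returns the complementary strand; the synthetic hachimoji pairs would simply add entries to the table, but they are omitted here since the extra letters are not named in the text above.

# Complementary strand of natural DNA under the pairing rules A-T and C-G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    return "".join(PAIR[base] for base in strand.upper())

print(complement("GATTACA"))   # -> CTAATGT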
Notation and terminology Addition is written using the plus sign ""+"" between the terms; " https://en.wikipedia.org/wiki/Glugging,"Glugging (also referred to as ""the glug-glug process"") is the physical phenomenon which occurs when a liquid is poured rapidly from a vessel with a narrow opening, such as a bottle. It is a facet of fluid dynamics. As liquid is poured from a bottle, the air pressure in the bottle is lowered, and air at higher pressure from outside the bottle is forced into the bottle, in the form of a bubble, impeding the flow of liquid. Once the bubble enters, more liquid escapes, and the process is repeated. The reciprocal action of glugging creates a rhythmic sound. The English word ""glug"" is onomatopoeic, describing this sound. Onomatopoeias in other languages include (German). Academic papers have been written about the physics of glugging, and about the impact of glugging sounds on consumers' perception of products such as wine. Research into glugging has been done using high-speed photography. Factors which affect glugging are the viscosity of the liquid, its carbonation, the size and shape of the container's neck and its opening (collectively referred to as ""bottle geometry""), the angle at which the container is held, and the ratio of air to liquid in the bottle (which means that the rate and the sound of the glugging changes as the bottle empties)." https://en.wikipedia.org/wiki/SeaSeep,"SeaSeep is a combination of 2D seismic data (a group of seismic lines acquired individually, as opposed to multiple closely spaced lines), high resolution multibeam sonar, which is an evolutionarily advanced form of side-scan sonar, navigated piston coring (one of the more common sea floor sampling methods), heat flow sampling (which serves a critical purpose in oil exploration and production) and possibly gravity and magnetic data (refer to Dick Gibson's Primer on Gravity and Magnetics). The term SeaSeep originally belonged to Black Gold Energy LLC and refers to a dataset that combines all of the available data into one integrated package that can be used in hydrocarbon exploration. With the acquisition of Black Gold Energy LLC by Niko Resources Ltd. in December 2009, the term now belongs to Niko Resources. The concept of a SeaSeep dataset is the modern-day offshore derivative of how many oil fields were found in the late 19th and early 20th century: by finding a large anticline structure with an associated oil seep. In the United States, many of the first commercial fields in California were found using this method, including the Newhall Field discovered in 1876 and the Kern River Field discovered in 1899. Seeps have also been used to find offshore fields, including the Cantarell Field in Mexico in 1976, the largest oil field in Mexico and one of the largest in the world. The field is named after a fisherman, Rudesindo Cantarell, who complained to PEMEX about his fishing nets being stained by oil seeps in the Bay of Campeche. The biological and geochemical manifestations of seepage lead to distinct bathymetrical features, including positive relief mounds, pinnacles, mud volcanoes and negative relief pockmarks. These features can be detected by multibeam sonar and then sampled by navigated piston coring."
Spec and proprietary multibeam seep mapping and core geochemistry by Texas A&M University's Geochemical & Environmental Research Group and later TDI Brooks " https://en.wikipedia.org/wiki/Out-of-band%20data,"In computer networking, out-of-band data is the data transferred through a stream that is independent from the main in-band data stream. An out-of-band data mechanism provides a conceptually independent channel, which allows any data sent via that mechanism to be kept separate from in-band data. The out-of-band data mechanism should be provided as an inherent characteristic of the data channel and transmission protocol, rather than requiring a separate channel and endpoints to be established. The term ""out-of-band data"" probably derives from out-of-band signaling, as used in the telecommunications industry. Example case Consider a networking application that tunnels data from a remote data source to a remote destination. The data being tunneled may consist of any bit patterns. The sending end of the tunnel may at times have conditions that it needs to notify the receiving end about. However, it cannot simply insert a message to the receiving end because that end will not be able to distinguish the message from data sent by the data source. By using an out-of-band mechanism, the sending end can send the message to the receiving end out of band. The receiving end will be notified in some fashion of the arrival of out-of-band data, and it can read the out-of-band data and know that this is a message intended for it from the sending end, independent of the data from the data source. Implementations It is possible to implement out-of-band data transmission using a physically separate channel, but most commonly out-of-band data is a feature provided by a transmission protocol using the same channel as normal data. A typical protocol might divide the data to be transmitted into blocks, with each block having a header word that identifies the type of data being sent, and a count of the data bytes or words to be sent in the block. The header will identify the data as being in-band or out-of-band, along with other identification and routing information. At the rece" https://en.wikipedia.org/wiki/Niven%27s%20constant,"In number theory, Niven's constant, named after Ivan Niven, is the largest exponent appearing in the prime factorization of any natural number n ""on average"". More precisely, if we define H(1) = 1 and H(n) = the largest exponent appearing in the unique prime factorization of a natural number n > 1, then Niven's constant is given by where ζ is the Riemann zeta function. In the same paper Niven also proved that where h(1) = 1, h(n) = the smallest exponent appearing in the unique prime factorization of each natural number n > 1, o is little o notation, and the constant c is given by and consequently that" https://en.wikipedia.org/wiki/Hanany%E2%80%93Witten%20transition,"In theoretical physics the Hanany–Witten transition, also called the Hanany–Witten effect, refers to any process in a superstring theory in which two p-branes cross, resulting in the creation or destruction of a third p-brane. A special case of this process was first discovered by Amihay Hanany and Edward Witten in 1996. All other known cases of Hanany–Witten transitions are related to the original case via combinations of S-dualities and T-dualities. The effect also extends to strings: when two strings cross, a third string can be created or destroyed. 
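The definition of Niven's constant above ("the largest exponent in the prime factorization of n, on average") can be estimated directly; the sketch below factorises every n up to a modest bound by trial division and averages the largest exponents, which tends toward roughly 1.705 as the bound grows (the exact limiting expression in terms of the zeta function was lost from the text above, so it is not reproduced here).

# Empirical estimate of Niven's constant: mean of H(n), the largest exponent
# in the prime factorisation of n, with H(1) = 1 as in the definition above.

def largest_exponent(n):
    if n == 1:
        return 1
    best, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            best = max(best, e)
        d += 1
    if n > 1:                         # a remaining prime factor has exponent 1
        best = max(best, 1)
    return best

N = 100_000                           # takes a few seconds in plain Python
print(sum(largest_exponent(n) for n in range(1, N + 1)) / N)   # roughly 1.70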
The original effect The original Hanany–Witten transition was discovered in type IIB superstring theory in flat, 10-dimensional Minkowski space. They considered a configuration of NS5-branes, D5-branes and D3-branes which today is called a Hanany–Witten brane cartoon. They demonstrated that a subsector of the corresponding open string theory is described by a 3-dimensional Yang–Mills gauge theory. However they found that the string theory space of solutions, called the moduli space, only agreed with the known Yang-Mills moduli space if whenever an NS5-brane and a D5-brane cross, a D3-brane stretched between them is created or destroyed. They also presented various other arguments in support of their effect, such as a derivation from the worldvolume Wess–Zumino terms. This proof uses the fact that the flux from each brane renders the action of the other brane ill-defined if one does not include the D3-brane. The S-rule Furthermore, they discovered the S-rule, which states that in a supersymmetric configuration the number of D3-branes stretched between a D5-brane and an NS5-brane may only be equal to 0 or 1. Then the Hanany-Witten effect implies that after the D5-brane and the NS5-brane cross, if there was a single D3-brane stretched between them it will be destroyed, and if there was not one then one will be created. In other words, there cannot be more than one D3 brane that stre" https://en.wikipedia.org/wiki/Network%20security,"Network security consists of the policies, processes and practices adopted to prevent, detect and monitor unauthorized access, misuse, modification, or denial of a computer network and network-accessible resources. Network security involves the authorization of access to data in a network, which is controlled by the network administrator. Users choose or are assigned an ID and password or other authenticating information that allows them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies and individuals. Networks can be private, such as within a company, and others which might be open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing operations being done. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password. Network security concept Network security starts with authentication, commonly with a username and a password. Since this requires just one detail authenticating the user name—i.e., the password—this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g., a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g., a fingerprint or retinal scan). Once authenticated, a firewall enforces access policies such as what services are allowed to be accessed by the network users. Though effective to prevent unauthorized access, this component may fail to check potentially harmful content such as computer worms or Trojans being transmitted over the network. 
Anti-virus software or an intrusion prevent" https://en.wikipedia.org/wiki/Unwired%20enterprise,"An unwired enterprise is an organization that extends and supports the use of traditional thick client enterprise applications to a variety of mobile devices and their users throughout the organization. The abiding characteristic is seamless universal mobile access to critical applications and business data. Use By supporting mobile clients alongside more traditional desktop and laptop clients, an unwired enterprise attempts to increase productivity rates and speed the pace of many common business processes through anytime/anywhere accessibility. Furthermore, it is believed that supporting mobile access to enterprise applications can help facilitate cogent decision making by pulling business data in real time from server systems and making it available to the mobile workforce at the decision point. Even though the wireless network is quite ubiquitous, this type of client application requires built-in procedures to deal with any network unavailability seamlessly, without interfering with application core functionality. Pervasive broadband, simplified wireless integration and a common management system are technology trends driving more organizations toward an unwired enterprise due to lowering complexity and greater ease of use. Unwired enterprises may include office environments in which workers are untethered from traditional desktop clients and conduct all business and communication from a wide variety of wireless devices. In the unwired enterprise, client platform and operating system are deemphasized as focus shifts away from platform homogeneity to fluid and expedient data exchange and technology agnosticism. Open standards industry initiatives such as the Open Handset Alliance are designed to help mobile technology vendors deliver on this promise." https://en.wikipedia.org/wiki/Shadow%20square,"The shadow square, also known as an altitude scale, was an instrument used to determine the linear height of an object, in conjunction with the alidade, for angular observations. An early example was described in an Arabic treatise likely dating to 9th or 10th-century Baghdad. Shadow squares are often found on the backs of astrolabes. Uses The main use of a shadow square is to measure the linear height of an object using its shadow. It does so by simulating the ratio between an object, generally a gnomon, and its shadow. If the sun's ray is between 0 degrees and 45 degrees the umbra versa (Vertical axis) is used, between 45 degrees and 90 degrees the umbra recta (Horizontal axis) is used and when the sun's ray is at 45 degrees its shadow falls exactly on the umbra media (y=x) It was used during the time of medieval astronomy to determine the height of, and to track the movement of celestial bodies such as the sun when more advanced measurement methods were not available. These methods can still be used today to determine the altitude, with reference to the horizon, of any visible celestial body. Gnomon A gnomon is used along with a shadow box commonly. A gnomon is a stick placed vertically in a sunny place so that it casts a shadow that can be measured. By studying the shadow of the gnomon you can learn a lot of information about the motion of the sun. Gnomons were most likely independently discovered by many ancient civilizations, but it is known that they were used in the 5th century BC in Greece. Most likely for the measurement of the winter and summer solstices. 
""Herodotus says in his Histories written around 450 B.C., that the Greeks learned the use of the gnomon from the Babylonians. Examples If your shadow is 4 feet long in your own feet, then what is the altitude of the sun? This problem can be solved through the use of the shadow box. The shadow box is divided in half, one half is calibrated by sixes the other by tens. Because it is a shadow cast by the" https://en.wikipedia.org/wiki/DAvE%20%28Infineon%29,"DAVE (Infineon) Digital Application Virtual Engineer (DAVE) is a C/C++-language software development and code generation tool for microcontroller applications. DAVE is a standalone system with automatic code generation modules. It is suited for the development of software drivers for Infineon microcontrollers and aids the developer with automatically created C-level templates and user-desired functionalities. The latest releases of DAVE include all required parts to develop code, compile and debug on the target for free (based on the ARM GCC tool suite). Together with several low-cost development boards, one can get involved in microcontroller design very easily. This makes Infineon microcontroller products also more useful to small companies and to home-use or DIY projects, similar to the established products of Atmel (AVR, SAM) and Microchip (PIC, PIC32) to name a few. DAVE was developed by Infineon Technologies. Therefore, the automatic code generator supports only Infineon microcontrollers. The user also has to get used to the concept of the Eclipse IDE. The generated code can be also used on other (often non-free) development environments from Keil, Tasking, and so on. Latest version 4 (beta) for ARM-based 32-bit Infineon processors The successor of the Eclipse-based development environment for C/C++ and/or GUI-based development using ""Apps"". It generates code for the latest XMC1xxx and XMC4xxx microcontrollers using Cortex-M processors. The code generation part is significantly improved. Besides the free DAVE development software, a DAVE SDK is a free development environment to set up its own ""Apps"" for DAVE. Details (downloads, getting started, tutorials, etc.) can be found on the website. After starting DAVE, an Eclipse environment appears. In the project browser, a standard C/C++ or a DAVE project can be set up by selecting one of the available processors of Infineon. The latter project setup allows the configuration of the selected MCU using a GUI-bas" https://en.wikipedia.org/wiki/List%20of%20tessellations," See also Uniform tiling Convex uniform honeycombs List of k-uniform tilings List of Euclidean uniform tilings Uniform tilings in hyperbolic plane Mathematics-related lists" https://en.wikipedia.org/wiki/Triple%20correlation,"The triple correlation of an ordinary function on the real line is the integral of the product of that function with two independently shifted copies of itself: The Fourier transform of triple correlation is the bispectrum. The triple correlation extends the concept of autocorrelation, which correlates a function with a single shifted copy of itself and thereby enhances its latent periodicities. History The theory of the triple correlation was first investigated by statisticians examining the cumulant structure of non-Gaussian random processes. It was also independently studied by physicists as a tool for spectroscopy of laser beams. 
Hideya Gamo in 1963 described an apparatus for measuring the triple correlation of a laser beam, and also showed how phase information can be recovered from the real part of the bispectrum—up to sign reversal and linear offset. However, Gamo's method implicitly requires the Fourier transform to never be zero at any frequency. This requirement was relaxed, and the class of functions which are known to be uniquely identified by their triple (and higher-order) correlations was considerably expanded, by the study of Yellott and Iverson (1992). Yellott & Iverson also pointed out the connection between triple correlations and the visual texture discrimination theory proposed by Bela Julesz. Applications Triple correlation methods are frequently used in signal processing for treating signals that are corrupted by additive white Gaussian noise; in particular, triple correlation techniques are suitable when multiple observations of the signal are available and the signal may be translating in between the observations, e.g.,a sequence of images of an object translating on a noisy background. What makes the triple correlation particularly useful for such tasks are three properties: (1) it is invariant under translation of the underlying signal; (2) it is unbiased in additive Gaussian noise; and (3) it retains nearly all of the relevant " https://en.wikipedia.org/wiki/List%20of%20partition%20topics,"Generally, a partition is a division of a whole into non-overlapping parts. Among the kinds of partitions considered in mathematics are partition of a set or an ordered partition of a set, partition of a graph, partition of an integer, partition of an interval, partition of unity, partition of a matrix; see block matrix, and partition of the sum of squares in statistics problems, especially in the analysis of variance, quotition and partition, two ways of viewing the operation of division of integers. Integer partitions Composition (number theory) Ewens's sampling formula Ferrers graph Glaisher's theorem Landau's function Partition function (number theory) Pentagonal number theorem Plane partition Quotition and partition Rank of a partition Crank of a partition Solid partition Young tableau Young's lattice Set partitions Bell number Bell polynomials Dobinski's formula Cumulant Data clustering Equivalence relation Exact cover Knuth's Algorithm X Dancing Links Exponential formula Faà di Bruno's formula Feshbach–Fano partitioning Foliation Frequency partition Graph partition Kernel of a function Lamination (topology) Matroid partitioning Multipartition Multiplicative partition Noncrossing partition Ordered partition of a set Partition calculus Partition function (quantum field theory) Partition function (statistical mechanics) Derivation of the partition function Partition of an interval Partition of a set Ordered partition Partition refinement Disjoint-set data structure Partition problem 3-partition problem Partition topology Quotition and partition Recursive partitioning Stirling number Stirling transform Stratification (mathematics) Tverberg partition Twelvefold way In probability and stochastic processes Chinese restaurant process Dobinski's formula Ewens's sampling formula Law of tota" https://en.wikipedia.org/wiki/Chronobiology,"Chronobiology is a field of biology that examines timing processes, including periodic (cyclic) phenomena in living organisms, such as their adaptation to solar- and lunar-related rhythms. These cycles are known as biological rhythms. 
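A discrete, circular analogue of the triple correlation defined above, together with a quick check of the translation-invariance property mentioned in the applications paragraph (an illustrative NumPy sketch, written for clarity rather than speed).

# Discrete circular triple correlation:
#   c3[t1, t2] = sum_t f[t] * f[(t + t1) % N] * f[(t + t2) % N]
import numpy as np

def triple_correlation(f):
    f = np.asarray(f, dtype=float)
    n = len(f)
    c3 = np.empty((n, n))
    for t1 in range(n):
        for t2 in range(n):
            c3[t1, t2] = np.sum(f * np.roll(f, -t1) * np.roll(f, -t2))
    return c3

rng = np.random.default_rng(1)
f = rng.standard_normal(16)
shifted = np.roll(f, 5)                                   # translated copy
print(np.allclose(triple_correlation(f), triple_correlation(shifted)))   # True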
Chronobiology comes from the ancient Greek χρόνος (chrónos, meaning ""time""), and biology, which pertains to the study, or science, of life. The related terms chronomics and chronome have been used in some cases to describe either the molecular mechanisms involved in chronobiological phenomena or the more quantitative aspects of chronobiology, particularly where comparison of cycles between organisms is required. Chronobiological studies include but are not limited to comparative anatomy, physiology, genetics, molecular biology and behavior of organisms related to their biological rhythms. Other aspects include epigenetics, development, reproduction, ecology and evolution. The subject Chronobiology studies variations of the timing and duration of biological activity in living organisms which occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms). The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning ""around"" and dies, ""day"", meaning ""approximately a day."" It is regulated by circadian clocks. The circadian rhythm can further be broken down into routine cycles during the 24-hour day: Diurnal, which describes organisms active during daytime Nocturnal, which describes organisms active in the night Crepuscular, which describes animals primarily ac" https://en.wikipedia.org/wiki/Jetronic,"Jetronic is a trade name of a manifold injection technology for automotive petrol engines, developed and marketed by Robert Bosch GmbH from the 1960s onwards. Bosch licensed the concept to many automobile manufacturers. There are several variations of the technology offering technological development and refinement. D-Jetronic (1967–1979) Analogue fuel injection, 'D' is from meaning pressure. Inlet manifold vacuum is measured using a pressure sensor located in, or connected to the intake manifold, in order to calculate the duration of fuel injection pulses. Originally, this system was called Jetronic, but the name D-Jetronic was later created as a retronym to distinguish it from subsequent Jetronic iterations. D-Jetronic was essentially a further refinement of the Electrojector fuel delivery system developed by the Bendix Corporation in the late 1950s. Rather than choosing to eradicate the various reliability issues with the Electrojector system, Bendix instead licensed the design to Bosch. With the role of the Bendix system being largely forgotten D-Jetronic became known as the first widely successful precursor of modern electronic common rail systems; it had constant pressure fuel delivery to the injectors and pulsed injections, albeit grouped (2 groups of injectors pulsed together) rather than sequential (individual injector pulses) as on later systems. As in the Electrojector system, D-Jetronic used analogue circuitry, with no microprocessor nor digital logic, the ECU used about 25 transistors to perform all of the processing. 
Two important factors that led to the ultimate failure of the Electrojector system: the use of paper-wrapped capacitors unsuited to heat-cycling and amplitude modulation (tv/ham radio) signals to control the injectors were superseded. The still present lack of processing power and the unavailability of solid-state sensors meant that the vacuum sensor was a rather expensive precision instrument, rather like a barometer, with brass bello" https://en.wikipedia.org/wiki/Straightedge,"A straightedge or straight edge is a tool used for drawing straight lines, or checking their straightness. If it has equally spaced markings along its length, it is usually called a ruler. Straightedges are used in the automotive service and machining industry to check the flatness of machined mating surfaces. They are also used in the decorating industry for cutting and hanging wallpaper. True straightness can in some cases be checked by using a laser line level as an optical straightedge: it can illuminate an accurately straight line on a flat surface such as the edge of a plank or shelf. A pair of straightedges called winding sticks are used in woodworking to make warping easier to perceive in pieces of wood. Three straight edges can be used to test and calibrate themselves to a certain extent, however this procedure does not control twist. For accurate calibration of a straight edge, a surface plate must be used. Compass-and-straightedge construction An idealized straightedge is used in compass-and-straightedge constructions in plane geometry. It may be used: Given two points, to draw the line connecting them Given a point and a circle, to draw either tangent Given two circles, to draw any of their common tangents Or any of the other numerous geometric constructions The idealized straightedge is: Infinitely long Infinitesimally thin (i.e. point width) Always assumed to be without graduations or marks, or the ability to mark Able to be aligned to two points with infinite precision to draw a line through them It may not be marked or used together with the compass so as to transfer the length of one segment to another. It is possible to do all compass and straightedge constructions without the straightedge. That is, it is possible, using only a compass, to find the intersection of two lines given two points on each, and to find the tangent points to circles. It is not, however, possible to do all constructions using only a straightedge. It is pos" https://en.wikipedia.org/wiki/Isothermal%20microcalorimetry,"Isothermal microcalorimetry (IMC) is a laboratory method for real-time monitoring and dynamic analysis of chemical, physical and biological processes. Over a period of hours or days, IMC determines the onset, rate, extent and energetics of such processes for specimens in small ampoules (e.g. 3–20 ml) at a constant set temperature (c. 15 °C–150 °C). IMC accomplishes this dynamic analysis by measuring and recording vs. elapsed time the net rate of heat flow (μJ/s = μW) to or from the specimen ampoule, and the cumulative amount of heat (J) consumed or produced. IMC is a powerful and versatile analytical tool for four closely related reasons: All chemical and physical processes are either exothermic or endothermic—produce or consume heat. The rate of heat flow is proportional to the rate of the process taking place. IMC is sensitive enough to detect and follow either slow processes (reactions proceeding at a few % per year) in a few grams of material, or processes which generate minuscule amounts of heat (e.g. 
metabolism of a few thousand living cells). IMC instruments generally have a huge dynamic range—heat flows as low as ca. 1 μW and as high as ca. 50,000 μW can be measured by the same instrument. The IMC method of studying rates of processes is thus broadly applicable, provides real-time continuous data, and is sensitive. The measurement is simple to make, takes place unattended and is non-interfering (e.g. no fluorescent or radioactive markers are needed). However, there are two main caveats that must be heeded in use of IMC: Missed data: If externally prepared specimen ampoules are used, it takes ca. 40 minutes to slowly introduce an ampoule into the instrument without significant disturbance of the set temperature in the measurement module. Thus any processes taking place during this time are not monitored. Extraneous data: IMC records the aggregate net heat flow produced or consumed by all processes taking place within an ampoule. Therefore, in order " https://en.wikipedia.org/wiki/SolidRun,"SolidRun is an Israeli company producing Embedded systems components, mainly mini computers, Single-board computers and computer-on-module devices. It is specially known for the CuBox family of mini-computers, and for producing motherboards and processing components such as the HummingBoard motherboard. Situated in Acre, Israel, SolidRun develops and manufactures products aimed both for the private entertainment sector, and for companies developing processor based products, notably components of ""Internet of Things"" technology systems. Within the scope of the IoT technology, SolidRun's mini computers are aimed to cover the intermediate sphere, between sensors and user devices, and between the larger network or Cloud framework. Within such a network, mini computers or system-on-module devices, act as mediators gathering and processing information from sensors or user devices and communicating with the network - this is also known as Edge computing. History SolidRun was founded in 2010 by co-founders Rabeeh Khoury (formally an engineer at Marvell Technology Group) and Kossay Omary. The goal of SolidRun has been to develop, produce and market components aimed for integration with IoT systems. The company today is situated in Acre in the Northern District of Israel, and headed by Dr. Atai Ziv (CEO). The major product development line aimed at the consumer market is the CuBox family of mini-computers. The first of which was announced in December 2011, followed by the development of the CuBox-i series, announced in November 2013. The most recent addition to the CuBox line has been the CuBoxTV (announced in December 2014), which has been marketed primarily for the home entertainment market. A further primary product developed by SolidRun is the Hummingboard, an uncased single-board computer, marketed to developers as an integrated processing component. SolidRun develops all of its products using Open-source software (such as Linux and OpenELEC), identifying itself a" https://en.wikipedia.org/wiki/Cryptomorphism,"In mathematics, two objects, especially systems of axioms or semantics for them, are called cryptomorphic if they are equivalent but not obviously equivalent. In particular, two definitions or axiomatizations of the same object are ""cryptomorphic"" if it is not obvious that they define the same object. 
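As a concrete, standard illustration of such hidden equivalence (an example supplied here, not taken from the text), the following LaTeX fragment states two cryptomorphic axiomatizations of a matroid on a finite ground set, one through independent sets and one through a rank function; that they define the same objects is true but not obvious at a glance.

```latex
% Two cryptomorphic axiomatizations of a matroid on a finite ground set E.

% (1) Independent sets: a family $\mathcal{I} \subseteq 2^E$ such that
% (I1) $\emptyset \in \mathcal{I}$;
% (I2) if $A \in \mathcal{I}$ and $B \subseteq A$, then $B \in \mathcal{I}$;
% (I3) if $A, B \in \mathcal{I}$ and $|A| > |B|$, then there is some
%      $x \in A \setminus B$ with $B \cup \{x\} \in \mathcal{I}$.

% (2) Rank function: $r : 2^E \to \{0, 1, 2, \dots\}$ such that
% (R1) $0 \le r(A) \le |A|$;
% (R2) $A \subseteq B$ implies $r(A) \le r(B)$;
% (R3) $r(A \cup B) + r(A \cap B) \le r(A) + r(B)$ (submodularity).

% The two structures determine each other via
\[
  r(A) = \max\{\, |X| : X \subseteq A,\ X \in \mathcal{I} \,\},
  \qquad
  \mathcal{I} = \{\, A \subseteq E : r(A) = |A| \,\}.
\]
```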
Examples of cryptomorphic definitions abound in matroid theory and others can be found elsewhere, e.g., in group theory the definition of a group by a single operation of division, which is not obviously equivalent to the usual three ""operations"" of identity element, inverse, and multiplication. This word is a play on the many morphisms in mathematics, but ""cryptomorphism"" is only very distantly related to ""isomorphism"", ""homomorphism"", or other ""morphisms"". The equivalence in a cryptomorphism, if it is not actual identity, may be informal, or may be formalized in terms of a bijection or an equivalence of categories between the mathematical objects defined by the two cryptomorphic axiom systems. Etymology The word was coined by Garrett Birkhoff before 1967, for use in the third edition of his book Lattice Theory. Birkhoff did not give it a formal definition, though others working in the field have made some attempts since. Use in matroid theory Its informal sense was popularized (and greatly expanded in scope) by Gian-Carlo Rota in the context of matroid theory: there are dozens of equivalent axiomatic approaches to matroids, but two different systems of axioms often look very different. In his 1997 book Indiscrete Thoughts, Rota describes the situation as follows: Though there are many cryptomorphic concepts in mathematics outside of matroid theory and universal algebra, the word has not caught on among mathematicians generally. It is, however, in fairly wide use among researchers in matroid theory. See also Combinatorial class, an equivalence among combinatorial enumeration problems hinting at the existence of a cryptomorphism" https://en.wikipedia.org/wiki/Magic%20angle,"The magic angle is a precisely defined angle, the value of which is approximately 54.7356°. The magic angle is a root of the second-order Legendre polynomial, P2(cos θ) = (3 cos²θ − 1)/2, and so any interaction which depends on this second-order Legendre polynomial vanishes at the magic angle. This property makes the magic angle of particular importance in magic angle spinning solid-state NMR spectroscopy. In magnetic resonance imaging, structures with ordered collagen, such as tendons and ligaments, oriented at the magic angle may appear hyperintense in some sequences; this is called the magic angle artifact or effect. Mathematical definition The magic angle θm is θm = arccos(1/√3) = arctan(√2) ≈ 54.7356°, where arccos and arctan are the inverse cosine and tangent functions respectively. θm is the angle between the space diagonal of a cube and any of its three connecting edges, see image. Another representation of the magic angle is half of the opening angle formed when a cube is rotated about its space diagonal axis, which may be represented as arccos(−1/3) or 2 arctan(√2) radians ≈ 109.4712°. This double magic angle is directly related to tetrahedral molecular geometry and is the angle between two vertices and the exact center of a tetrahedron (i.e., the edge central angle, also known as the tetrahedral angle). Magic angle and nuclear magnetic resonance In nuclear magnetic resonance (NMR) spectroscopy, three prominent nuclear magnetic interactions, dipolar coupling, chemical shift anisotropy (CSA), and first-order quadrupolar coupling, depend on the orientation of the interaction tensor with the external magnetic field.
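The numerical values quoted above can be verified directly; a minimal sketch using only the Python standard library computes the magic angle as arccos(1/√3), checks that the second-order Legendre polynomial vanishes there, and reproduces the tetrahedral double angle.

```python
import math

# Magic angle: the root of the second-order Legendre polynomial
# P2(cos(theta)) = (3*cos(theta)**2 - 1) / 2.
theta_m = math.acos(1.0 / math.sqrt(3.0))

print(math.degrees(theta_m))                      # ~54.7356 degrees
print(math.degrees(math.atan(math.sqrt(2.0))))    # same angle via arctan(sqrt(2))

p2 = (3.0 * math.cos(theta_m) ** 2 - 1.0) / 2.0
print(abs(p2) < 1e-12)                            # True: P2 vanishes at the magic angle

# The double magic angle (tetrahedral angle) quoted in the article:
print(math.degrees(math.acos(-1.0 / 3.0)))        # ~109.4712 degrees
print(math.degrees(2.0 * math.atan(math.sqrt(2.0))))
```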
By spinning the sample around a given axis, their average angular dependence becomes: where θ is the angle between the principal axis of the interaction and the magnetic field, θr is the angle of the axis of rotation relative to the magnetic field and β is the (arbitrary) angle between the axis of rotation and principal axis of the interaction. For dipolar couplings, the principal axis corresponds to the internucl" https://en.wikipedia.org/wiki/Colony%20picker,"A colony picker is an instrument used to automatically identify microbial colonies growing on a solid medium, pick them and duplicate them either onto solid or liquid media. It is used in research laboratories as well as in industrial environments such as food testing and in microbiological cultures. Uses In food safety and in clinical diagnosis colony picking is used to isolate individual colonies for identification. Colony pickers automate this procedure, saving costs and personnel and reducing human error. In the drug discovery process they are used for screening purposes by picking thousands of microbial colonies and transferring them for further testing. Other uses include cloning procedures and DNA sequencing. as add-on Colony pickers are sold either as stand-alone instruments or as add-ons to liquid handling robots, using the robot as the actuator and adding a camera and image analysis capabilities. This strategy lowers the price of the system considerably and adds reusability as the robot can still be used for other purposes." https://en.wikipedia.org/wiki/C-slowing,"C-slow retiming is a technique used in conjunction with retiming to improve throughput of a digital circuit. Each register in a circuit is replaced by a set of C registers (in series). This creates a circuit with C independent threads, as if the new circuit contained C copies of the original circuit. A single computation of the original circuit takes C times as many clock cycles to compute in the new circuit. C-slowing by itself increases latency, but throughput remains the same. Increasing the number of registers allows optimization of the circuit through retiming to reduce the clock period of the circuit. In the best case, the clock period can be reduced by a factor of C. Reducing the clock period of the circuit reduces latency and increases throughput. Thus, for computations that can be multi-threaded, combining C-slowing with retiming can increase the throughput of the circuit, with little, or in the best case, no increase in latency. Since registers are relatively plentiful in FPGAs, this technique is typically applied to circuits implemented with FPGAs. See also Pipelining Barrel processor Resources PipeRoute: A Pipelining-Aware Router for Reconfigurable Architectures Simple Symmetric Multithreading in Xilinx FPGAs Post Placement C-Slow Retiming for Xilinx Virtex (.ppt) Post Placement C-Slow Retiming for Xilinx Virtex (.pdf) Exploration of RaPiD-style Pipelined FPGA Interconnects Time and Area Efficient Pattern Matching on FPGAs Gate arrays" https://en.wikipedia.org/wiki/List%20of%20long%20mathematical%20proofs,"This is a list of unusually long mathematical proofs. Such proofs often use computational proof methods and may be considered non-surveyable. , the longest mathematical proof, measured by number of published journal pages, is the classification of finite simple groups with well over 10000 pages. There are several proofs that would be far longer than this if the details of the computer calculations they depend on were published in full. 
Long proofs The length of unusually long proofs has increased with time. As a rough rule of thumb, 100 pages in 1900, or 200 pages in 1950, or 500 pages in 2000 is unusually long for a proof. 1799 The Abel–Ruffini theorem was nearly proved by Paolo Ruffini, but his proof, spanning 500 pages, was mostly ignored and later, in 1824, Niels Henrik Abel published a proof that required just six pages. 1890 Killing's classification of simple complex Lie algebras, including his discovery of the exceptional Lie algebras, took 180 pages in 4 papers. 1894 The ruler-and-compass construction of a polygon of 65537 sides by Johann Gustav Hermes took over 200 pages. 1905 Emanuel Lasker's original proof of the Lasker–Noether theorem took 98 pages, but has since been simplified: modern proofs are less than a page long. 1963 Odd order theorem by Feit and Thompson was 255 pages long, which at the time was over 10 times as long as what had previously been considered a long paper in group theory. 1964 Resolution of singularities. Hironaka's original proof was 216 pages long; it has since been simplified considerably down to about 10 or 20 pages. 1966 Abyhankar's proof of resolution of singularities for 3-folds in characteristic greater than 6 covered about 500 pages in several papers. In 2009, Cutkosky simplified this to about 40 pages. 1966 Discrete series representations of Lie groups. Harish-Chandra's construction of these involved a long series of papers totaling around 500 pages. His later work on the Plancherel theorem for semisimple groups added a" https://en.wikipedia.org/wiki/Radio-frequency%20engineering,"Radio-frequency (RF) engineering is a subset of electronic engineering involving the application of transmission line, waveguide, antenna and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz. It is incorporated into almost everything that transmits or receives a radio wave, which includes, but is not limited to, mobile phones, radios, WiFi, and two-way radios. RF engineering is a highly specialized field that typically includes the following areas of expertise: Design of antenna systems to provide radiative coverage of a specified geographical area by an electromagnetic field or to provide specified sensitivity to an electromagnetic field impinging on the antenna. Design of coupling and transmission line structures to transport RF energy without radiation. Application of circuit elements and transmission line structures in the design of oscillators, amplifiers, mixers, detectors, combiners, filters, impedance transforming networks and other devices. Verification and measurement of performance of radio frequency devices and systems. To produce quality results, the RF engineer needs to have an in-depth knowledge of mathematics, physics and general electronics theory as well as specialized training in areas such as wave propagation, impedance transformations, filters and microstrip printed circuit board design. Radio electronics Radio electronics is concerned with electronic circuits which receive or transmit radio signals. Typically, such circuits must operate at radio frequency and power levels, which imposes special constraints on their design. These constraints increase in their importance with higher frequencies. At microwave frequencies, the reactance of signal traces becomes a crucial part of the physical layout of the circuit. 
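To put a number on why trace reactance dominates layout at microwave frequencies, the sketch below evaluates X_L = 2πfL and X_C = 1/(2πfC) for an assumed 1 nH of parasitic trace inductance and 1 pF of stray capacitance; the component values and frequencies are illustrative choices, not figures from the text.

```python
import math

def inductive_reactance(freq_hz, inductance_h):
    """X_L = 2*pi*f*L, in ohms."""
    return 2.0 * math.pi * freq_hz * inductance_h

def capacitive_reactance(freq_hz, capacitance_f):
    """X_C = 1 / (2*pi*f*C), in ohms."""
    return 1.0 / (2.0 * math.pi * freq_hz * capacitance_f)

L_TRACE = 1e-9   # 1 nH: rough order of magnitude for a short PCB trace (assumed)
C_STRAY = 1e-12  # 1 pF of stray capacitance (assumed)

for f in (1e3, 100e6, 10e9):   # 1 kHz, 100 MHz, 10 GHz
    print(f"f = {f:.0e} Hz:  X_L = {inductive_reactance(f, L_TRACE):.3g} ohm, "
          f"X_C = {capacitive_reactance(f, C_STRAY):.3g} ohm")
```

At 1 kHz the nanohenry is effectively invisible, while at 10 GHz it already presents tens of ohms, comparable to typical transmission-line impedances, so the physical layout itself becomes part of the circuit.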
List of radio electronics topics: RF oscillators: Phase-locked loop, voltage-controlled oscillator Tr" https://en.wikipedia.org/wiki/Multiplication%20theorem,"In mathematics, the multiplication theorem is a certain type of identity obeyed by many special functions related to the gamma function. For the explicit case of the gamma function, the identity is a product of values; thus the name. The various relations all stem from the same underlying principle; that is, the relation for one special function can be derived from that for the others, and is simply a manifestation of the same identity in different guises. Finite characteristic The multiplication theorem takes two common forms. In the first case, a finite number of terms are added or multiplied to give the relation. In the second case, an infinite number of terms are added or multiplied. The finite form typically occurs only for the gamma and related functions, for which the identity follows from a p-adic relation over a finite field. For example, the multiplication theorem for the gamma function follows from the Chowla–Selberg formula, which follows from the theory of complex multiplication. The infinite sums are much more common, and follow from characteristic zero relations on the hypergeometric series. The following tabulates the various appearances of the multiplication theorem for finite characteristic; the characteristic zero relations are given further down. In all cases, n and k are non-negative integers. For the special case of n = 2, the theorem is commonly referred to as the duplication formula. Gamma function–Legendre formula The duplication formula and the multiplication theorem for the gamma function are the prototypical examples. The duplication formula for the gamma function is It is also called the Legendre duplication formula or Legendre relation, in honor of Adrien-Marie Legendre. The multiplication theorem is for integer k ≥ 1, and is sometimes called Gauss's multiplication formula, in honour of Carl Friedrich Gauss. The multiplication theorem for the gamma functions can be understood to be a special case, for the trivial Dirichlet charac" https://en.wikipedia.org/wiki/Integrated%20stress%20response,"The integrated stress response is a cellular stress response conserved in eukaryotic cells that downregulates protein synthesis and upregulates specific genes in response to internal or environmental stresses. Background The integrated stress response can be triggered within a cell due to either extrinsic or intrinsic conditions. Extrinsic factors include hypoxia, amino acid deprivation, glucose deprivation, viral infection and presence of oxidants. The main intrinsic factor is endoplasmic reticulum stress due to the accumulation of unfolded proteins. It has also been observed that the integrated stress response may trigger due to oncogene activation. The integrated stress response will either cause the expression of genes that fix the damage in the cell due to the stressful conditions, or it will cause a cascade of events leading to apoptosis, which occurs when the cell cannot be brought back into homeostasis. eIF2 protein complex Stress signals can cause protein kinases, known as EIF-2 kinases, to phosphorylate the α subunit of a protein complex called translation initiation factor 2 (eIF2), resulting in the gene ATF4 being turned on, which will further affect gene expression. eIF2 consists of three subunits: eIF2α, eIF2β and eIF2γ. eIF2α contains two binding sites, one for phosphorylation and one for RNA binding. 
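Returning to the gamma-function identities discussed in the multiplication-theorem passage above, the duplication and multiplication formulas in their standard statements, Γ(z)Γ(z + 1/2) = 2^(1−2z) √π Γ(2z) and ∏_{n=0}^{k−1} Γ(z + n/k) = (2π)^((k−1)/2) k^(1/2−kz) Γ(kz), can be checked numerically. A small sketch, with arbitrary positive test points, follows.

```python
import math

def legendre_duplication_gap(z):
    """Relative gap in Gamma(z) * Gamma(z + 1/2) = 2**(1 - 2z) * sqrt(pi) * Gamma(2z)."""
    lhs = math.gamma(z) * math.gamma(z + 0.5)
    rhs = 2.0 ** (1.0 - 2.0 * z) * math.sqrt(math.pi) * math.gamma(2.0 * z)
    return abs(lhs - rhs) / abs(rhs)

def gauss_multiplication_gap(z, k):
    """Relative gap in prod_{n=0}^{k-1} Gamma(z + n/k) = (2*pi)**((k-1)/2) * k**(1/2 - k*z) * Gamma(k*z)."""
    lhs = math.prod(math.gamma(z + n / k) for n in range(k))
    rhs = (2.0 * math.pi) ** ((k - 1) / 2.0) * k ** (0.5 - k * z) * math.gamma(k * z)
    return abs(lhs - rhs) / abs(rhs)

for z in (0.3, 1.7, 4.25):                       # arbitrary test points
    print(legendre_duplication_gap(z))           # ~1e-16, i.e. equality up to rounding
    print(gauss_multiplication_gap(z, k=3))      # the k = 3 multiplication formula
```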
The kinases work to phosphorylate serine 51 on the α subunit, which is a reversible action. In a cell experiencing normal conditions, eIF2 aids in the initiation of mRNA translation and recognizing the AUG start codon. However, once eIF2α is phosphorylated, the complex’s activity reduces, causing reduction in translation initiation and protein synthesis, while promoting expression of the ATF4 gene. Protein kinases There are four known mammalian protein kinases that phosphorylate eIF2α, including PKR-like ER kinase (PERK, EIF2AK3), heme-regulated eIF2α kinase (HRI, EIF2AK1), general control non-depressible 2 (GCN2, EIF2AK4) and double stranded RNA" https://en.wikipedia.org/wiki/CHMOS,"CHMOS refers to one of a series of Intel CMOS processes developed from their HMOS process. CHMOS stands for ""complementary high-performance metal-oxide-silicon. It was first developed in 1981. CHMOS was used in the Intel 80C51BH, a new version of their standard MCS-51 microcontroller. The chip was also used in later versions of Intel 8086, and the 80C88, which were fully static version of the Intel 8088. The Intel 80386 was made in 1.5 µm CHMOS III, and later in 1.0 µm CHMOS IV. CHMOS III used 1.5 micron lithography, p-well processing, n-well processing, and two layers of metal. CHMOS III-E used for the 12.5 MHz Intel 80C186 microprocessor. This technology uses 1 µm process for the EPROM. CHMOS IV (H stands for High Speed) used 1.0 µm lithography. Many versions of the Intel 80486 were made in 1.0 µm CHMOS IV. Intel uses this technology on these 80C186EB and 80C188EB embedded processors. CHMOS V used 0.8 µm lithography and 3 metal layers, and was used in later versions of the 80386, 80486, and i860. See also Depletion-load NMOS logic#Further development" https://en.wikipedia.org/wiki/Josiah%20Willard%20Gibbs%20Lectureship,"The Josiah Willard Gibbs Lectureship (also called the Gibbs Lecture) of the American Mathematical Society is an annually awarded mathematical prize, named in honor of Josiah Willard Gibbs. The prize is intended not only for mathematicians, but also for physicists, chemists, biologists, physicians, and other scientists who have made important applications of mathematics. The purpose of the prize is to recognize outstanding achievement in applied mathematics and ""to enable the public and the academic community to become aware of the contribution that mathematics is making to present-day thinking and to modern civilization."" The prize winner gives a lecture, which is subsequently published in the Bulletin of the American Mathematical Society. Prize winners See also Colloquium Lectures (AMS) List of mathematics awards" https://en.wikipedia.org/wiki/Glossary%20of%20electrical%20and%20electronics%20engineering,"This glossary of electrical and electronics engineering is a list of definitions of terms and concepts related specifically to electrical engineering and electronics engineering. For terms related to engineering in general, see Glossary of engineering. A B C D E F G H I J K L M N O P Q R S T U V W X Y Z See also Glossary of engineering Glossary of civil engineering Glossary of mechanical engineering Glossary of structural engineering" https://en.wikipedia.org/wiki/Uniform%20tilings%20in%20hyperbolic%20plane,"In hyperbolic geometry, a uniform hyperbolic tiling (or regular, quasiregular or semiregular hyperbolic tiling) is an edge-to-edge filling of the hyperbolic plane which has regular polygons as faces and is vertex-transitive (transitive on its vertices, isogonal, i.e. 
there is an isometry mapping any vertex onto any other). It follows that all vertices are congruent, and the tiling has a high degree of rotational and translational symmetry. Uniform tilings can be identified by their vertex configuration, a sequence of numbers representing the number of sides of the polygons around each vertex. For example, 7.7.7 represents the heptagonal tiling which has 3 heptagons around each vertex. It is also regular since all the polygons are the same size, so it can also be given the Schläfli symbol {7,3}. Uniform tilings may be regular (if also face- and edge-transitive), quasi-regular (if edge-transitive but not face-transitive) or semi-regular (if neither edge- nor face-transitive). For right triangles (p q 2), there are two regular tilings, represented by Schläfli symbol {p,q} and {q,p}. Wythoff construction There are an infinite number of uniform tilings based on the Schwarz triangles (p q r) where  +  +  < 1, where p, q, r are each orders of reflection symmetry at three points of the fundamental domain triangle – the symmetry group is a hyperbolic triangle group. Each symmetry family contains 7 uniform tilings, defined by a Wythoff symbol or Coxeter-Dynkin diagram, 7 representing combinations of 3 active mirrors. An 8th represents an alternation operation, deleting alternate vertices from the highest form with all mirrors active. Families with r = 2 contain regular hyperbolic tilings, defined by a Coxeter group such as [7,3], [8,3], [9,3], ... [5,4], [6,4], .... Hyperbolic families with r = 3 or higher are given by (p q r) and include (4 3 3), (5 3 3), (6 3 3) ... (4 4 3), (5 4 3), ... (4 4 4).... Hyperbolic triangles (p q r) define compact uniform hyperbolic til" https://en.wikipedia.org/wiki/Titanium%20oxide,"Titanium oxide may refer to: Titanium dioxide (titanium(IV) oxide), TiO2 Titanium(II) oxide (titanium monoxide), TiO, a non-stoichiometric oxide Titanium(III) oxide (dititanium trioxide), Ti2O3 Ti3O Ti2O δ-TiOx (x= 0.68–0.75) TinO2n−1 where n ranges from 3–9 inclusive, e.g. Ti3O5, Ti4O7, etc. Reduced titanium oxides A common reduced titanium oxide is TiO, also known as titanium monoxide. It can be prepared from titanium dioxide and titanium metal at 1500 °C. Ti3O5, Ti4O7, and Ti5O9 are non-stoichiometric oxides. These compounds are typically formed at high temperatures in the presence of excess oxygen. As a result, they exhibit unique structural and electronic properties, and have been studied for their potential use in various applications, including in gas sensors, lithium-ion batteries, and photocatalysis." https://en.wikipedia.org/wiki/Frame-dragging,"Frame-dragging is an effect on spacetime, predicted by Albert Einstein's general theory of relativity, that is due to non-static stationary distributions of mass–energy. A stationary field is one that is in a steady state, but the masses causing that field may be non-static ⁠— rotating, for instance. More generally, the subject that deals with the effects caused by mass–energy currents is known as gravitoelectromagnetism, which is analogous to the magnetism of classical electromagnetism. The first frame-dragging effect was derived in 1918, in the framework of general relativity, by the Austrian physicists Josef Lense and Hans Thirring, and is also known as the Lense–Thirring effect. They predicted that the rotation of a massive object would distort the spacetime metric, making the orbit of a nearby test particle precess. 
This does not happen in Newtonian mechanics for which the gravitational field of a body depends only on its mass, not on its rotation. The Lense–Thirring effect is very small – about one part in a few trillion. To detect it, it is necessary to examine a very massive object, or build an instrument that is very sensitive. In 2015, new general-relativistic extensions of Newtonian rotation laws were formulated to describe geometric dragging of frames which incorporates a newly discovered antidragging effect. Effects Rotational frame-dragging (the Lense–Thirring effect) appears in the general principle of relativity and similar theories in the vicinity of rotating massive objects. Under the Lense–Thirring effect, the frame of reference in which a clock ticks the fastest is one which is revolving around the object as viewed by a distant observer. This also means that light traveling in the direction of rotation of the object will move past the massive object faster than light moving against the rotation, as seen by a distant observer. It is now the best known frame-dragging effect, partly thanks to the Gravity Probe B experiment. Qualitatively, frame-d" https://en.wikipedia.org/wiki/Coefficient,"In mathematics, a coefficient is a multiplicative factor involved in some term of a polynomial, a series, or an expression. It may be a number (dimensionless), in which case it is known as a numerical factor. It may also be a constant with units of measurement, in which it is known as a constant multiplier. In general, coefficients may be any expression (including variables such as , and ). When the combination of variables and constants is not necessarily involved in a product, it may be called a parameter. For example, the polynomial has coefficients 2, −1, and 3, and the powers of the variable in the polynomial have coefficient parameters , , and . The , also known as constant term or simply constant is the quantity not attached to variables in an expression. For example, the constant coefficients of the expressions above are the number 3 and the parameter c, respectively. The coefficient attached to the highest degree of the variable in a polynomial is referred to as the leading coefficient. For example, in the expressions above, the leading coefficients are 2 and a, respectively. In the context of differential equations, an equation can often be written as equating to zero a polynomial in the unknown functions and their derivatives. In this case, the coefficients of the differential equation are the coefficients of this polynomial, and are generally non-constant functions. A coefficient is a constant coefficient when it is a constant function. For avoiding confusion, the coefficient that is not attached to unknown functions and their derivative is generally called the constant term rather the constant coefficient. In particular, in a linear differential equation with constant coefficient, the constant term is generally not supposed to be a constant function. Terminology and definition In mathematics, a coefficient is a multiplicative factor in some term of a polynomial, a series, or any expression. For example, in the polynomial with variables an" https://en.wikipedia.org/wiki/Waru%20Waru,"Waru Waru is an Aymara term for the agricultural technique developed by pre-Hispanic people in the Andes region of South America from Ecuador to Bolivia; this regional agricultural technique is also referred to as camellones in Spanish. 
Functionally similar agricultural techniques have been developed in other parts of the world, all of which fall under the broad category of raised field agriculture. This type of altiplano field agriculture consists of parallel canals alternated by raised planting beds, which would be strategically located on floodplains or near a water source so that the fields could be properly irrigated. These flooded fields were composed of soil that was rich in nutrients due to the presence of aquatic plants and other organic materials. Through the process of mounding up this soil to create planting beds, natural, recyclable fertilizer was made available in a region where nitrogen-rich soils were rare. By trapping solar radiation during the day, this raised field agricultural method also protected crops from freezing overnight. These raised planting beds were irrigated very efficiently by the adjacent canals which extended the growing season significantly, allowing for more food yield. Waru Waru were able to yield larger amounts of food than previous agricultural methods due to the overall efficiency of the system. This technique is dated to around 300 B.C., and is most commonly associated with the Tiwanaku culture of the Lake Titicaca region in southern Bolivia, who used this method to grow crops like potatoes and quinoa. This type of agriculture also created artificial ecosystems, which attracted other food sources such as fish and lake birds. Past cultures in the Lake Titicaca region likely utilized these additional resources as a subsistence method. It combines raised beds with irrigation channels to prevent damage by soil erosion during floods. These fields ensure both collecting of water (either fluvial water, rainwater or phreatic" https://en.wikipedia.org/wiki/Ichnotaxon,"An ichnotaxon (plural ichnotaxa) is ""a taxon based on the fossilized work of an organism"", i.e. the non-human equivalent of an artifact. Ichnotaxa comes from the Greek , ichnos meaning track and , taxis meaning ordering. Ichnotaxa are names used to identify and distinguish morphologically distinctive ichnofossils, more commonly known as trace fossils. They are assigned genus and species ranks by ichnologists, much like organisms in Linnaean taxonomy. These are known as ichnogenera and ichnospecies, respectively. ""Ichnogenus"" and ""ichnospecies"" are commonly abbreviated as ""igen."" and ""isp."". The binomial names of ichnospecies and their genera are to be written in italics. Most researchers classify trace fossils only as far as the ichnogenus rank, based upon trace fossils that resemble each other in morphology but have subtle differences. Some authors have constructed detailed hierarchies up to ichnosuperclass, recognizing such fine detail as to identify ichnosuperorder and ichnoinfraclass, but such attempts are controversial. Naming Due to the chaotic nature of trace fossil classification, several ichnogenera hold names normally affiliated with animal body fossils or plant fossils. For example, many ichnogenera are named with the suffix -phycus due to misidentification as algae. Edward Hitchcock was the first to use the now common -ichnus suffix in 1858, with Cochlichnus. History Due to trace fossils' history of being difficult to classify, there have been several attempts to enforce consistency in the naming of ichnotaxa. In 1961, the International Commission on Zoological Nomenclature ruled that most trace fossil taxa named after 1930 would be no longer available. 
See also Bird ichnology Trace fossil classification Glossary of scientific naming" https://en.wikipedia.org/wiki/Recurrence%20period%20density%20entropy,"Recurrence period density entropy (RPDE) is a method, in the fields of dynamical systems, stochastic processes, and time series analysis, for determining the periodicity, or repetitiveness of a signal. Overview Recurrence period density entropy is useful for characterising the extent to which a time series repeats the same sequence, and is therefore similar to linear autocorrelation and time delayed mutual information, except that it measures repetitiveness in the phase space of the system, and is thus a more reliable measure based upon the dynamics of the underlying system that generated the signal. It has the advantage that it does not require the assumptions of linearity, Gaussianity or dynamical determinism. It has been successfully used to detect abnormalities in biomedical contexts such as speech signal. The RPDE value is a scalar in the range zero to one. For purely periodic signals, , whereas for purely i.i.d., uniform white noise, . Method description The RPDE method first requires the embedding of a time series in phase space, which, according to stochastic extensions to Taken's embedding theorems, can be carried out by forming time-delayed vectors: for each value xn in the time series, where M is the embedding dimension, and τ is the embedding delay. These parameters are obtained by systematic search for the optimal set (due to lack of practical embedding parameter techniques for stochastic systems) (Stark et al. 2003). Next, around each point in the phase space, an -neighbourhood (an m-dimensional ball with this radius) is formed, and every time the time series returns to this ball, after having left it, the time difference T between successive returns is recorded in a histogram. This histogram is normalised to sum to unity, to form an estimate of the recurrence period density function P(T). The normalised entropy of this density: is the RPDE value, where is the largest recurrence value (typically on the order of 1000 samples). Note that RPDE i" https://en.wikipedia.org/wiki/Carl%20Theodore%20Heisel,"Carl Theodore Heisel (1852–1937) was a mathematical crank who wrote several books in the 1930s challenging accepted mathematical truths. Among his claims is that he found a way to square the circle. He is credited with 24 works in 62 publications. Heisel did not charge money for his books; he gave thousands of them away for free. Because of this, they are available at many libraries and universities. Heisel's books have historic and monetary value. 
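The recurrence period density entropy procedure outlined in the passage above (time-delay embedding, ε-ball return times, a normalised histogram of return periods, and its normalised entropy) can be sketched in plain Python. This is a simplified illustration rather than the authors' reference implementation: the embedding parameters, the ball radius, the cut-off T_max and the brute-force neighbour search are assumed values chosen for readability.

```python
import math
import random

def embed(x, m, tau):
    """Time-delay embedding: vectors (x[n], x[n+tau], ..., x[n+(m-1)*tau])."""
    n_vectors = len(x) - (m - 1) * tau
    return [tuple(x[n + j * tau] for j in range(m)) for n in range(n_vectors)]

def rpde(x, m=4, tau=5, eps=0.2, t_max=200):
    """Simplified recurrence period density entropy of a 1-D time series."""
    points = embed(x, m, tau)
    periods = []
    for i, center in enumerate(points):
        inside = True                            # the trajectory starts inside its own ball
        for j in range(i + 1, min(i + t_max, len(points))):
            d = math.dist(points[j], center)
            if inside and d > eps:
                inside = False                   # the trajectory has left the ball
            elif not inside and d <= eps:
                periods.append(j - i)            # first return after having left
                break
    if not periods:
        return 0.0
    # Normalised histogram of return periods -> estimated density P(T).
    hist = [0] * (t_max + 1)
    for t in periods:
        hist[t] += 1
    total = sum(hist)
    p = [h / total for h in hist]
    entropy = -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return entropy / math.log(t_max)             # scaled to the range [0, 1]

sine = [math.sin(2 * math.pi * n / 25) for n in range(2000)]
noise = [random.uniform(-1, 1) for _ in range(2000)]
print(rpde(sine), rpde(noise))   # near 0 for the periodic signal, near 1 for the noise
```

A strongly periodic signal concentrates all its return periods in one histogram bin and so scores near 0, while an i.i.d. noise series spreads them out and scores near 1, matching the stated range of the measure.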
Paul Halmos referred to one of Heisel's works as a ""classic crank book."" Selected works" https://en.wikipedia.org/wiki/Index%20of%20cryptography%20articles,"Articles related to cryptography include: A A5/1 • A5/2 • ABA digital signature guidelines • ABC (stream cipher) • Abraham Sinkov • Acoustic cryptanalysis • Adaptive chosen-ciphertext attack • Adaptive chosen plaintext and chosen ciphertext attack • Advantage (cryptography) • ADFGVX cipher • Adi Shamir • Advanced Access Content System • Advanced Encryption Standard • Advanced Encryption Standard process • Adversary • AEAD block cipher modes of operation • Affine cipher • Agnes Meyer Driscoll • AKA (security) • Akelarre (cipher) • Alan Turing • Alastair Denniston • Al Bhed language • Alex Biryukov • Alfred Menezes • Algebraic Eraser • Algorithmically random sequence • Alice and Bob • All-or-nothing transform • Alphabetum Kaldeorum • Alternating step generator • American Cryptogram Association • AN/CYZ-10 • Anonymous publication • Anonymous remailer • Antoni Palluth • Anubis (cipher) • Argon2 • ARIA (cipher) • Arlington Hall • Arne Beurling • Arnold Cipher • Array controller based encryption • Arthur Scherbius • Arvid Gerhard Damm • Asiacrypt • Atbash • Attribute-based encryption • Attack model • Auguste Kerckhoffs • Authenticated encryption • Authentication • Authorization certificate • Autokey cipher • Avalanche effect B B-Dienst • Babington Plot • Baby-step giant-step • Bacon's cipher • Banburismus • Bart Preneel • BaseKing • BassOmatic • BATON • BB84 • Beale ciphers • BEAR and LION ciphers • Beaufort cipher • Beaumanor Hall • Bent function • Berlekamp–Massey algorithm • Bernstein v. United States • BestCrypt • Biclique attack • BID/60 • BID 770 • Bifid cipher • Bill Weisband • Binary Goppa code • Biometric word list • Birthday attack • Bit-flipping attack • BitTorrent protocol encryption • Biuro Szyfrów • Black Chamber • Blaise de Vigenère • Bletchley Park • Blind credential • Blinding (cryp" https://en.wikipedia.org/wiki/Coremark,"CoreMark is a benchmark that measures the performance of central processing units (CPU) used in embedded systems. It was developed in 2009 by Shay Gal-On at EEMBC and is intended to become an industry standard, replacing the Dhrystone benchmark. The code is written in C and contains implementations of the following algorithms: list processing (find and sort), matrix manipulation (common matrix operations), state machine (determine if an input stream contains valid numbers), and CRC. The code is under the Apache License 2.0 and is free of cost to use, but ownership is retained by the Consortium and publication of modified versions under the CoreMark name prohibited. Issues addressed by CoreMark The CRC algorithm serves a dual function; it provides a workload commonly seen in embedded applications and ensures correct operation of the CoreMark benchmark, essentially providing a self-checking mechanism. Specifically, to verify correct operation, a 16-bit CRC is performed on the data contained in elements of the linked list. To ensure compilers cannot pre-compute the results at compile time every operation in the benchmark derives a value that is not available at compile time. Furthermore, all code used within the timed portion of the benchmark is part of the benchmark itself (no library calls). CoreMark versus Dhrystone CoreMark draws on the strengths that made Dhrystone so resilient - it is small, portable, easy to understand, free, and displays a single number benchmark score. 
Unlike Dhrystone, CoreMark has specific run and reporting rules, and was designed to avoid the well understood issues that have been cited with Dhrystone. Major portions of Dhrystone are susceptible to a compiler’s ability to optimize the work away; thus it is more a compiler benchmark than a hardware benchmark. This also makes it very difficult to compare results when different compilers/flags are used. Library calls are made within the timed portion of Dhrystone. Typically, those library" https://en.wikipedia.org/wiki/Sensing%20of%20phage-triggered%20ion%20cascades,"Sensing of phage-triggered ion cascades (SEPTIC) is a prompt bacterium identification method based on fluctuation-enhanced sensing in fluid medium. The advantages of SEPTIC are the specificity and speed (needs only a few minutes) offered by the characteristics of phage infection, the sensitivity due to fluctuation-enhanced sensing, and durability originating from the robustness of phages. An idealistic SEPTIC device may be as small as a pen and maybe able to identify a library of different bacteria within a few minutes measurement window. The mechanism SEPTIC utilizes bacteriophages as indicators to trigger an ionic response by the bacteria during phage infection. Microscopic metal electrodes detect the random fluctuations of the electrochemical potential due to the stochastic fluctuations of the ionic concentration gradient caused by the phage infection of bacteria. The electrode pair in the electrolyte with different local ion concentrations at the vicinity of electrodes form an electrochemical cell that produces a voltage depending on the instantaneous ratio of local concentrations. While the concentrations are fluctuating, an alternating random voltage difference will appear between the electrodes. According to the experimental studies, whenever there is an ongoing phage infection, the power density spectrum of the measured electronic noise will have a noise spectrum while, without phage infection, it is a 1/f noise spectrum. In order to have a high sensitivity, a DC electrical field attracts the infected bacteria (which are charged due to ion imbalance) to the electrode with the relevant polarization. Advantages The advantages of SEPTIC are the specificity and speed (needs only a few minutes) offered by the characteristics of phage infection, the sensitivity due to fluctuation-enhanced sensing, and durability originating from the robustness of phages. An idealistic SEPTIC device may be as small as a pen and maybe able to identify a library of different b" https://en.wikipedia.org/wiki/Exterior%20calculus%20identities,"This article summarizes several identities in exterior calculus. Notation The following summarizes short definitions and notations that are used in this article. Manifold , are -dimensional smooth manifolds, where . That is, differentiable manifolds that can be differentiated enough times for the purposes on this page. , denote one point on each of the manifolds. The boundary of a manifold is a manifold , which has dimension . An orientation on induces an orientation on . We usually denote a submanifold by . Tangent and cotangent bundles , denote the tangent bundle and cotangent bundle, respectively, of the smooth manifold . , denote the tangent spaces of , at the points , , respectively. denotes the cotangent space of at the point . Sections of the tangent bundles, also known as vector fields, are typically denoted as such that at a point we have . 
Sections of the cotangent bundle, also known as differential 1-forms (or covector fields), are typically denoted as such that at a point we have . An alternative notation for is . Differential k-forms Differential -forms, which we refer to simply as -forms here, are differential forms defined on . We denote the set of all -forms as . For we usually write , , . -forms are just scalar functions on . denotes the constant -form equal to everywhere. Omitted elements of a sequence When we are given inputs and a -form we denote omission of the th entry by writing Exterior product The exterior product is also known as the wedge product. It is denoted by . The exterior product of a -form and an -form produce a -form . It can be written using the set of all permutations of such that as Directional derivative The directional derivative of a 0-form along a section is a 0-form denoted Exterior derivative The exterior derivative is defined for all . We generally omit the subscript when it is clear from the context. For a -form we have as the -form that gives the directi" https://en.wikipedia.org/wiki/Terminal%20%28electronics%29,"A terminal is the point at which a conductor from a component, device or network comes to an end. Terminal may also refer to an electrical connector at this endpoint, acting as the reusable interface to a conductor and creating a point where external circuits can be connected. A terminal may simply be the end of a wire or it may be fitted with a connector or fastener. In network analysis, terminal means a point at which connections can be made to a network in theory and does not necessarily refer to any physical object. In this context, especially in older documents, it is sometimes called a pole. On circuit diagrams, terminals for external connections are denoted by empty circles. They are distinguished from nodes or junctions which are entirely internal to the circuit, and are denoted by solid circles. All electrochemical cells have two terminals (electrodes) which are referred to as the anode and cathode or positive (+) and negative (-). On many dry batteries, the positive terminal (cathode) is a protruding metal cap and the negative terminal (anode) is a flat metal disc . In a galvanic cell such as a common AA battery, electrons flow from the negative terminal to the positive terminal, while the conventional current is opposite to this. Types of terminals Connectors Line splices Terminal strip, also known as a tag board or tag strip Solder cups or buckets Wire wrap connections (wire to board) Crimp terminals (ring, spade, fork, bullet, blade) Turret terminals for surface-mount circuits Crocodile clips Screw terminals and terminal blocks Wire nuts, a type of twist-on wire connector Leads on electronic components Battery terminals, often using screws or springs Electrical polarity See also Electrical connector - many terminals fall under this category Electrical termination - a method of signal conditioning" https://en.wikipedia.org/wiki/Structured%20ASIC%20platform,"Structured ASIC is an intermediate technology between ASIC and FPGA, offering high performance, a characteristic of ASIC, and low NRE cost, a characteristic of FPGA. Using Structured ASIC allows products to be introduced quickly to market, to have lower cost and to be designed with ease. In a FPGA, interconnects and logic blocks are programmable after fabrication, offering high flexibility of design and ease of debugging in prototyping. 
However, the capability of FPGAs to implement large circuits is limited, in both size and speed, due to complexity in programmable routing, and significant space occupied by programming elements, e.g. SRAMs, MUXes. On the other hand, ASIC design flow is expensive. Every different design needs a complete different set of masks. The Structured ASIC is a solution between these two. It has basically the same structure as a FPGA, but being mask-programmable instead of field-programmable, by configuring one or several via layers between metal layers. Every SRAM configuration bit can be replaced by a choice of putting a via or not between metal contacts. A number of commercial vendors have introduced structured ASIC products. They have a wide range of configurability, from a single via layer to 6 metal and 6 via layers. Altera's Hardcopy-II, eASIC's Nextreme are examples of commercial structured ASICs. See also Gate array Altera Corp - ""HardCopy II Structured ASICs"" eASIC Corp - ""Nextreme Structured ASIC""" https://en.wikipedia.org/wiki/Fluctuation-enhanced%20sensing,"Fluctuation-enhanced sensing (FES) is a specific type of chemical or biological sensing where the stochastic component, noise, of the sensor signal is analyzed. The stages following the sensor in a FES system typically contain filters and preamplifier(s) to extract and amplify the stochastic signal components, which are usually microscopic temporal fluctuations that are orders of magnitude weaker than the sensor signal. Then selected statistical properties of the amplified noise are analyzed, and a corresponding pattern is generated as the stochastic fingerprint of the sensed agent. Often the power density spectrum of the stochastic signal is used as output pattern however FES has been proven effective with more advanced methods, too, such as higher-order statistics. History During the 1990s, several authors (for example, Bruno Neri and coworkers, Peter Gottwald and Bela Szentpali) had proposed using the spectrum of measured noise to obtain information about ambient chemical conditions. However, the first systematic proposal for a generic electronic nose utilizing chemical sensors in FES mode, and the related mathematical analysis with experimental demonstration, were carried out only in 1999 by Laszlo B. Kish, Robert Vajtai and C.G. Granqvist at Uppsala University. The name ""fluctuation-enhanced sensing"" was created by John Audia (United States Navy), in 2001, after learning about the published scheme. In 2003, Alexander Vidybida from Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine has proven mathematically that adsorption–desorption fluctuations during odor primary reception can be used for improving selectivity. During the years, FES has been developed and demonstrated in many studies with various types of sensors and agents in chemical and biological systems. Bacteria have also been detected and identified by FES, either by their odor in air, or by the ""SEPTIC"" method in liquid phase. In the period of 2006–2009 Sig" https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29,"Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in hertz, and depending on context, may specifically refer to passband bandwidth or baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. 
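The definition just given can be illustrated with a short sketch (the band edges are made-up values): the passband bandwidth is the difference of the upper and lower cutoff frequencies, and dividing it by the band's centre frequency gives the fractional bandwidth, which shrinks as the same absolute width is moved up the spectrum.

```python
def passband_bandwidth(f_low_hz, f_high_hz):
    """Passband bandwidth: difference of the upper and lower cutoff frequencies."""
    return f_high_hz - f_low_hz

def fractional_bandwidth(f_low_hz, f_high_hz):
    """Bandwidth divided by the arithmetic centre frequency of the band."""
    return passband_bandwidth(f_low_hz, f_high_hz) / ((f_low_hz + f_high_hz) / 2.0)

# The same 3 kHz-wide channel placed at baseband and around a 10 MHz carrier:
print(passband_bandwidth(300.0, 3_300.0))                 # 3000.0 Hz
print(fractional_bandwidth(300.0, 3_300.0))               # ~1.67 (very wide fractionally)
print(passband_bandwidth(10_000_000.0, 10_003_000.0))     # 3000.0 Hz
print(fractional_bandwidth(10_000_000.0, 10_003_000.0))   # ~0.0003 (narrow fractionally)
```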
Baseband bandwidth applies to a low-pass filter or baseband signal; the bandwidth is equal to its upper cutoff frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy, and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller. Overview Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practica" https://en.wikipedia.org/wiki/Asano%20contraction,"In complex analysis, a discipline in mathematics, and in statistical physics, the Asano contraction or Asano–Ruelle contraction is a transformation on a separately affine multivariate polynomial. It was first presented in 1970 by Taro Asano to prove the Lee–Yang theorem in the Heisenberg spin model case. This also yielded a simple proof of the Lee–Yang theorem in the Ising model. David Ruelle proved a general theorem relating the location of the roots of a contracted polynomial to that of the original. Asano contractions have also been used to study polynomials in graph theory. Definition Let Φ(z1, ..., zn) be a polynomial which, when viewed as a function of only one of these variables, is an affine function. Such functions are called separately affine. For example, a + b·z + c·w + d·z·w is the general form of a separately affine function in two variables. Any separately affine function can be written in terms of any two of its variables zi and zj as Φ = a + b·zi + c·zj + d·zi·zj. The Asano contraction, which merges zi and zj into a single variable z, sends Φ to a + d·z. Location of zeroes Asano contractions are often used in the context of theorems about the location of roots. Asano originally used them because they preserve the property of having no roots when all the variables have magnitude greater than 1. Ruelle provided a more general relationship which allowed the contractions to be used in more applications. He showed that if there are closed sets not containing 0 such that cannot vanish unless for some index , then can only vanish if for some index or where . Ruelle and others have used this theorem to relate the zeroes of the partition function to zeroes of the partition function of its subsystems. Use Asano contractions can be used in statistical physics to gain information about a system from its subsystems. For example, suppose we have a system with a finite set of particles with magnetic spin either 1 or -1.
For each site, we have a complex variable Then we can define a separately affine polynomial where , and is the energy of the state where only the sites in have " https://en.wikipedia.org/wiki/Food%20studies,"Food studies is the critical examination of food and its contexts within science, art, history, society, and other fields. It is distinctive from other food-related areas of study such as nutrition, agriculture, gastronomy, and culinary arts in that it tends to look beyond the consumption, production, and aesthetic appreciation of food and tries to illuminate food as it relates to a vast number of academic fields. It is thus a field that involves and attracts philosophers, historians, scientists, literary scholars, sociologists, art historians, anthropologists, and others. State of the field This is an interdisciplinary and emerging field, and as such there is a substantial crossover between academic and popular work. Practitioners reference best-selling authors, such as the journalist Michael Pollan, as well as scholars, such as the historian Warren Belasco and the anthropologist Sidney Mintz. While this makes the discipline somewhat volatile, it also makes it interesting and engaging. The journalist Paul Levy has noted, for example, that ""Food studies is a subject so much in its infancy that it would be foolish to try to define it or in any way circumscribe it, because the topic, discipline or method you rule out today might be tomorrow's big thing."" Research questions Qualitative research questions include: What impact does food have on the environment? What are the ethics of eating? How does food contribute to systems of oppression? How are foods symbolic markers of identity? At the same time practitioners may ask seemingly basic questions that are nonetheless fundamental to human existence. Who chooses what we eat and why? How are foods traditionally prepared—and where is the boundary between authentic culinary heritage and invented traditions? How is food integrated into classrooms? There are also questions of the spatialization of foodways and the relationship to place. This has led to the development of the concept of ""foodscape"" – introduced i" https://en.wikipedia.org/wiki/Pairwise%20error%20probability,"Pairwise error probability is the error probability that for a transmitted signal () its corresponding but distorted version () will be received. This type of probability is called ″pair-wise error probability″ because the probability exists with a pair of signal vectors in a signal constellation. It's mainly used in communication systems. Expansion of the definition In general, the received signal is a distorted version of the transmitted signal. Thus, we introduce the symbol error probability, which is the probability that the demodulator will make a wrong estimation of the transmitted symbol based on the received symbol, which is defined as follows: where is the size of signal constellation. The pairwise error probability is defined as the probability that, when is transmitted, is received. can be expressed as the probability that at least one is closer than to . Using the upper bound to the probability of a union of events, it can be written: Finally: Closed form computation For the simple case of the additive white Gaussian noise (AWGN) channel: The PEP can be computed in closed form as follows: is a Gaussian random variable with mean 0 and variance . 
For a zero mean, variance Gaussian random variable: Hence, See also Signal processing Telecommunication Electrical engineering Random variable" https://en.wikipedia.org/wiki/List%20of%20Feynman%20diagrams,"This is a list of common Feynman diagrams. Particle physics Physics-related lists" https://en.wikipedia.org/wiki/List%20of%20PowerPC-based%20game%20consoles,"There are several ways in which game consoles can be categorized. One is by its console generation, and another is by its computer architecture. Game consoles have long used specialized and customized computer hardware with the base in some standardized processor instruction set architecture. In this case, it is PowerPC and Power ISA, processor architectures initially developed in the early 1990s by the AIM alliance, i.e. Apple, IBM, and Motorola. Even though these consoles share much in regard to instruction set architecture, game consoles are still highly specialized computers so it is not common for games to be readily portable or compatible between devices. Only Nintendo has kept a level of portability between their consoles, and even there it is not universal. The first devices used standard processors, but later consoles used bespoke processors with special features, primarily developed by or in cooperation with IBM for the explicit purpose of being in a game console. In this regard, these computers can be considered ""embedded"". All three major consoles of the seventh generation were PowerPC based. As of 2019, no PowerPC-based game consoles are currently in production. The most recent release, Nintendo's Wii U, has since been discontinued and succeeded by the Nintendo Switch (which uses a Nvidia Tegra ARM processor). The PlayStation 3, the last PowerPC-based game console to remain in production, was discontinued in 2017. List See also PowerPC applications List of PowerPC processors" https://en.wikipedia.org/wiki/Perennation,"In botany, perennation is the ability of organisms, particularly plants, to survive from one germinating season to another, especially under unfavourable conditions such as drought or winter cold. It typically involves development of a perennating organ, which stores enough nutrients to sustain the organism during the unfavourable season, and develops into one or more new plants the following year. Common forms of perennating organs are storage organs (e.g. tubers, rhizomes and corm), and buds. Perennation is closely related with vegetative reproduction, as the organisms commonly use the same organs for both survival and reproduction. See also Overwintering Plant pathology Sclerotium Turion (botany)" https://en.wikipedia.org/wiki/Process%20map,"Process map is a global-system process model that is used to outline the processes that make up the business system and how they interact with each other. Process map shows the processes as objects, which means it is a static and non-algorithmic view of the processes. It should be differentiated from a detailed process model, which shows a dynamic and algorithmic view of the processes, usually known as a process flow diagram. There are different notation standards that can be used for modelling process maps, but the most notable ones are TOGAF Event Diagram, Eriksson-Penker notation, and ARIS Value Added Chain. Global process models Global characteristics of the business system are captured by global or system models. Global process models are presented using different methodologies and sometimes under different names. 
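As a worked example of a pairwise error probability (restricted here, by assumption, to the simplest case of two antipodal BPSK signal points over an AWGN channel, which is not a case worked in the passage itself), a Monte Carlo estimate can be compared against the Gaussian tail expression Q(d/√(2·N0)) for signal-point distance d.

```python
import math
import random

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def simulate_bpsk_pep(eb, n0, trials=200_000, seed=1):
    """Monte Carlo estimate of mistaking +sqrt(Eb) for -sqrt(Eb) in AWGN."""
    rng = random.Random(seed)
    sigma = math.sqrt(n0 / 2.0)          # noise standard deviation per real dimension
    amplitude = math.sqrt(eb)
    errors = 0
    for _ in range(trials):
        received = amplitude + rng.gauss(0.0, sigma)
        if received < 0.0:               # decided in favour of the other signal point
            errors += 1
    return errors / trials

eb, n0 = 1.0, 0.5                        # Eb/N0 of about 3 dB, an arbitrary test point
d = 2.0 * math.sqrt(eb)                  # distance between the two signal points
print("simulated:", simulate_bpsk_pep(eb, n0))
print("closed form Q(d / sqrt(2*N0)):", q_function(d / math.sqrt(2.0 * n0)))
```

The two printed values agree to within Monte Carlo noise, illustrating how a closed-form pairwise error probability can be sanity-checked by simulation.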
Most notably, they are named process map in Visual Paradigm and MMABP, value-added chain in ARIS, and process diagram in Eriksson-Penker notation – which can easily lead to the confusion with process flow (detailed process model). Global models are mainly object-oriented and present a static view of the business system, they do not describe dynamic aspects of processes. A process map shows the presence of processes and their mutual relationships. The requirement for the global perspective of the system as a supplementary to the internal process logic description results from the necessity of taking into consideration not only the internal process logic but also its significant surroundings. The algorithmic process model cannot take the place of this perspective since it represents the system model of the process. The detailed process model and the global process model represent different perspectives on the same business system, so these models must be mutually consistent. A macro process map represents the major processes required to deliver a product or service to the customer. These macro process maps can be further detailed in sub-diagrams. It" https://en.wikipedia.org/wiki/Animal,"Animals are multicellular, eukaryotic organisms in the biological kingdom Animalia. With few exceptions, animals consume organic material, breathe oxygen, have myocytes and are able to move, can reproduce sexually, and grow from a hollow sphere of cells, the blastula, during embryonic development. As of 2022, 2.16 million living animal species have been described—of which around 1.05 million are insects, over 85,000 are molluscs, and around 65,000 are vertebrates. It has been estimated there are around 7.77 million animal species. Animals range in length from to . They have complex interactions with each other and their environments, forming intricate food webs. The scientific study of animals is known as zoology. Most living animal species are in Bilateria, a clade whose members have a bilaterally symmetric body plan. The Bilateria include the protostomes, containing animals such as nematodes, arthropods, flatworms, annelids and molluscs, and the deuterostomes, containing the echinoderms and the chordates, the latter including the vertebrates. Life forms interpreted as early animals were present in the Ediacaran biota of the late Precambrian. Many modern animal phyla became clearly established in the fossil record as marine species during the Cambrian explosion, which began around 539 million years ago. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago. Historically, Aristotle divided animals into those with blood and those without. Carl Linnaeus created the first hierarchical biological classification for animals in 1758 with his Systema Naturae, which Jean-Baptiste Lamarck expanded into 14 phyla by 1809. In 1874, Ernst Haeckel divided the animal kingdom into the multicellular Metazoa (now synonymous with Animalia) and the Protozoa, single-celled organisms no longer considered animals. In modern times, the biological classification of animals relies on ad" https://en.wikipedia.org/wiki/Field%20metabolic%20rate,"Field metabolic rate (FMR) refers to a measurement of the metabolic rate of a free-living animal. Method Measurement of the Field metabolic rate is made using the doubly labeled water method, although alternative techniques, such as monitoring heart rates, can also be used. 
The advantages and disadvantages of the alternative approaches have been reviewed by Butler, et al. Several summary reviews have been published." https://en.wikipedia.org/wiki/General-Purpose%20Serial%20Interface,"General-Purpose Serial Interface, also known as GPSI, 7-wire interface, or 7WS, is a 7 wire communications interface. It is used as an interface between Ethernet MAC and PHY blocks. Data is received and transmitted using separate data paths (TXD, RXD) and separate data clocks (TXCLK, RXCLK). Other signals consist of transmit enable (TXEN), receive carrier sense (CRS), and collision (COL). See also Media-independent interface (MII)" https://en.wikipedia.org/wiki/Left%20and%20right%20%28algebra%29,"In algebra, the terms left and right denote the order of a binary operation (usually, but not always, called ""multiplication"") in non-commutative algebraic structures. A binary operation ∗ is usually written in the infix form: The argument  is placed on the left side, and the argument  is on the right side. Even if the symbol of the operation is omitted, the order of and does matter (unless ∗ is commutative). A two-sided property is fulfilled on both sides. A one-sided property is related to one (unspecified) of two sides. Although the terms are similar, left–right distinction in algebraic parlance is not related either to left and right limits in calculus, or to left and right in geometry. Binary operation as an operator A binary operation  may be considered as a family of unary operators through currying: , depending on  as a parameter – this is the family of right operations. Similarly, defines the family of left operations parametrized with . If for some , the left operation  is the identity operation, then is called a left identity. Similarly, if , then is a right identity. In ring theory, a subring which is invariant under any left multiplication in a ring is called a left ideal. Similarly, a right multiplication-invariant subring is a right ideal. Left and right modules Over non-commutative rings, the left–right distinction is applied to modules, namely to specify the side where a scalar (module element) appears in the scalar multiplication. The distinction is not purely syntactical because one gets two different associativity rules (the lowest row in the table) which link multiplication in a module with multiplication in a ring. A bimodule is simultaneously a left and right module, with two different scalar multiplication operations, obeying an associativity condition on them. Other examples Left eigenvectors Left and right group actions In category theory In category theory the usage of ""left"" and ""right"" has some algebraic resemblanc" https://en.wikipedia.org/wiki/Video%20line%20selector,"A video line selector is an electronic circuit or device for picking a line from an analog video signal. The input of the circuit is connected to an analog video source, the output triggers an oscilloscope, so display the selected line on the oscilloscope or similar device. Properties Video line selectors are circuits or units of other devices, fitted to the demand of the unit or a separate device for use in workshops, production and laboratories. They contain analog and digital circuits and an internal or external DC power supply. There's a video signal input, sometimes an output to prevent reflexions of the video signal and the cause of shadows of the video picture, also a trigger output. 
There is also an input or adjustment for the line number(s) to be picked out, and optionally an automatic or manual setting to accommodate other video standards and non-interlaced video. Video line selectors do not need the whole picture signal; only the synchronisation signals are needed. Sometimes only inputs for H- and V-sync are provided. Setup and References The video signal input is 75 Ω terminated or connected to the video output for a monitor. The amplified video signal is connected to the inputs of the H- and V-sync detector circuits. The H-sync detector outputs the horizontal synchronisation pulse filtered from the video signal. This is the line synchronisation and makes the lines fit vertically. The V-sync detector filters the vertical synchronisation and makes the picture fit the same position on the screen as the previous one. Both synchronisation output pulses are fed to a digital synchronous counter. The V-sync resets the counter; the H-sync pulses are counted. On every frame, the counter is reset and the lines are counted anew. Most often interlaced video is used, splitting a picture into the odd-numbered lines followed by the even-numbered lines, one half-picture (field) each (→ deinterlacing). Interlaced video requires a V-sync detector which detects first a second" https://en.wikipedia.org/wiki/Internetowy%20System%20Akt%C3%B3w%20Prawnych,"The Internetowy System Aktów Prawnych ( in Polish), ISAP for short, is a database with information about the legislation in force in Poland, which is part of the oldest and one of the most famous Polish legal information systems, and is publicly available on the website of the Sejm of the Republic of Poland." https://en.wikipedia.org/wiki/Two-domain%20system,"The two-domain system is a biological classification by which all organisms in the tree of life are classified into two big domains, Bacteria and Archaea. It emerged from development of knowledge of archaea diversity and challenges to the widely accepted three-domain system that defines life into Bacteria, Archaea, and Eukarya. It was preceded by the eocyte hypothesis of James A. Lake in the 1980s, which was largely superseded by the three-domain system, due to evidence at the time. Better understanding of archaea, especially of their roles in the origin of eukaryotes through symbiogenesis with bacteria, led to the revival of the eocyte hypothesis in the 2000s. The two-domain system became more widely accepted after the discovery of a large group (superphylum) of archaea called Asgard in 2017, which evidence suggests is the evolutionary root of eukaryotes, implying that eukaryotes are members of the domain Archaea. While the features of Asgard archaea do not directly rule out the three-domain system, the notion that eukaryotes originated from archaea and thus belong to Archaea has been strengthened by genetic and proteomic studies. Under the three-domain system, Eukarya is mainly distinguished by the presence of ""eukaryotic signature proteins"", which are not found in archaea and bacteria. However, Asgards contain genes that code for multiple such proteins, indicating that ""eukaryotic signature proteins"" originated in archaea. Background Classification of life into two main divisions is not a new concept, with the first such proposal by French biologist Édouard Chatton in 1938. Chatton distinguished organisms into: Procaryotes (including bacteria) Eucaryotes (including protozoans) These were later named empires, and Chatton's classification came to be known as the two-empire system. 
Chatton used the name Eucaryotes only for protozoans, excluded other eukaryotes, and published in limited circulation so that his work was not recognised. His classification was rediscovered by" https://en.wikipedia.org/wiki/Archaea,"Archaea ( ; : archaeon ) is a domain of single-celled organisms. These microorganisms lack cell nuclei and are therefore prokaryotes. Archaea were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), but this term has fallen out of use. Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Classification is difficult because most have not been isolated in a laboratory and have been detected only by their gene sequences in environmental samples. It is unknown if these are able to produce endospores. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat, square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. Archaea use more diverse energy sources than eukaryotes, ranging from organic compounds such as sugars, to ammonia, metal ions or even hydrogen gas. The salt-tolerant Haloarchaea use sunlight as an energy source, and other species of archaea fix carbon (autotrophy), but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of Archaea form endospores. The first observed archaea were extremophiles, living in extreme environments such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and " https://en.wikipedia.org/wiki/Embedded%20system,"An embedded system is a computer system—a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts. Because an embedded system typically controls physical operations of the machine that it is embedded within, it often has real-time computing constraints. Embedded systems control many devices in common use. , it was estimated that ninety-eight percent of all microprocessors manufactured were used in embedded systems. Modern embedded systems are often based on microcontrollers (i.e. microprocessors with integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also common, especially in more complex systems. In either case, the processor(s) used may be types ranging from general purpose to those specialized in a certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP). 
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale. Embedded systems range in size from portable personal devices such as digital watches and MP3 players to bigger machines like home appliances, industrial assembly lines, robots, transport vehicles, traffic light controllers, and medical imaging systems. Often they constitute subsystems of other machines like avionics in aircraft and astrionics in spacecraft. Large installations like factories, pipelines and electrical grids rely on multiple embedded systems networked together. Generalized through software customization, embed" https://en.wikipedia.org/wiki/Directory%20System%20Agent,"A Directory System Agent (DSA) is the element of an X.500 directory service that provides User Agents with access to a portion of the directory (usually the portion associated with a single Organizational Unit). X.500 is an international standard developed by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU-T). The model and function of a directory system agent are specified in ITU-T Recommendation X.501. Active Directory In Microsoft's Active Directory the DSA is a collection of servers and daemon processes that run on Windows Server systems that provide various means for clients to access the Active Directory data store. Clients connect to an Active Directory DSA using various communications protocols: LDAP version 3.0—used by Windows 2000 and Windows XP clients LDAP version 2.0 Security Account Manager (SAM) interface—used by Windows NT clients MAPI RPC interface—used by Microsoft Exchange Server and other MAPI clients A proprietary RPC interface—used by Active Directory DSAs to communicate with one another and replicate data amongst themselves" https://en.wikipedia.org/wiki/List%20of%20exponential%20topics,"This is a list of exponential topics, by Wikipedia page. See also list of logarithm topics. 
Accelerating change Approximating natural exponents (log base e) Artin–Hasse exponential Bacterial growth Baker–Campbell–Hausdorff formula Cell growth Barometric formula Beer–Lambert law Characterizations of the exponential function Catenary Compound interest De Moivre's formula Derivative of the exponential map Doléans-Dade exponential Doubling time e-folding Elimination half-life Error exponent Euler's formula Euler's identity e (mathematical constant) Exponent Exponent bias Exponential (disambiguation) Exponential backoff Exponential decay Exponential dichotomy Exponential discounting Exponential diophantine equation Exponential dispersion model Exponential distribution Exponential error Exponential factorial Exponential family Exponential field Exponential formula Exponential function Exponential generating function Exponential-Golomb coding Exponential growth Exponential hierarchy Exponential integral Exponential integrator Exponential map (Lie theory) Exponential map (Riemannian geometry) Exponential map (discrete dynamical systems) Exponential notation Exponential object (category theory) Exponential polynomials—see also Touchard polynomials (combinatorics) Exponential response formula Exponential sheaf sequence Exponential smoothing Exponential stability Exponential sum Exponential time Sub-exponential time Exponential tree Exponential type Exponentially equivalent measures Exponentiating by squaring Exponentiation Fermat's Last Theorem Forgetting curve Gaussian function Gudermannian function Half-exponential function Half-life Hyperbolic function Inflation, inflation rate Interest Lambert W function Lifetime (physics) Limiting factor Lindemann–Weierstrass theorem " https://en.wikipedia.org/wiki/Time-lapse%20microscopy,"Time-lapse microscopy is time-lapse photography applied to microscopy. Microscope image sequences are recorded and then viewed at a greater speed to give an accelerated view of the microscopic process. Before the introduction of the video tape recorder in the 1960s, time-lapse microscopy recordings were made on photographic film. During this period, time-lapse microscopy was referred to as microcinematography. With the increasing use of video recorders, the term time-lapse video microscopy was gradually adopted. Today, the term video is increasingly dropped, reflecting that a digital still camera is used to record the individual image frames, instead of a video recorder. Applications Time-lapse microscopy can be used to observe any microscopic object over time. However, its main use is within cell biology to observe artificially cultured cells. Depending on the cell culture, different microscopy techniques can be applied to enhance characteristics of the cells as most cells are transparent. To enhance observations further, cells have therefore traditionally been stained before observation. Unfortunately, the staining process kills the cells. The development of less destructive staining methods and methods to observe unstained cells has led to that cell biologists increasingly observe living cells. This is known as live-cell imaging. A few tools have been developed to identify and analyze single cells during live-cell imaging. Time-lapse microscopy is the method that extends live-cell imaging from a single observation in time to the observation of cellular dynamics over long periods of time. 
Time-lapse microscopy is primarily used in research, but is also used clinically in IVF clinics, as studies have shown it to increase pregnancy rates, lower abortion rates and predict aneuploidy. Modern approaches are further extending time-lapse microscopy observations beyond making movies of cellular dynamics. Traditionally, cells have been observed in a microscope and measured " https://en.wikipedia.org/wiki/Direct%20numerical%20control,"Direct numerical control (DNC), also known as distributed numerical control (also DNC), is a common manufacturing term for networking CNC machine tools. On some CNC machine controllers, the available memory is too small to contain the machining program (for example machining complex surfaces), so in this case the program is stored in a separate computer and sent directly to the machine, one block at a time. If the computer is connected to a number of machines it can distribute programs to different machines as required. Usually, the manufacturer of the control provides suitable DNC software. However, if this provision is not possible, some software companies provide DNC applications that fulfill the purpose. DNC networking or DNC communication is always required when CAM programs are to run on some CNC machine control. Wireless DNC is also used in place of hard-wired versions. Controls of this type are very widely used in industries with significant sheet metal fabrication, such as the automotive, appliance, and aerospace industries. History 1950s-1970s Programs had to be walked to NC controls, generally on paper tape. NC controls had paper tape readers precisely for this purpose. Many companies were still punching programs on paper tape well into the 1980s, more than twenty-five years after its elimination in the computer industry. 1980s The focus in the 1980s was mainly on reliably transferring NC programs between a host computer and the control. The host computers would frequently be Sun Microsystems, HP, Prime, DEC or IBM type computers running a variety of CAD/CAM software. DNC companies offered machine tool links using rugged proprietary terminals and networks. For example, DLog offered an x86 based terminal, and NCPC had one based on the 6809. The host software would be responsible for tracking and authorising NC program modifications. Depending on program size, for the first time operators had the opportunity to modify programs at the DNC terminal. No" https://en.wikipedia.org/wiki/List%20of%20heaviest%20people,"This is a list of the heaviest people who have been weighed and verified, living and dead. The list is organised by the peak weight reached by an individual and is limited to those who are over . Heaviest people ever recorded See also Big Pun (1971–2000), American rapper whose weight at death was . Edward Bright (1721–1750) and Daniel Lambert (1770–1809), men from England who were famous in their time for their obesity. Happy Humphrey, the heaviest professional wrestler, weighing in at at his peak. Israel Kamakawiwoʻole (1959–1997), Hawaiian singer whose weight peaked at . Paul Kimelman (born 1947), holder of the Guinness World Record for the greatest weight-loss in the shortest amount of time, 1982 Billy and Benny McCrary, holders of Guinness World Records' World's Heaviest Twins. Alayna Morgan (1948–2009), heavy woman from Santa Rosa, California. Ricky Naputi (1973–2012), heaviest man from Guam. Carl Thompson (1982–2015), heaviest man in the United Kingdom whose weight at death was . Renee Williams (1977–2007), woman from Austin, Texas. 
Yokozuna, the heaviest WWE wrestler, weighing between and at his peak. Barry Austin and Jack Taylor, two obese British men documented in the comedy-drama The Fattest Man in Britain. Yamamotoyama Ryūta, heaviest Japanese-born sumo wrestler; is also thought to be the heaviest Japanese person ever at ." https://en.wikipedia.org/wiki/Matching%20pursuit,"Matching pursuit (MP) is a sparse approximation algorithm which finds the ""best matching"" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary . The basic idea is to approximately represent a signal from Hilbert space as a weighted sum of finitely many functions (called atoms) taken from . An approximation with atoms has the form where is the th column of the matrix and is the scalar weighting factor (amplitude) for the atom . Normally, not every atom in will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small, where the residual after calculating and is denoted by . If converges quickly to zero, then only a few atoms are needed to get a good approximation to . Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is where is the pseudo-norm (i.e. the number of nonzero elements of ). In the previous notation, the nonzero entries of are . Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used. For comparison, consider the Fourier transform representation of a signal - this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signals . By taking an extremely redundant dictionary, we can look" https://en.wikipedia.org/wiki/Register%20file,"A register file is an array of processor registers in a central processing unit (CPU). Register banking is the method of using a single name to access multiple different physical registers depending on the operating mode. Modern integrated circuit-based register files are usually implemented by way of fast static RAMs with multiple ports. Such RAMs are distinguished by having dedicated read and write ports, whereas ordinary multiported SRAMs will usually read and write through the same ports. The instruction set architecture of a CPU will almost always define a set of registers which are used to stage data between memory and the functional units on the chip. In simpler CPUs, these architectural registers correspond one-for-one to the entries in a physical register file (PRF) within the CPU. More complicated CPUs use register renaming, so that the mapping of which physical entry stores a particular architectural register changes dynamically during execution. The register file is part of the architecture and visible to the programmer, as opposed to the concept of transparent caches. 
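The register-file passage above notes that CPUs with register renaming change, at run time, which physical register-file entry backs a given architectural register. The following minimal Python sketch illustrates that idea only; the class, register names and sizes are invented, and it is deliberately simplified (for instance, a real design frees the old physical entry at instruction retirement, not immediately).

```python
# Minimal sketch of register renaming: architectural registers are mapped onto
# entries of a larger physical register file (PRF). Names and sizes are
# illustrative only, not taken from any real microarchitecture.

class RenameTable:
    def __init__(self, num_arch=8, num_phys=16):
        # Initially, architectural register i lives in physical register i.
        self.map = {f"r{i}": i for i in range(num_arch)}
        self.free = list(range(num_arch, num_phys))   # unused PRF entries
        self.prf = [0] * num_phys                     # physical register file

    def read(self, arch_reg):
        # A read goes to whatever physical entry currently holds the value.
        return self.prf[self.map[arch_reg]]

    def write(self, arch_reg, value):
        # A write allocates a fresh physical entry and re-points the mapping,
        # so earlier in-flight readers of the old entry are unaffected.
        phys = self.free.pop(0)
        self.free.append(self.map[arch_reg])          # recycle old entry (simplified)
        self.map[arch_reg] = phys
        self.prf[phys] = value

rt = RenameTable()
rt.write("r1", 42)
rt.write("r1", 99)          # r1 now points at a different physical entry
print(rt.read("r1"))        # -> 99
```

Because every write re-points the mapping, readers that captured the previous physical index still see the older value, which is what lets out-of-order execution proceed safely.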
Register-bank switching Register files may be clubbed together as register banks. A processor may have more than one register bank. ARM processors have both banked and unbanked registers. While all modes always share the same physical registers for the first eight general-purpose registers, R0 to R7, the physical register which the banked registers, R8 to R14, point to depends on the operating mode the processor is in. Notably, Fast Interrupt Request (FIQ) mode has its own bank of registers for R8 to R12, with the architecture also providing a private stack pointer (R13) for every interrupt mode. x86 processors use context switching and fast interrupt for switching between instruction, decoder, GPRs and register files, if there is more than one, before the instruction is issued, but this is only existing on processors that support superscalar. However, context switching is a totall" https://en.wikipedia.org/wiki/Programmable%20Array%20Logic,"Programmable Array Logic (PAL) is a family of programmable logic device semiconductors used to implement logic functions in digital circuits introduced by Monolithic Memories, Inc. (MMI) in March 1978. MMI obtained a registered trademark on the term PAL for use in ""Programmable Semiconductor Logic Circuits"". The trademark is currently held by Lattice Semiconductor. PAL devices consisted of a small PROM (programmable read-only memory) core and additional output logic used to implement particular desired logic functions with few components. Using specialized machines, PAL devices were ""field-programmable"". PALs were available in several variants: ""One-time programmable"" (OTP) devices could not be updated and reused after initial programming (MMI also offered a similar family called HAL, or ""hard array logic"", which were like PAL devices except that they were mask-programmed at the factory.). UV erasable versions (e.g.: PALCxxxxx e.g.: PALC22V10) had a quartz window over the chip die and could be erased for re-use with an ultraviolet light source just like an EPROM. Later versions (PALCExxx e.g.: PALCE22V10) were flash erasable devices. In most applications, electrically-erasable GALs are now deployed as pin-compatible direct replacements for one-time programmable PALs. History Before PALs were introduced, designers of digital logic circuits would use small-scale integration (SSI) components, such as those in the 7400 series TTL (transistor-transistor logic) family; the 7400 family included a variety of logic building blocks, such as gates (NOT, NAND, NOR, AND, OR), multiplexers (MUXes) and demultiplexers (DEMUXes), flip flops (D-type, JK, etc.) and others. One PAL device would typically replace dozens of such ""discrete"" logic packages, so the SSI business declined as the PAL business took off. PALs were used advantageously in many products, such as minicomputers, as documented in Tracy Kidder's best-selling book The Soul of a New Machine. PALs were not the " https://en.wikipedia.org/wiki/Signal%20transfer%20function,"The signal transfer function (SiTF) is a measure of the signal output versus the signal input of a system such as an infrared system or sensor. There are many general applications of the SiTF. Specifically, in the field of image analysis, it gives a measure of the noise of an imaging system, and thus yields one assessment of its performance. 
SiTF evaluation In evaluating the SiTF curve, the signal input and signal output are measured differentially; meaning, the differential of the input signal and differential of the output signal are calculated and plotted against each other. An operator, using computer software, defines an arbitrary area, with a given set of data points, within the signal and background regions of the output image of the infrared sensor, i.e. of the unit under test (UUT), (see ""Half Moon"" image below). The average signal and background are calculated by averaging the data of each arbitrarily defined region. A second order polynomial curve is fitted to the data of each line. Then, the polynomial is subtracted from the average signal and background data to yield the new signal and background. The difference of the new signal and background data is taken to yield the net signal. Finally, the net signal is plotted versus the signal input. The signal input of the UUT is within its own spectral response. (e.g. color-correlated temperature, pixel intensity, etc.). The slope of the linear portion of this curve is then found using the method of least squares. SiTF curve The net signal is calculated from the average signal and background, as in signal to noise ratio (imaging)#Calculations. The SiTF curve is then given by the signal output data, (net signal data), plotted against the signal input data (see graph of SiTF to the right). All the data points in the linear region of the SiTF curve can be used in the method of least squares to find a linear approximation. Given data points a best fit line parameterized as is given by: See also Optica" https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20stability%20criterion,"In the control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system. A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the sequence of determinants of its principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants () than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial. The importance of the criterion is that the roots p of the characteristic equation of a linear system with negative real parts represent solutions ept of the system that are stable (bounded). Thus the criterion provides a way to determine if the equations of motion of a linear system have only stable solutions, without solving the system directly. For discrete systems, the corresponding stability test can be handled by the Schur–Cohn criterion, the Jury test and the Bistritz test. With the advent of computers, the criterion has become less widely used, as an alternative is to solve the polynomial numerically, obtaining approximations to the roots directly. 
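Since the criterion above turns stability of the characteristic polynomial into a sign condition on the first column of the Routh array, a small numerical sketch can make it concrete. The function below is an illustrative simplification only: it does not handle a zero first-column element or an all-zero row, the degenerate cases treated by the full Routh test.

```python
# Simplified Routh-array stability check for p(s) = a0*s^n + a1*s^(n-1) + ... + an.
# Degenerate cases (a zero in the first column, or an all-zero row) are not handled.

def routh_hurwitz_stable(coeffs):
    n = len(coeffs) - 1                      # polynomial degree
    rows = [coeffs[0::2], coeffs[1::2]]      # first two rows of the Routh array
    rows[1] = rows[1] + [0] * (len(rows[0]) - len(rows[1]))
    for _ in range(n - 1):
        prev, cur = rows[-2], rows[-1]
        new = [(cur[0] * prev[j + 1] - prev[0] * cur[j + 1]) / cur[0]
               for j in range(len(cur) - 1)]
        new.append(0.0)                      # pad so every row has equal length
        rows.append(new)
    first_column = [r[0] for r in rows[:n + 1]]
    # Stable iff there are no sign changes in the first column.
    return all(x > 0 for x in first_column) or all(x < 0 for x in first_column)

# s^3 + 2 s^2 + 3 s + 1: all roots in the open left half-plane -> stable
print(routh_hurwitz_stable([1, 2, 3, 1]))    # True
# s^3 + s^2 + s + 5: two sign changes, so two right-half-plane roots -> unstable
print(routh_hurwitz_stable([1, 1, 1, 5]))    # False
```

For a cubic a0 s³ + a1 s² + a2 s + a3 (with a0 > 0), this reduces to the familiar condition that all coefficients are positive and a1 a2 > a0 a3.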
The Routh test can be derived through the use of the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. Hurwitz derived his conditions differently. Using Euclid's algorithm The criterion is rela" https://en.wikipedia.org/wiki/Integrated%20Geo%20Systems,"Integrated Geo Systems (IGS) is a computational architecture system developed for managing geoscientific data through systems and data integration. Geosciences often involve large volumes of diverse data which have to be processed by computer and graphics intensive applications. The processes involved in processing these large datasets are often so complex that no single applications software can perform all the required tasks. Specialized applications have emerged for specific tasks. To get the required results, it is necessary that all applications software involved in various stages of data processing, analysis and interpretation effectively communicate with each other by sharing data. IGS provides a framework for maintaining an electronic workflow between various geoscience software applications through data connectivity. The main components of IGS are: Geographic information systems as a front end. Format engine for data connectivity link between various geoscience software applications. The format engine uses Output Input Language (OIL), an interpreted language, to define various data formats. An array of geoscience relational databases for data integration. Data highways as internal data formats for each data type. Specialized geoscience applications software as processing modules. Geoscientific processing libraries External links Geological Society Books American Association of Petroleum Geologists Book Store Integrated Geo Systems Research Paper Computer systems" https://en.wikipedia.org/wiki/Principal%20%28computer%20security%29,"A principal in computer security is an entity that can be authenticated by a computer system or network. It is referred to as a security principal in Java and Microsoft literature. Principals can be individual people, computers, services, computational entities such as processes and threads, or any group of such things. They need to be identified and authenticated before they can be assigned rights and privileges over resources in the network. A principal typically has an associated identifier (such as a security identifier) that allows it to be referenced for identification or assignment of properties and permissions." https://en.wikipedia.org/wiki/Service-oriented%20software%20engineering,"Service-oriented Software Engineering (SOSE), also referred to as service engineering, is a software engineering methodology focused on the development of software systems by composition of reusable services (service-orientation) often provided by other service providers. Since it involves composition, it shares many characteristics of component-based software engineering, the composition of software systems from reusable components, but it adds the ability to dynamically locate necessary services at run-time. These services may be provided by others as web services, but the essential element is the dynamic nature of the connection between the service users and the service providers. Service-oriented interaction pattern There are three types of actors in a service-oriented interaction: service providers, service users and service registries. They participate in a dynamic collaboration which can vary from time to time. 
Service providers are software services that publish their capabilities and availability with service registries. Service users are software systems (which may be services themselves) that accomplish some task through the use of services provided by service providers. Service users use service registries to discover and locate the service providers they can use. This discovery and location occurs dynamically when the service user requests them from a service registry. See also Service-oriented architecture (SOA) Service-oriented analysis and design Separation of concerns Component-based software engineering Web services" https://en.wikipedia.org/wiki/Mathematical%20proof,"A mathematical proof is a deductive argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion. The argument may use other previously established statements, such as theorems; but every proof can, in principle, be constructed using only certain basic or original assumptions known as axioms, along with the accepted rules of inference. Proofs are examples of exhaustive deductive reasoning which establish logical certainty, to be distinguished from empirical arguments or non-exhaustive inductive reasoning which establish ""reasonable expectation"". Presenting many cases in which the statement holds is not enough for a proof, which must demonstrate that the statement is true in all possible cases. A proposition that has not been proved but is believed to be true is known as a conjecture, or a hypothesis if frequently used as an assumption for further mathematical work. Proofs employ logic expressed in mathematical symbols, along with natural language which usually admits some ambiguity. In most mathematical literature, proofs are written in terms of rigorous informal logic. Purely formal proofs, written fully in symbolic language without the involvement of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics, oral traditions in the mainstream mathematical community or in other cultures. The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language. History and etymology The word ""proof"" comes from the Latin probare (to test). Related modern words are English ""probe"", ""probation"", and ""probability"", Spanish probar (to smell or taste, or sometimes touch or test), Italian provare (to try), and German probieren (to try). The legal term ""probity"" means authority or credibility, th" https://en.wikipedia.org/wiki/Path%20computation%20element,"In computer networks, a path computation element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination. Description Routing can be subject to a set of constraints, such as quality of service (QoS), policy, or price. Constraint-based path computation is a strategic component of traffic engineering in MPLS, GMPLS and Segment Routing networks. It is used to determine the path through the network that traffic should follow, and provides the route for each label-switched path (LSP) that is set up. Path computation has previously been performed either in a management system or at the head end of each LSP. 
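As a toy illustration of the constraint-based path computation that the text above attributes to a PCE, the sketch below prunes links that cannot satisfy a requested bandwidth and then runs plain Dijkstra on what remains. The topology, the cost and bw attributes, and the function name are all invented for illustration; a real PCE works against a traffic-engineering database and speaks PCEP, neither of which is modelled here.

```python
import heapq

# Toy constraint-based path computation: drop links that cannot satisfy the
# requested bandwidth, then find the least-cost path on the pruned topology.
topology = {
    "A": {"B": {"cost": 1, "bw": 10}, "C": {"cost": 4, "bw": 2}},
    "B": {"C": {"cost": 1, "bw": 10}, "D": {"cost": 5, "bw": 1}},
    "C": {"D": {"cost": 1, "bw": 10}},
    "D": {},
}

def compute_path(graph, src, dst, min_bw):
    # Prune links below the bandwidth constraint.
    pruned = {u: {v: e for v, e in nbrs.items() if e["bw"] >= min_bw}
              for u, nbrs in graph.items()}
    # Plain Dijkstra on the remaining links.
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge in pruned[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge["cost"], nxt, path + [nxt]))
    return None   # no feasible path under the constraint

print(compute_path(topology, "A", "D", min_bw=5))   # -> (3, ['A', 'B', 'C', 'D'])
```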
But path computation in large, multi-domain networks may be very complex and may require more computational power and network information than is typically available at a network element, yet may still need to be more dynamic than can be provided by a management system. Thus, a PCE is an entity capable of computing paths for a single or set of services. A PCE might be a network node, network management station, or dedicated computational platform that is resource-aware and has the ability to consider multiple constraints for sophisticated path computation. PCE applications compute label-switched paths for MPLS and GMPLS traffic engineering. The various components of the PCE architecture are in the process of being standardized by the IETF's PCE Working Group. PCE represents a vision of networks that separates route computations from the signaling of end-to-end connections and from actual packet forwarding. There is a basic tutorial on PCE as presented at ISOCORE's MPLS2008 conference and a tutorial on advanced PCE as presented at ISOCORE's SDN/MPLS 2014 conference. Since the early days, the PCE architecture has evolved considerably to encompass more sophisticated concepts and allow application to more complicated network scenarios. This evolution inc" https://en.wikipedia.org/wiki/Reflectometry,"Reflectometry is a general term for the use of the reflection of waves or pulses at surfaces and interfaces to detect or characterize objects, sometimes to detect anomalies as in fault detection and medical diagnosis. There are many different forms of reflectometry. They can be classified in several ways: by the used radiation (electromagnetic, ultrasound, particle beams), by the geometry of wave propagation (unguided versus wave guides or cables), by the involved length scales (wavelength and penetration depth in relation to size of the investigated object), by the method of measurement (continuous versus pulsed, polarization resolved, ...), and by the application domain. Radiation sources Electromagnetic radiation of widely varying wavelength is used in many different forms of reflectometry: Radar: Reflections of radiofrequency pulses are used to detect the presence and to measure the location and speed of objects such as aircraft, missiles, ships, vehicles. Lidar: Reflections of light pulses are used typically to penetrate ground cover by vegetation in aerial archaeological surveys. Characterization of semiconductor and dielectric thin films: Analysis of reflectance data utilizing the Forouhi Bloomer dispersion equations can determine the thickness, refractive index, and extinction coefficient of thin films utilized in the semiconductor industry. X-ray reflectometry: is a surface-sensitive analytical technique used in chemistry, physics, and materials science to characterize surfaces, thin films and multilayers. Propagation of electric pulses and reflection at discontinuities in cables is used in time domain reflectometry (TDR) to detect and localize defects in electric wiring. Skin reflectance: In anthropology, reflectometry devices are often used to gauge human skin color through the measurement of skin reflectance. These devices are typically pointed at the upper arm or forehead, with the emitted waves then interpreted at various percentages. 
Lower fr" https://en.wikipedia.org/wiki/Safety-critical%20system,"A safety-critical system or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes: death or serious injury to people loss or severe damage to equipment/property environmental harm A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved. Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom. Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (109) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based. Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation. Reliability regimes Several reliability regimes for safety-critical systems exist: Fail-operational systems continue to operate when their" https://en.wikipedia.org/wiki/Graduate%20Texts%20in%20Mathematics,"Graduate Texts in Mathematics (GTM) () is a series of graduate-level textbooks in mathematics published by Springer-Verlag. The books in this series, like the other Springer-Verlag mathematics series, are yellow books of a standard size (with variable numbers of pages). The GTM series is easily identified by a white band at the top of the book. The books in this series tend to be written at a more advanced level than the similar Undergraduate Texts in Mathematics series, although there is a fair amount of overlap between the two series in terms of material covered and difficulty level. List of books Introduction to Axiomatic Set Theory, Gaisi Takeuti, Wilson M. Zaring (1982, 2nd ed., ) Measure and Category – A Survey of the Analogies between Topological and Measure Spaces, John C. Oxtoby (1980, 2nd ed., ) Topological Vector Spaces, H. H. Schaefer, M. P. Wolff (1999, 2nd ed., ) A Course in Homological Algebra, Peter Hilton, Urs Stammbach (1997, 2nd ed., ) Categories for the Working Mathematician, Saunders Mac Lane (1998, 2nd ed., ) Projective Planes, Daniel R. Hughes, Fred C. Piper, (1982, ) A Course in Arithmetic, Jean-Pierre Serre (1996, ) Axiomatic Set Theory, Gaisi Takeuti, Wilson M. Zaring, (1973, ) Introduction to Lie Algebras and Representation Theory, James E. Humphreys (1997, ) A Course in Simple-Homotopy Theory, Marshall. M. Cohen, (1973, ) Functions of One Complex Variable I, John B. 
Conway (1978, 2nd ed., ) Advanced Mathematical Analysis, Richard Beals (1973, ) Rings and Categories of Modules, Frank W. Anderson, Kent R. Fuller (1992, 2nd ed., ) Stable Mappings and Their Singularities, Martin Golubitsky, Victor Guillemin, (1974, ) Lectures in Functional Analysis and Operator Theory, Sterling K. Berberian, (1974, ) The Structure of Fields, David J. Winter, (1974, ) Random Processes, Murray Rosenblatt, (1974, ) Measure Theory, Paul R. Halmos (1974, ) A Hilbert Space Problem Book, Paul R. Halmos (1982, 2nd ed., ) Fibre Bundles, Dale Husemoller (1994, " https://en.wikipedia.org/wiki/Antibody%20testing,"Antibody testing may refer to: Serological testing, tests that detect specific antibodies in the blood Immunoassay, tests that use antibodies to detect substances Antibody titer, tests that measure the amount of a specific antibody in a sample Antibodies Biological techniques and tools" https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist%20noise,"Johnson–Nyquist noise (thermal noise, Johnson noise, or Nyquist noise) is the electronic noise generated by the thermal agitation of the charge carriers (usually the electrons) inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits, and in sensitive electronic equipment (such as radio receivers) can drown out weak signals, and can be the limiting factor on sensitivity of electrical measuring instruments. Thermal noise increases with temperature. Some sensitive electronic equipment such as radio telescope receivers are cooled to cryogenic temperatures to reduce thermal noise in their circuits. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium. Thermal noise in an ideal resistor is approximately white, meaning that the power spectral density is nearly constant throughout the frequency spectrum, but does decay to zero at extremely high frequencies (terahertz for room temperature). When limited to a finite bandwidth, thermal noise has a nearly Gaussian amplitude distribution. History This type of noise was discovered and first measured by John B. Johnson at Bell Labs in 1926. He described his findings to Harry Nyquist, also at Bell Labs, who was able to explain the results. Derivation As Nyquist stated in his 1928 paper, the sum of the energy in the normal modes of electrical oscillation would determine the amplitude of the noise. Nyquist used the equipartition law of Boltzmann and Maxwell. Using the concept potential energy and harmonic oscillators of the equipartition law, where is the noise power density in (W/Hz), is the Boltzmann constant and is the temperature. Multiplying the equation by bandwidth gives the result as noise power. where N is the noise power and Δf is the bandwidth. Noise voltage and power The" https://en.wikipedia.org/wiki/Clipping%20%28signal%20processing%29,"Clipping is a form of distortion that limits a signal once it exceeds a threshold. Clipping may occur when a signal is recorded by a sensor that has constraints on the range of data it can measure, it can occur when a signal is digitized, or it can occur any other time an analog or digital signal is transformed, particularly in the presence of gain or overshoot and undershoot. 
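As a small numerical illustration of the threshold limiting just described, the sketch below applies a hard limit and, for contrast, one common smooth limiter of the kind discussed next. NumPy is assumed, and the threshold and test signal are arbitrary choices, not values from the text.

```python
import numpy as np

# Hard clipping: the signal passes through unchanged until it exceeds a
# threshold, beyond which it is held at that threshold. The threshold (0.8)
# and the test signal are arbitrary choices for illustration.

def hard_clip(signal, threshold=0.8):
    return np.clip(signal, -threshold, threshold)

def soft_clip(signal):
    # One common smooth limiter (tanh); just one of many possibilities.
    return np.tanh(signal)

t = np.linspace(0, 1, 8)
x = 1.5 * np.sin(2 * np.pi * t)          # a sine that overshoots the threshold
print(np.round(hard_clip(x), 2))
print(np.round(soft_clip(x), 2))
```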
Clipping may be described as hard, in cases where the signal is strictly limited at the threshold, producing a flat cutoff; or it may be described as soft, in cases where the clipped signal continues to follow the original at a reduced gain. Hard clipping results in many high-frequency harmonics; soft clipping results in fewer higher-order harmonics and intermodulation distortion components. Audio In the frequency domain, clipping produces strong harmonics in the high-frequency range (as the clipped waveform comes closer to a squarewave). The extra high-frequency weighting of the signal could make tweeter damage more likely than if the signal was not clipped. Many electric guitar players intentionally overdrive their amplifiers (or insert a ""fuzz box"") to cause clipping in order to get a desired sound (see guitar distortion). In general, the distortion associated with clipping is unwanted, and is visible on an oscilloscope even if it is inaudible. Images In the image domain, clipping is seen as desaturated (washed-out) bright areas that turn to pure white if all color components clip. In digital colour photography, it is also possible for individual colour channels to clip, which results in inaccurate colour reproduction. Causes Analog circuitry A circuit designer may intentionally use a clipper or clamper to keep a signal within a desired range. When an amplifier is pushed to create a signal with more power than it can support, it will amplify the signal only up to its maximum capacity, at which point the signal will be amplified no further. An integrated circuit or discrete " https://en.wikipedia.org/wiki/Social%20Bonding%20and%20Nurture%20Kinship,"Social Bonding and Nurture Kinship: Compatibility between Cultural and Biological Approaches is a book on human kinship and social behavior by Maximilian Holland, published in 2012. The work synthesizes the perspectives of evolutionary biology, psychology and sociocultural anthropology towards understanding human social bonding and cooperative behavior. It presents a theoretical treatment that many consider to have resolved longstanding questions about the proper place of genetic (or 'blood') connections in human kinship and social relations, and a synthesis that ""should inspire more nuanced ventures in applying Darwinian approaches to sociocultural anthropology"". The book has been called ""A landmark in the field of evolutionary biology"" which ""gets to the heart of the matter concerning the contentious relationship between kinship categories, genetic relatedness and the prediction of behavior"", ""places genetic determinism in the correct perspective"" and serves as ""a shining example of what can be achieved when excellent scholars engage fully across disciplinary boundaries."" The aim of the book is to show that ""properly interpreted, cultural anthropology approaches (and ethnographic data) and biological approaches are perfectly compatible regarding processes of social bonding in humans."" Holland's position is based on demonstrating that the dominant biological theory of social behavior (inclusive fitness theory) is typically misunderstood to predict that genetic ties are necessary for the expression of social behaviors, whereas in fact the theory only implicates genetic associations as necessary for the evolution of social behaviors. 
Whilst rigorous evolutionary biologists have long understood the distinction between these levels of analysis (see Tinbergen's four questions), past attempts to apply inclusive fitness theory to humans have often overlooked the distinction between evolution and expression. Beyond its central argument, the broader philosophical implic" https://en.wikipedia.org/wiki/Universality%E2%80%93diversity%20paradigm,"The universality–diversity paradigm is the analysis of biological materials based on the universality and diversity of its fundamental structural elements and functional mechanisms. The analysis of biological systems based on this classification has been a cornerstone of modern biology. For example, proteins constitute the elementary building blocks of a vast variety of biological materials such as cells, spider silk or bone, where they create extremely robust, multi-functional materials by self-organization of structures over many length- and time scales, from nano to macro. Some of the structural features are commonly found in many different tissues, that is, they are conservation|highly conserved. Examples of such universal building blocks include alpha-helices, beta-sheets or tropocollagen molecules. In contrast, other features are highly specific to tissue types, such as particular filament assemblies, beta-sheet nanocrystals in spider silk or tendon fascicles. This coexistence of universality and diversity—referred to as the universality–diversity paradigm (UDP)—is an overarching feature in biological materials and a crucial component of materiomics. It might provide guidelines for bioinspired and biomimetic material development, where this concept is translated into the use of inorganic or hybrid organic-inorganic building blocks. See also Materiomics Phylogenetics" https://en.wikipedia.org/wiki/List%20of%20Lie%20groups%20topics,"This is a list of Lie group topics, by Wikipedia page. Examples See Table of Lie groups for a list General linear group, special linear group SL2(R) SL2(C) Unitary group, special unitary group SU(2) SU(3) Orthogonal group, special orthogonal group Rotation group SO(3) SO(8) Generalized orthogonal group, generalized special orthogonal group The special unitary group SU(1,1) is the unit sphere in the ring of coquaternions. It is the group of hyperbolic motions of the Poincaré disk model of the Hyperbolic plane. 
Lorentz group Spinor group Symplectic group Exceptional groups G2 F4 E6 E7 E8 Affine group Euclidean group Poincaré group Heisenberg group Lie algebras Commutator Jacobi identity Universal enveloping algebra Baker-Campbell-Hausdorff formula Casimir invariant Killing form Kac–Moody algebra Affine Lie algebra Loop algebra Graded Lie algebra Foundational results One-parameter group, One-parameter subgroup Matrix exponential Infinitesimal transformation Lie's third theorem Maurer–Cartan form Cartan's theorem Cartan's criterion Local Lie group Formal group law Hilbert's fifth problem Hilbert-Smith conjecture Lie group decompositions Real form (Lie theory) Complex Lie group Complexification (Lie group) Semisimple theory Simple Lie group Compact Lie group, Compact real form Semisimple Lie algebra Root system Simply laced group ADE classification Maximal torus Weyl group Dynkin diagram Weyl character formula Representation theory Representation of a Lie group Representation of a Lie algebra Adjoint representation of a Lie group Adjoint representation of a Lie algebra Unitary representation Weight (representation theory) Peter–Weyl theorem Borel–Weil theorem Kirillov character formula Representation theory of SU(2) Representation theory of SL2(R) Applications Physical theories Pauli matrices Gell-Mann matrices Poisson bracket Noether's theorem Wigner's classification Gauge theory Grand unification theory Supergroup Lie superalgebra Twistor theory Anyon Witt " https://en.wikipedia.org/wiki/Reproductive%20biology,"Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide number of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. 
Animal Reproductive Biology Animal reproduction oc" https://en.wikipedia.org/wiki/Packet%20switching,"In telecommunications, packet switching is a method of grouping data into packets that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide. During the early 1960s, Polish-American engineer Paul Baran developed a concept he called ""distributed adaptive message block switching"", with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the United States Department of Defense. His ideas contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965. Davies coined the modern term packet switching and inspired numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States and the CYCLADES network in France. The ARPANET and CYCLADES were the primary precursor networks of the modern Internet. Concept A simple definition of packet switching is: Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput de" https://en.wikipedia.org/wiki/List%20of%20irreducible%20Tits%20indices,"In the mathematical theory of linear algebraic groups, a Tits index (or index) is an object used to classify semisimple algebraic groups defined over a base field k, not assumed to be algebraically closed. The possible irreducible indices were classified by Jacques Tits, and this classification is reproduced below. (Because every index is a direct sum of irreducible indices, classifying all indices amounts to classifying irreducible indices.) Organization of the list An index can be represented as a Dynkin diagram with certain vertices drawn close to each other (the orbit of the vertices under the *-action of the Galois group of k) and with certain sets of vertices circled (the orbits of the non-distinguished vertices under the *-action). This representation captures the full information of the index except when the underlying Dynkin diagram is D4, in which case one must distinguish between an action by the cyclic group C3 or the permutation group S3. Alternatively, an index can be represented using the name of the underlying Dykin diagram together with additional superscripts and subscripts, to be explained momentarily. This representation, together with the labeled Dynkin diagram described in the previous paragraph, captures the full information of the index. 
The notation for an index is of the form gX, where X is the letter of the underlying Dynkin diagram (A, B, C, D, E, F, or G), n is the number of vertices of the Dynkin diagram, r is the relative rank of the corresponding algebraic group, g is the order of the quotient of the absolute Galois group that acts faithfully on the Dynkin diagram (so g = 1, 2, 3, or 6), and t is either the degree of a certain division algebra (that is, the square root of its dimension) arising in the construction of the algebraic group when the group is of classical type (A, B, C, or D), in which case t is written in parentheses, or the dimension of the anisotropic kernel of the algebraic group when the group is of excepti" https://en.wikipedia.org/wiki/Landrace,"A landrace is a domesticated, locally adapted, often traditional variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism, and due to isolation from other populations of the species. Landraces are distinct from cultivars and from standard breeds. A significant proportion of farmers around the world grow landrace crops, and most plant landraces are associated with traditional agricultural systems. Landraces of many crops have probably been grown for millennia. Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity, because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use. Plant landraces have been the subject of more academic research, and the majority of academic literature about landraces is focused on botany in agriculture, not animal husbandry. Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure. Characteristics There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the cla" https://en.wikipedia.org/wiki/IP%20connectivity%20access%20network,"IP-CAN (or IP connectivity access network) is an access network that provides Internet Protocol (IP) connectivity. The term is usually used in cellular context and usually refers to 3GPP access networks such as GPRS or EDGE, but can be also used to describe wireless LAN (WLAN) or DSL networks. It was introduced in 3GPP IP Multimedia Subsystem (IMS) standards as a generic term referring to any kind of IP-based access network as IMS put much emphasis on access and service network separation. 
See also IP multimedia subsystem Radio access network" https://en.wikipedia.org/wiki/Projection%20%28mathematics%29,"In mathematics, a projection is an idempotent mapping of a set (or other mathematical structure) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a closed disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are: The projection from a point onto a plane or central projection: If C is a point, called the center of projection, then the projection of a point P different from C onto a plane that does not contain C is the intersection of the line CP with the plane. The points P such that the line CP is parallel to the plane does not have any image by the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point C itself is not defined. The projection parallel to a direction D, onto a plane or parallel projection: The image of a point P is the intersection with the plane of the line parallel to D passing through P. See for an accurate definition, generalized to any dimension. The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the con" https://en.wikipedia.org/wiki/Baking,"Baking is a method of preparing food that uses dry heat, typically in an oven, but can also be done in hot ashes, or on hot stones. The most common baked item is bread, but many other types of foods can be baked. Heat is gradually transferred ""from the surface of cakes, cookies, and pieces of bread to their center. As heat travels through, it transforms batters and doughs into baked goods and more with a firm dry crust and a softer center"". Baking can be combined with grilling to produce a hybrid barbecue variant by using both methods simultaneously, or one after the other. Baking is related to barbecuing because the concept of the masonry oven is similar to that of a smoke pit. Baking has traditionally been performed at home for day-to-day meals and in bakeries and restaurants for local consumption. When production was industrialized, baking was automated by machines in large factories. The art of baking remains a fundamental skill and is important for nutrition, as baked goods, especially bread, are a common and important food, both from an economic and cultural point of view. A person who prepares baked goods as a profession is called a baker. On a related note, a pastry chef is someone who is trained in the art of making pastries, cakes, desserts, bread, and other baked goods. 
Foods and techniques All types of food can be baked, but some require special care and protection from direct heat. Various techniques have been developed to provide this protection. In addition to bread, baking is used to prepare cakes, pastries, pies, tarts, quiches, cookies, scones, crackers, pretzels, and more. These popular items are known collectively as ""baked goods,"" and are often sold at a bakery, which is a store that carries only baked goods, or at markets, grocery stores, farmers markets or through other venues. Meat, including cured meats, such as ham can also be baked, but baking is usually reserved for meatloaf, smaller cuts of whole meats, or whole meats that contain s" https://en.wikipedia.org/wiki/Curie%27s%20principle,"Curie's principle, or Curie's symmetry principle, is a maxim about cause and effect formulated by Pierre Curie in 1894: The idea was based on the ideas of Franz Ernst Neumann and Bernhard Minnigerode. Thus, it is sometimes known as the Neuman–Minnigerode–Curie Principle." https://en.wikipedia.org/wiki/Table%20of%20mathematical%20symbols%20by%20introduction%20date,"The following table lists many specialized symbols commonly used in modern mathematics, ordered by their introduction date. The table can also be ordered alphabetically by clicking on the relevant header title. See also History of mathematical notation History of the Hindu–Arabic numeral system Glossary of mathematical symbols List of mathematical symbols by subject Mathematical notation Mathematical operators and symbols in Unicode Sources External links RapidTables: Math Symbols List Jeff Miller: Earliest Uses of Various Mathematical Symbols Symbols by introduction date Symbols" https://en.wikipedia.org/wiki/Kronecker%20delta,"In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: or with use of Iverson brackets: For example, because , whereas because . The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above. In linear algebra, the identity matrix has entries equal to the Kronecker delta: where and take the values , and the inner product of vectors can be written as Here the Euclidean vectors are defined as -tuples: and and the last step is obtained by using the values of the Kronecker delta to reduce the summation over . It is common for and to be restricted to a set of the form or , but the Kronecker delta can be defined on an arbitrary set. Properties The following equations are satisfied: Therefore, the matrix can be considered as an identity matrix. Another useful representation is the following form: This can be derived using the formula for the geometric series. Alternative notation Using the Iverson bracket: Often, a single-argument notation is used, which is equivalent to setting : In linear algebra, it can be thought of as a tensor, and is written . Sometimes the Kronecker delta is called the substitution tensor. Digital signal processing In the study of digital signal processing (DSP), the unit sample function represents a special case of a 2-dimensional Kronecker delta function where the Kronecker indices include the number zero, and where one of the indices is zero. In this case: Or more generally where: However, this is only a special case. 
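As a concrete illustration of the definition above, here is a minimal Python sketch (not taken from the article) of the two-argument Kronecker delta, the identity matrix it generates, and the single-argument "unit sample" form used in digital signal processing:

```python
import numpy as np

def kronecker_delta(i: int, j: int) -> int:
    """Two-argument Kronecker delta: 1 if the indices are equal, 0 otherwise."""
    return 1 if i == j else 0

# The n x n identity matrix has entries delta_{ij}.
n = 4
identity = np.array([[kronecker_delta(i, j) for j in range(n)] for i in range(n)])
assert np.array_equal(identity, np.eye(n, dtype=int))

def unit_sample(k: int) -> int:
    """Single-argument form delta[k] = delta_{k,0}: the DSP unit sample (unit impulse)."""
    return kronecker_delta(k, 0)

print([unit_sample(k) for k in range(-3, 4)])  # [0, 0, 0, 1, 0, 0, 0]
```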
In tensor calculus, it is more common to number basis vectors in a particular dimension starting with index 1, rather than index 0. In this case, the relation does not exist, and in fact, the Kronecker delta function and the unit sample function are d" https://en.wikipedia.org/wiki/Elementary%20proof,"In mathematics, an elementary proof is a mathematical proof that only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. Historically, it was once thought that certain theorems, like the prime number theorem, could only be proved by invoking ""higher"" mathematical theorems or techniques. However, as time progresses, many of these results have also been subsequently reproven using only elementary techniques. While there is generally no consensus as to what counts as elementary, the term is nevertheless a common part of the mathematical jargon. An elementary proof is not necessarily simple, in the sense of being easy to understand or trivial. In fact, some elementary proofs can be quite complicated — and this is especially true when a statement of notable importance is involved. Prime number theorem The distinction between elementary and non-elementary proofs has been considered especially important in regard to the prime number theorem. This theorem was first proved in 1896 by Jacques Hadamard and Charles Jean de la Vallée-Poussin using complex analysis. Many mathematicians then attempted to construct elementary proofs of the theorem, without success. G. H. Hardy expressed strong reservations; he considered that the essential ""depth"" of the result ruled out elementary proofs: However, in 1948, Atle Selberg produced new methods which led him and Paul Erdős to find elementary proofs of the prime number theorem. A possible formalization of the notion of ""elementary"" in connection to a proof of a number-theoretical result is the restriction that the proof can be carried out in Peano arithmetic. Also in that sense, these proofs are elementary. Friedman's conjecture Harvey Friedman conjectured, ""Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in elementar" https://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang%20transform,"The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool. The Hilbert–Huang transform (HHT), a NASA designated name, was proposed by Norden E. Huang et al. (1996, 1998, 1999, 2003, 2012). It is the result of the empirical mode decomposition (EMD) and the Hilbert spectral analysis (HSA). The HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMF) with a trend, and applies the HSA method to the IMFs to obtain instantaneous frequency data. Since the signal is decomposed in time domain and the length of the IMFs is the same as the original signal, HHT preserves the characteristics of the varying frequency. This is an important advantage of HHT since a real-world signal usually has multiple causes happening in different time intervals. 
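As a rough illustration of the decomposition step just described, the Python sketch below performs a single EMD "sifting" pass: it fits cubic-spline envelopes through the local maxima and minima and subtracts their mean. This follows the commonly cited procedure rather than anything stated verbatim in this article; real implementations add stopping criteria and more careful end-point handling.

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift_once(t, x):
    """One EMD sifting iteration: subtract the mean of the upper and lower
    cubic-spline envelopes from the signal (endpoints included as knots)."""
    maxima = argrelextrema(x, np.greater)[0]
    minima = argrelextrema(x, np.less)[0]
    upper_idx = np.concatenate(([0], maxima, [len(x) - 1]))
    lower_idx = np.concatenate(([0], minima, [len(x) - 1]))
    upper = CubicSpline(t[upper_idx], x[upper_idx])(t)
    lower = CubicSpline(t[lower_idx], x[lower_idx])(t)
    return x - (upper + lower) / 2.0

# Toy nonstationary-looking test signal: a slow oscillation plus a faster one.
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

h = x.copy()
for _ in range(10):          # a fixed, small number of sifting passes
    h = sift_once(t, h)      # h approaches the highest-frequency IMF
residual = x - h             # the remainder feeds the next round of sifting
```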
The HHT provides a new method of analyzing nonstationary and nonlinear time series data. Definition Empirical mode decomposition The fundamental part of the HHT is the empirical mode decomposition (EMD) method. Breaking down signals into various components, EMD can be compared with other analysis methods such as Fourier transform and Wavelet transform. Using the EMD method, any complicated data set can be decomposed into a finite and often small number of components. These components form a complete and nearly orthogonal basis for the original signal. In addition, they can be described as intrinsic mode functions (IMF). Because the first IMF usually carries the most oscillating (high-frequency) components, it can be rejected to remove high-frequency components (e.g., random noise). EMD based smoothing algorithms have been widely used in seismic data processi" https://en.wikipedia.org/wiki/Peer%20group%20%28computer%20networking%29,"In computer networking, a peer group is a group of functional units in the same layer (see e.g. OSI model) of a network, by analogy with peer group. See also peer-to-peer (P2P) networking which is a specific type of networking relying on basically equal end hosts rather than on a hierarchy of devices. Computer networking" https://en.wikipedia.org/wiki/Probabilistically%20checkable%20proof,"In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way. Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP. Definition Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x with length n, which might be false, the prover produces a proof π which states x solves L (x ∈ L, the proof is a string ∈ Σ*). And the verifier is a randomized oracle Turing Machine V (the verifier) that checks the proof π for the statement that x solves L(or x ∈ L) and decides whether to accept the" https://en.wikipedia.org/wiki/Western%20Digital%20FD1771,"The FD1771, sometimes WD1771, is the first in a line of floppy disk controllers produced by Western Digital. 
It uses single density FM encoding introduced in the IBM 3740. Later models in the series added support for MFM encoding and increasingly added onboard circuitry that formerly had to be implemented in external components. Originally packaged as 40-pin dual in-line package (DIP) format, later models moved to a 28-pin format that further lowered implementation costs. Derivatives The FD1771 was succeeded by many derivatives that were mostly software-compatible: The FD1781 was designed for double density, but required external modulation and demodulation circuitry, so it could support MFM, M2FM, GCR or other double-density encodings. The FD1791-FD1797 series added internal support for double density (MFM) modulation, compatible with the IBM System/34 disk format. They required an external data separator. The WD1761-WD1767 series were versions of the FD179x series rated for a maximum clock frequency of 1 MHz, resulting in a data rate limit of 125 kbit/s for single density and 250 kbit/s for double density, thus preventing them from being used for 8-in (200 mm) floppy drives or the later ""high-density"" or 90 mm floppy drives. These were sold at a lower price point and widely used in home computer floppy drives. The WD2791-WD2797 series added an internal data separator using an analog phase-locked loop, with some external passive components required for the VCO. They took a 1 MHz or 2 MHz clock and were intended for and drives. The WD1770, WD1772, and WD1773 added an internal digital data separator and write precompensator, eliminating the need for external passive components but raising the clock rate requirement to 8 MHz. They supported double density, despite the apparent regression of the part number, and were packaged in 28-pin DIP packages. The WD1772PH02-02 was a version of the chip that Atari fitted to the Atari STE which supported high density (500 " https://en.wikipedia.org/wiki/Audio%20leveler,"An audio leveler performs an audio process similar to compression, which is used to reduce the dynamic range of a signal, so that the quietest portion of the signal is loud enough to hear and the loudest portion is not too loud. Levelers work especially well with vocals, as there are huge dynamic differences in the human voice and levelers work in such a way as to sound very natural, letting the character of the sound change with the different levels but still maintaining a predictable and usable dynamic range. A leveler is different from a compressor in that the ratio and threshold are controlled with a single control. External links TLA-100 Tube Levelling Amplifier by Summit Audio Signal processing" https://en.wikipedia.org/wiki/Placement%20%28electronic%20design%20automation%29,"Placement is an essential step in electronic design automation — the portion of the physical design flow that assigns exact locations for various circuit components within the chip's core area. An inferior placement assignment will not only affect the chip's performance but might also make it non-manufacturable by producing excessive wire-length, which is beyond available routing resources. Consequently, a placer must perform the assignment while optimizing a number of objectives to ensure that a circuit meets its performance demands. Together, the placement and routing steps of IC design are known as place and route. A placer takes a given synthesized circuit netlist together with a technology library and produces a valid placement layout. 
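Wire-length is usually estimated during placement with a cheap proxy. The Python sketch below computes the half-perimeter wirelength (HPWL) of each net's bounding box for a toy placement; the cell and net names are made up, and HPWL is offered only as a common illustrative objective, not as the method of any particular placer.

```python
# Half-perimeter wirelength (HPWL): a standard proxy for routed wire-length.
# Illustrative sketch only; the cell and net names are invented.
placement = {            # cell -> (x, y) location, in site/row units
    "u1": (0, 0),
    "u2": (3, 1),
    "u3": (1, 4),
    "u4": (6, 2),
}
nets = {                 # net -> the cells (pins) it connects
    "n1": ["u1", "u2", "u3"],
    "n2": ["u2", "u4"],
}

def hpwl(net_pins):
    xs = [placement[c][0] for c in net_pins]
    ys = [placement[c][1] for c in net_pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

total = sum(hpwl(pins) for pins in nets.values())
print(total)   # 7 + 4 = 11 for this toy example
```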
The layout is optimized according to the aforementioned objectives and ready for cell resizing and buffering — a step essential for timing and signal integrity satisfaction. Clock-tree synthesis and Routing follow, completing the physical design process. In many cases, parts of, or the entire, physical design flow are iterated a number of times until design closure is achieved. In the case of application-specific integrated circuits, or ASICs, the chip's core layout area comprises a number of fixed height rows, with either some or no space between them. Each row consists of a number of sites which can be occupied by the circuit components. A free site is a site that is not occupied by any component. Circuit components are either standard cells, macro blocks, or I/O pads. Standard cells have a fixed height equal to a row's height, but have variable widths. The width of a cell is an integral number of sites. On the other hand, blocks are typically larger than cells and have variable heights that can stretch a multiple number of rows. Some blocks can have preassigned locations — say from a previous floorplanning process — which limit the placer's task to assigning locations for just the cells. In this case, the blocks are typicall" https://en.wikipedia.org/wiki/Bhabha%20scattering,"In quantum electrodynamics, Bhabha scattering is the electron-positron scattering process: There are two leading-order Feynman diagrams contributing to this interaction: an annihilation process and a scattering process. Bhabha scattering is named after the Indian physicist Homi J. Bhabha. The Bhabha scattering rate is used as a luminosity monitor in electron-positron colliders. Differential cross section To leading order, the spin-averaged differential cross section for this process is where s,t, and u are the Mandelstam variables, is the fine-structure constant, and is the scattering angle. This cross section is calculated neglecting the electron mass relative to the collision energy and including only the contribution from photon exchange. This is a valid approximation at collision energies small compared to the mass scale of the Z boson, about 91 GeV; at higher energies the contribution from Z boson exchange also becomes important. Mandelstam variables In this article, the Mandelstam variables are defined by {| |align=""right""| |align=""right""| |align=""right""| |align=""right""| |align=""right""| |rowspan=""3""|         |- |align=""right""| |align=""right""| |align=""right""| | | |- |align=""right""| |align=""right""| |align=""right""| | | |} where the approximations are for the high-energy (relativistic) limit. Deriving unpolarized cross section Matrix elements Both the scattering and annihilation diagrams contribute to the transition matrix element. By letting k and k' represent the four-momentum of the positron, while letting p and p' represent the four-momentum of the electron, and by using Feynman rules one can show the following diagrams give these matrix elements: {| border=""0"" cellpadding=""5"" cellspacing=""0"" | |align=""center"" | |align=""center"" | |Where we use: are the Gamma matrices, are the four-component spinors for fermions, while are the four-component spinors for anti-fermions (see Four spinors). |- | |align=""center"" | (scattering) |align=""center"" | (anni" https://en.wikipedia.org/wiki/Dottie%20number,"In mathematics, the Dottie number is a constant that is the unique real root of the equation , where the argument of is in radians. The decimal expansion of the Dottie number is . 
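A minimal numerical sketch (plain fixed-point iteration in Python, not taken from the article) recovers the constant to double precision:

```python
import math

# Iterating x -> cos(x) converges to the Dottie number from any real start,
# because it is the attracting fixed point of cos (argument in radians).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)                      # ~0.7390851332151607

# Equivalent check with a general-purpose root finder on cos(x) - x = 0.
from scipy.optimize import brentq
print(brentq(lambda t: math.cos(t) - t, 0.0, 1.0))
```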
Since is decreasing and its derivative is non-zero at , it only crosses zero at one point. This implies that the equation has only one real solution. It is the single real-valued fixed point of the cosine function and is a nontrivial example of a universal attracting fixed point. It is also a transcendental number because of the Lindemann-Weierstrass theorem. The generalised case for a complex variable has infinitely many roots, but unlike the Dottie number, they are not attracting fixed points. Using the Taylor series of the inverse of at (or equivalently, the Lagrange inversion theorem), the Dottie number can be expressed as the infinite series where each is a rational number defined for odd n as The name of the constant originates from a professor of French named Dottie who observed the number by repeatedly pressing the cosine button on her calculator. If a calculator is set to take angles in degrees, the sequence of numbers will instead converge to , the root of . Closed form The Dottie number can be expressed as where is the inverse regularized Beta function. This value can be obtained using Kepler's equation. In Microsoft Excel and LibreOffice Calc spreadsheets, the Dottie number can be expressed in closed form as . In the Mathematica computer algebra system, the Dottie number is . Integral representations Dottie number can be represented as . Another integral representation: Notes" https://en.wikipedia.org/wiki/Probabilistic%20method,"In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error. This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory. Introduction If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction. Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma. 
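As a worked illustration of the expectation argument sketched above, here is a standard textbook example (requires amsmath/amssymb); it is separate from the Erdős examples discussed next.

```latex
% Expectation argument: every graph G with m edges contains a bipartite
% subgraph with at least m/2 edges.
\begin{align*}
&\text{Place each vertex of } G \text{ in part } A \text{ or } B
  \text{ independently with probability } \tfrac12.\\
&\text{For any edge } e:\quad \Pr[e \text{ crosses the cut}] = \tfrac12,
  \qquad\text{so}\qquad
  \mathbb{E}\bigl[\#\text{crossing edges}\bigr]
  = \sum_{e \in E(G)} \tfrac12 = \frac{m}{2}.\\
&\text{Hence some outcome attains at least } \tfrac{m}{2}
  \text{ crossing edges, i.e.\ a bipartite subgraph with } \ge \tfrac{m}{2}
  \text{ edges.}
\end{align*}
```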
Two examples due to Erdős Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamilton" https://en.wikipedia.org/wiki/Motorola%2068881,"The Motorola 68881 and Motorola 68882 are floating-point units (FPUs) used in some computer systems in conjunction with Motorola's 32-bit 68020 or 68030 microprocessors. These coprocessors are external chips, designed before floating point math became standard on CPUs. The Motorola 68881 was introduced in 1984. The 68882 is a higher performance version produced later. Overview The 68020 and 68030 CPUs were designed with the separate 68881 chip in mind. Their instruction sets reserved the ""F-line"" instructions – that is, all opcodes beginning with the hexadecimal digit ""F"" could either be forwarded to an external coprocessor or be used as ""traps"" which would throw an exception, handing control to the computer's operating system. If an FPU is not present in the system, the OS would then either call an FPU emulator to execute the instruction's equivalent using 68020 integer-based software code, return an error to the program, terminate the program, or crash and require a reboot. Architecture The 68881 has eight 80-bit data registers (a 64-bit mantissa plus a sign bit, and a 15-bit signed exponent). It allows seven different modes of numeric representation, including single-precision floating point, double-precision floating point, extended-precision floating point, integers as 8-, 16- and 32-bit quantities and a floating-point Binary-coded decimal format. The binary floating point formats are as defined by the IEEE 754 floating-point standard. It was designed specifically for floating-point math and is not a general-purpose CPU. For example, when an instruction requires any address calculations, the main CPU handles them before the 68881 takes control. The CPU/FPU pair are designed such that both can run at the same time. When the CPU encounters a 68881 instruction, it hands the FPU all operands needed for that instruction, and then the FPU releases the CPU to go on and execute the next instruction. 68882 The 68882 is an improved version of the 68881, with b" https://en.wikipedia.org/wiki/Nonlinear%20system,"In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems. Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. 
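A small numerical sketch of the linearization idea discussed in the next paragraph: the simple pendulum obeys the nonlinear equation theta'' = -sin(theta), and its linearization theta'' = -theta tracks it well only for small angles. This uses SciPy's general-purpose integrator and is an illustration, not part of the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simple pendulum: theta'' = -sin(theta) (nonlinear) vs. theta'' = -theta (linearized).
def pendulum(t, y):
    theta, omega = y
    return [omega, -np.sin(theta)]

def linearized(t, y):
    theta, omega = y
    return [omega, -theta]

t_eval = np.linspace(0, 10, 200)
for theta0 in (0.1, 2.0):                       # small vs. large initial angle (radians)
    nl = solve_ivp(pendulum,   (0, 10), [theta0, 0.0], t_eval=t_eval).y[0]
    li = solve_ivp(linearized, (0, 10), [theta0, 0.0], t_eval=t_eval).y[0]
    print(theta0, np.max(np.abs(nl - li)))      # tiny for 0.1 rad, large for 2.0 rad
```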
In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it. As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part" https://en.wikipedia.org/wiki/LGM-30%20Minuteman,"The LGM-30 Minuteman is an American land-based intercontinental ballistic missile (ICBM) in service with the Air Force Global Strike Command. , the LGM-30G Minuteman III version is the only land-based ICBM in service in the United States and represents the land leg of the U.S. nuclear triad, along with the Trident II submarine-launched ballistic missile (SLBM) and nuclear weapons carried by long-range strategic bombers. Development of the Minuteman began in the mid-1950s when basic research indicated that a solid-fuel rocket motor could stand ready to launch for long periods of time, in contrast to liquid-fueled rockets that required fueling before launch and so might be destroyed in a surprise attack. The missile was named for the colonial minutemen of the American Revolutionary War, who could be ready to fight on short notice. The Minuteman entered service in 1962 as a deterrence weapon that could hit Soviet cities with a second strike and countervalue counterattack if the U.S. was attacked. However, the development of the United States Navy (USN) UGM-27 Polaris, which addressed the same role, allowed the Air Force to modify the Minuteman, boosting its accuracy enough to attack hardened military targets, including Soviet missile silos. The Minuteman II entered service in 1965 with a host of upgrades to improve its accuracy and survivability in the face of an anti-ballistic missile (ABM) system the Soviets were known to be developing. In 1970, the Minuteman III became the first deployed ICBM with multiple independently targetable reentry vehicles (MIRV): three smaller warheads that improved the missile's ability to strike targets defended by ABMs. They were initially armed with the W62 warhead with a yield of 170 kilotons. By the 1970s, 1,000 Minuteman missiles were deployed. This force has shrunk to 400 Minuteman III missiles , deployed in missile silos around Malmstrom AFB, Montana; Minot AFB, North Dakota; and Francis E. Warren AFB, Wyoming. The Minuteman III" https://en.wikipedia.org/wiki/Network%20Protocol%20Virtualization,"Network Protocol Virtualization or Network Protocol Stack Virtualization is a concept of providing network connections as a service, without concerning application developer to decide the exact communication stack composition. Concept Network Protocol Virtualization (NPV) was firstly proposed by Heuschkel et al. in 2015 as a rough sketch as part of a transition concept for network protocol stacks. The concept evolved and was published in a deployable state in 2018. The key idea is to decouple applications from their communication stacks. 
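Purely as a hypothetical illustration of that decoupling, the Python sketch below contrasts today's explicit stack choice in the socket API with an invented requirements-based interface; the names Requirements and open_channel are made up for this sketch and do not correspond to any published NPV implementation.

```python
import socket

# Today: the application hard-codes the stack (here IPv4 + TCP).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.close()

# Hypothetical NPV-style interface (names invented for illustration): the
# application states requirements, and a runtime management component picks
# and adapts the concrete protocol stack behind the scenes.
class Requirements:
    def __init__(self, reliable=True, low_latency=False):
        self.reliable = reliable
        self.low_latency = low_latency

def open_channel(destination: str, req: Requirements):
    """Placeholder: a real NPV runtime would select IPv4/IPv6, UDP/TCP and any
    additional layers at runtime based on the observed network environment."""
    raise NotImplementedError("illustrative sketch only")
```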
Today the socket API requires application developer to compose the communication stack by hand by choosing between IPv4/IPv6 and UDP/TCP. NPV proposes the network protocol stack should be tailored to the observed network environment (e.g. link layer technology, or current network performance). Thus, the network stack should not be composed at development time, but at runtime and needs the possibility to be adapted if needed. Additionally the decoupling relaxes the chains of the ISO OSI network layer model, and thus enables alternative concepts of communication stacks. Heuschkel et al. proposes the concept of Application layer middleboxes as example to add additional layers to the communication stack to enrich the communication with useful services (e.g. HTTP optimizations) The Figure illustrates the dataflow. Applications interface to the NPV software through some kind of API. Heuschkel et al. proposed socket API equivalent replacements but envision more sophisticated interfaces for future applications. The application payload is assigned by a scheduler to one (of potentially many) communication stack to get processed to network packets, that get sent using networking hardware. A management component decide how communication stacks get composed and how the scheduling scheme should be. To support decisions a management interface is provided to integrate the management system in software-defined networking contexts. NPV has bee" https://en.wikipedia.org/wiki/Information-centric%20networking%20caching%20policies,"In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructionsor algorithmsthat a computer program or a hardware-maintained structure can follow in order to manage a cache of information stored on the computer. When the cache is full, the algorithm must choose which items to discard to make room for the new ones. Due to the inherent caching capability of nodes in Information-centric networking ICN, the ICN can be viewed as a loosely connect network of caches, which has unique requirements of Caching policies. Unlike proxy servers, in Information-centric networking the cache is a network level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes further impose different kind of requirements on the content eviction policies. In particular, eviction policies for Information-centric networking should be fast and lightweight. Various cache replication and eviction schemes for different Information-centric networking architectures and applications are proposed. Policies Time aware least recently used (TLRU) The Time aware Least Recently Used (TLRU) is a variant of LRU designed for the situation where the stored contents in cache have a valid life time. The algorithm is suitable in network cache applications, such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: TTU (Time to Use). TTU is a time stamp of a content/page which stipulates the usability time for the content based on the locality of the content and the content publisher announcement. Owing to this locality based time stamp, TTU provides more control to the local administrator to regulate in network storage. In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. 
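A minimal Python sketch of one plausible reading of TLRU follows: entries past their local TTU are evicted first, with plain LRU as the fallback. The local-TTU computation here is a placeholder, and the published policy may differ in its details.

```python
import time
from collections import OrderedDict

class TLRUCache:
    """Sketch of a Time-aware LRU cache: each item carries a time-to-use (TTU);
    expired items are evicted first, otherwise the least recently used one is."""
    def __init__(self, capacity, default_ttu=60.0):
        self.capacity = capacity
        self.default_ttu = default_ttu
        self.store = OrderedDict()          # key -> (value, expiry_time)

    def _local_ttu(self, publisher_ttu):
        # Placeholder: a real node would adjust the publisher's TTU using
        # locality information, as described in the text.
        return min(publisher_ttu, self.default_ttu)

    def put(self, key, value, publisher_ttu):
        if key in self.store:
            del self.store[key]
        elif len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = (value, time.time() + self._local_ttu(publisher_ttu))

    def get(self, key):
        if key not in self.store:
            return None
        value, expiry = self.store[key]
        if expiry < time.time():            # stale content is treated as a miss
            del self.store[key]
            return None
        self.store.move_to_end(key)         # refresh recency, as in plain LRU
        return value

    def _evict(self):
        for key, (_, expiry) in list(self.store.items()):
            if expiry < time.time():        # prefer evicting expired content
                del self.store[key]
                return
        self.store.popitem(last=False)      # otherwise drop the LRU entry
```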
The local TTU" https://en.wikipedia.org/wiki/Software%20construction,"Software construction is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging. It is linked to all the other software engineering disciplines, most strongly to software design and software testing. Software construction fundamentals Minimizing complexity The need to reduce complexity is mainly driven by limited ability of most people to hold complex structures and information in their working memories. Reduced complexity is achieved through emphasizing the creation of code that is simple and readable rather than clever. Minimizing complexity is accomplished through making use of standards, and through numerous specific techniques in coding. It is also supported by the construction-focused quality techniques. Anticipating change Anticipating change helps software engineers build extensible software, which means they can enhance a software product without disrupting the underlying structure. Research over 25 years showed that the cost of rework can be 10 to 100 times (5 to 10 times for smaller projects) more expensive than getting the requirements right the first time. Given that 25% of the requirements change during development on average project, the need to reduce the cost of rework elucidates the need for anticipating change. Constructing for verification Constructing for verification means building software in such a way that faults can be ferreted out readily by the software engineers writing the software, as well as during independent testing and operational activities. Specific techniques that support constructing for verification include following coding standards to support code reviews, unit testing, organizing code to support automated testing, and restricted use of complex or hard-to-understand language structures, among others. Reuse Systematic reuse can enable significant software productivity, quality, an" https://en.wikipedia.org/wiki/Banana%20peel,"A banana peel, called banana skin in British English, is the outer covering of the banana fruit. Banana peels are used as food for animals, an ingredient in cooking, in water purification, for manufacturing of several biochemical products as well as for jokes and comical situations. There are several methods to remove a peel from a banana. Use Bananas are a popular fruit consumed worldwide with a yearly production of over 165 million tonnes in 2011. Once the peel is removed, the fruit can be eaten raw or cooked and the peel is generally discarded. Because of this removal of the banana peel, a significant amount of organic waste is generated. Banana peels are sometimes used as feedstock for cattle, goats, pigs, monkeys, poultry, rabbits, fish, zebras and several other species, typically on small farms in regions where bananas are grown. There are some concerns over the impact of tannins contained in the peels on animals that consume them. The nutritional value of banana peel depends on the stage of maturity and the cultivar; for example plantain peels contain less fibre than dessert banana peels, and lignin content increases with ripening (from 7 to 15% dry matter). On average, banana peels contain 6-9% dry matter of protein and 20-30% fibre (measured as NDF). Green plantain peels contain 40% starch that is transformed into sugars after ripening. 
Green banana peels contain much less starch (about 15%) when green than plantain peels, while ripe banana peels contain up to 30% free sugars. Banana peels are also used for water purification, to produce ethanol, cellulase, laccase, as fertilizer and in composting. Culinary use Cooking with banana peel is common place in Southeast Asian, Indian and Venezuelan cuisine where the peel of bananas and plantains is used in recipes. In April 2019, a vegan pulled pork recipe using banana peel by food blogger Melissa Copeland aka The Stingy Vegan went viral. In 2020, The Great British Bake Off winner Nadiya Hussain revealed " https://en.wikipedia.org/wiki/Network%20equipment%20provider,"Network equipment providers (NEPs) – sometimes called telecommunications equipment manufacturers (TEMs) – sell products and services to communication service providers such as fixed or mobile operators as well as to enterprise customers. NEP technology allows for calls on mobile phones, Internet surfing, joining a conference calls, or watching video on demand through IPTV (internet protocol TV). The history of the NEPs goes back to the mid-19th century when the first telegraph networks were set up. Some of these players still exist today. Telecommunications equipment manufacturers The terminology of the traditional telecommunications industry has rapidly evolved during the Information Age. The terms ""Network"" and ""Telecoms"" are often used interchangeably. The same is true for ""provider"" and ""manufacturer"". Historically, NEPs sell integrated hardware/software systems to carriers such as NTT-DoCoMo, ATT, Sprint, and so on. They purchase hardware from TEMs (telecom equipment manufacturers), such as Vertiv, Kontron, and NEC, to name a few. TEMs are responsible for manufacturing the hardware, devices, and equipment the telecommunications industry requires. The distinction between NEP and TEM is sometimes blurred, because all the following phrases may imply NEP: Telecommunications equipment provider Telecommunications equipment industry Telecommunications equipment company Telecommunications equipment manufacturer (TEM) Telecommunications equipment technology Network equipment provider (NEP) Network equipment industry Network equipment companies Network equipment manufacturer Network equipment technology Services This is a highly competitive industry that includes telephone, cable, and data services segments. Products and services include: Mobile networks like GSM (Global System for Mobile Communication), Enhanced Data Rates for GSM Evolution (EDGE) or GPRS (General Packet Radio Service). Networks of this kind are typically also known as 2G and 2.5G net" https://en.wikipedia.org/wiki/Ansatz,"In physics and mathematics, an ansatz (; , meaning: ""initial placement of a tool at a work piece"", plural ansätze ; ) is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results. Use An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It typically provides an initial estimate or framework to the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a ""trial answer"" and an important technique in solving differential equations). 
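A small worked example of the exponential ansatz mentioned later in this entry, applied to a homogeneous linear differential equation with constant coefficients (requires amsmath):

```latex
% Exponential ansatz: guess y(x) = e^{\lambda x} and substitute.
\begin{align*}
y'' - 3y' + 2y &= 0, \qquad y(x) = e^{\lambda x} \\
(\lambda^2 - 3\lambda + 2)\,e^{\lambda x} &= 0
  \;\Longrightarrow\; \lambda \in \{1, 2\} \\
y(x) &= c_1 e^{x} + c_2 e^{2x}.
\end{align*}
```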
After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find. It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available. Examples Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ansätze and then fit the parameters. Another example could be the mass, energy, and entropy balance equations that, considered simultaneous for purposes of the elementary operations of linear algebra, are the ansatz to most basic problems of thermodynamics. Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equatio" https://en.wikipedia.org/wiki/Parts-per%20notation,"In science and engineering, the parts-per notation is a set of pseudo-units to describe small values of miscellaneous dimensionless quantities, e.g. mole fraction or mass fraction. Since these fractions are quantity-per-quantity measures, they are pure numbers with no associated units of measurement. Commonly used are parts-per-million (ppm, ), parts-per-billion (ppb, ), parts-per-trillion (ppt, ) and parts-per-quadrillion (ppq, ). This notation is not part of the International System of Units (SI) system and its meaning is ambiguous. Applications Parts-per notation is often used describing dilute solutions in chemistry, for instance, the relative abundance of dissolved minerals or pollutants in water. The quantity ""1 ppm"" can be used for a mass fraction if a water-borne pollutant is present at one-millionth of a gram per gram of sample solution. When working with aqueous solutions, it is common to assume that the density of water is 1.00 g/mL. Therefore, it is common to equate 1 kilogram of water with 1 L of water. Consequently, 1 ppm corresponds to 1 mg/L and 1 ppb corresponds to 1 μg/L. Similarly, parts-per notation is used also in physics and engineering to express the value of various proportional phenomena. For instance, a special metal alloy might expand 1.2 micrometers per meter of length for every degree Celsius and this would be expressed as Parts-per notation is also employed to denote the change, stability, or uncertainty in measurements. For instance, the accuracy of land-survey distance measurements when using a laser rangefinder might be 1 millimeter per kilometer of distance; this could be expressed as ""Accuracy = 1 ppm."" Parts-per notations are all dimensionless quantities: in mathematical expressions, the units of measurement always cancel. In fractions like ""2 nanometers per meter"" so the quotients are pure-number coefficients with positive values less than or equal to 1. When parts-per notations, including the percent symbol (%), are used i" https://en.wikipedia.org/wiki/Beat%20detection,"In signal analysis, beat detection is using computer software or computer hardware to detect the beat of a musical score. 
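One of the simplest approaches, of the sound-energy kind mentioned in the next paragraph, is to flag windows whose energy jumps above a recent average. The Python sketch below is only a crude illustration; the window length, history size and sensitivity constant are arbitrary choices, not values from the literature.

```python
import numpy as np

def detect_beats(samples, sample_rate, window_ms=20, history_windows=43, sensitivity=1.3):
    """Flag windows whose energy exceeds `sensitivity` times the recent average energy.
    A crude energy-based detector; real systems use comb filters or onset models."""
    win = int(sample_rate * window_ms / 1000)
    n_windows = len(samples) // win
    energies = np.array([np.sum(samples[i*win:(i+1)*win] ** 2) for i in range(n_windows)])
    beats = []
    for i in range(history_windows, n_windows):
        local_avg = energies[i - history_windows:i].mean()
        if energies[i] > sensitivity * local_avg:
            beats.append(i * win / sample_rate)   # beat time in seconds
    return beats

# Toy test signal: quiet noise with loud clicks every 0.5 s.
sr = 44100
x = 0.01 * np.random.randn(sr * 3)
for click in np.arange(0.5, 3.0, 0.5):
    idx = int(click * sr)
    x[idx:idx + 200] += 0.8
print(detect_beats(x, sr))
```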
There are many methods available and beat detection is always a tradeoff between accuracy and speed. Beat detectors are common in music visualization software such as some media player plugins. The algorithms used may utilize simple statistical models based on sound energy or may involve sophisticated comb filter networks or other means. They may be fast enough to run in real time or may be so slow as to only be able to analyze short sections of songs. See also Pitch detection External links Beat This > Beat Detection Algorithm Audio Analysis using the Discrete Wavelet Transform Signal processing" https://en.wikipedia.org/wiki/PHY-Level%20Collision%20Avoidance,"PHY-Level Collision Avoidance (PLCA) is a component of the Ethernet reconciliation sublayer (between the PHY and the MAC) defined within IEEE 802.3 clause 148. The purpose of PLCA is to avoid the shared medium collisions and associated retransmission overhead. PLCA is used in 802.3cg (10BASE-T1), which focuses on bringing ethernet connectivity to short-haul embedded internet of things and low throughput, noise-tolerant, industrial deployment use cases. In order for a multidrop 10BASE-T1S standard to successfully compete with CAN XL, some kind of arbitration was necessary. The linear arbitration scheme of PLCA somewhat resembles the one of the Byteflight, but PLCA was designed from scratch to accommodate the existing shared medium Ethernet MACs with their busy sensing mechanisms. Operation Under a PLCA scheme all nodes are assigned unique sequential numbers (IDs) in the range from 0 to N. Zero ID corresponds to a special ""master"" node that during the idle intervals transmits the synchronization beacon (a special heartbeat frame). After the beacon (within PLCA cycle) each node gets its transmission opportunity (TO). Each opportunity interval is very short (typically 20 bits), so overhead for the nodes that do not have anything to transmit is low. If the PLCA circuitry discovers that the node's TO cannot be used (the other node with a lower ID have started its transmission and the media is busy at the beginning of the TO for this node), it asserts the ""local collision"" input of the MAC thus delaying the transmission. The condition is cleared once the node gets its TO. A standard MAC reacts to the local collision with a backoff, however, since this is the first and only backoff for this frame, the backoff interval is equal to the smallest possible frame - and the backoff timer will definitely expire by the time the TO is granted, so there is no additional loss of performance. See also Internet of things (IOT)" https://en.wikipedia.org/wiki/Mathematical%20instrument,"A mathematical instrument is a tool or device used in the study or practice of mathematics. In geometry, construction of various proofs was done using only a compass and straightedge; arguments in these proofs relied only on idealized properties of these instruments and literal construction was regarded as only an approximation. In applied mathematics, mathematical instruments were used for measuring angles and distances, in astronomy, navigation, surveying and in the measurement of time. Overview Instruments such as the astrolabe, the quadrant, and others were used to measure and accurately record the relative positions and movements of planets and other celestial objects. The sextant and other related instruments were essential for navigation at sea. 
Most instruments are used within the field of geometry, including the ruler, dividers, protractor, set square, compass, ellipsograph, T-square and opisometer. Others are used in arithmetic (for example the abacus, slide rule and calculator) or in algebra (the integraph). In astronomy, many have said the pyramids (along with Stonehenge) were actually instruments used for tracking the stars over long periods or for the annual planting seasons. In schools The Oxford Set of Mathematical Instruments is a set of instruments used by generations of school children in the United Kingdom and around the world in mathematics and geometry lessons. It includes two set squares, a 180° protractor, a 15 cm ruler, a metal compass, a 9 cm pencil, a pencil sharpener, an eraser and a 10mm stencil. See also The Construction and Principal Uses of Mathematical Instruments Dividing engine Measuring instrument Planimeter Integraph" https://en.wikipedia.org/wiki/Federal%20Networking%20Council,"Informally established in the early 1990s, the Federal Networking Council (FNC) was later chartered by the US National Science and Technology Council's Committee on Computing, Information and Communications (CCIC) to continue to act as a forum for networking collaborations among US federal agencies to meet their research, education, and operational mission goals and to bridge the gap between the advanced networking technologies being developed by research FNC agencies and the ultimate acquisition of mature version of these technologies from the commercial sector. The FNC consisted of a group made up of representatives from the United States Department of Defense (DoD), the National Science Foundation, the Department of Energy, and the National Aeronautics and Space Administration (NASA), among others. By October 1997, the FNC advisory committee was de-chartered and many of the FNC activities were transferred to the Large Scale Networking group of the Computing, Information, and Communications (CIC) R&D subcommittee of the Networking and Information Technology Research and Development program, or the Applications Council. On October 24, 1995, the Federal Networking Council passed a resolution defining the term Internet: Resolution: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term ``Internet. ``Internet'' refers to the global information system that - (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.' Some notable members of the council advisory committee i" https://en.wikipedia.org/wiki/Edge%20states,"In physics, Edge states are the topologically protected electronic states that exist at the boundary of the material and cannot be removed without breaking the system's symmetry. Background In solid-state physics, quantum mechanics, materials science, physical chemistry and other several disciplines we study the electronic band structure of materials primarily based on the extent of the band gap, the gap between highest occupied valance bands and lowest unoccupied conduction bands. 
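The article's own examples are graphene systems; as a generic, easier-to-code illustration of boundary-localized zero-energy states, the Python sketch below diagonalizes a Su–Schrieffer–Heeger (SSH) tight-binding chain, a deliberately different and simpler model, and shows two near-zero-energy eigenstates concentrated at the ends of the chain.

```python
import numpy as np

# SSH chain with open ends: alternating hoppings t1 < t2 put the chain in its
# topological phase, which hosts two near-zero-energy boundary (edge) states.
n_cells, t1, t2 = 40, 0.5, 1.0
N = 2 * n_cells
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = (t1 if i % 2 == 0 else t2)

energies, states = np.linalg.eigh(H)
zero_modes = np.argsort(np.abs(energies))[:2]      # the two eigenvalues closest to 0
print(energies[zero_modes])                        # exponentially close to zero
weight_at_ends = np.sum(states[:4, zero_modes] ** 2) + np.sum(states[-4:, zero_modes] ** 2)
print(weight_at_ends)                              # most of the weight sits on the chain ends
```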
We can obtain the possible energy levels of the material, which give the discrete energy values of all possible states in the energy profile diagram, by solving the Hamiltonian of the system. This solution provides the corresponding energy eigenvalues and eigenvectors. Based on the energy eigenvalues, conduction bands are the high-energy states (E>0) while valence bands are the low-energy states (E<0). In some materials, for example in graphene and zigzag graphene quantum dots, there also exist energy states with energy eigenvalues exactly equal to zero (E=0), in addition to the conduction and valence bands. These states are called edge states, and they modify the electronic and optical properties of the materials significantly." https://en.wikipedia.org/wiki/Test%20data,"Test data plays a crucial role in software development by providing inputs that are used to verify the correctness, performance, and reliability of software systems. Test data encompasses various types, such as positive and negative scenarios, edge cases, and realistic user scenarios, and it aims to exercise different aspects of the software to uncover bugs and validate its behavior. By designing and executing test cases with appropriate test data, developers can identify and rectify defects, improve the quality of the software, and ensure it meets the specified requirements. Moreover, test data can be used for regression testing to validate that new code changes or enhancements do not introduce any unintended side effects or break existing functionalities. Overall, the effective utilization of test data in software development significantly contributes to the production of reliable and robust software systems. Background Some data may be used in a confirmatory way, typically to verify that a given set of inputs to a given function produces some expected result. Other data may be used in order to challenge the ability of the program to respond to unusual, extreme, exceptional, or unexpected input. Test data may be produced in a focused or systematic way (as is typically the case in domain testing), or by using other, less-focused approaches (as is typically the case in high-volume randomized automated tests). Test data may be produced by the tester, or by a program or function that aids the tester. Test data may be recorded for reuse or used only once. Test data can be created manually, by using data generation tools (often based on randomness), or be retrieved from an existing production environment. The data set can consist of synthetic (fake) data, but preferably it consists of representative (real) data. Limitations Due to privacy rules and regulations like GDPR, PCI and HIPAA, it is not allowed to use privacy-sensitive personal data for testing. But an" https://en.wikipedia.org/wiki/Electroadhesion,"Electroadhesion is the electrostatic effect of astriction between two surfaces subjected to an electrical field. Applications include the retention of paper on plotter surfaces, astrictive robotic prehension (electrostatic grippers) etc. Clamping pressures in the range of 0.5 to 1.5 N/cm2 (0.8 to 2.3 psi) have been claimed. An electroadhesive pad consists of conductive electrodes placed upon a polymer substrate. When alternate positive and negative charges are induced on adjacent electrodes, the resulting electric field sets up opposite charges on the surface that the pad touches, and thus causes electrostatic adhesion between the electrodes and the induced charges in the touched surface material.
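As a rough order-of-magnitude check of the clamping pressures quoted above, the sketch below evaluates the idealized parallel-plate (Maxwell stress) expression P = ½·ε0·εr·E²; the relative permittivity and field strength are assumed illustrative values, not data for any actual pad design.

# Idealized electrostatic clamping-pressure estimate, P = 1/2 * eps0 * eps_r * E^2.
# All inputs are assumed values chosen only to show the order of magnitude.
EPS0 = 8.854e-12        # vacuum permittivity, F/m
eps_r = 3.0             # assumed relative permittivity of the polymer dielectric
E = 30e6                # assumed electric field at the interface, V/m (30 V/um)

pressure_pa = 0.5 * EPS0 * eps_r * E ** 2     # pascals (N/m^2)
print("estimated clamping pressure: %.2f N/cm^2" % (pressure_pa / 1e4))
# ~1.2 N/cm^2, the same order as the 0.5 to 1.5 N/cm^2 range quoted above.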
Electroadhesion can be loosely divided into two basic forms: that which concerns the prehension of electrically conducting materials where the general laws of capacitance hold (D = E ε) and that used with electrically insulating subjects where the more advanced theory of electrostatics (D = E ε + P) applies." https://en.wikipedia.org/wiki/Bond-out%20processor,"A bond-out processor is an emulation processor that takes the place of the microcontroller or microprocessor in the target board while an application is being developed and/or debugged. Bond-out processors have internal signals and bus brought out to external pins. The term bond-out derives from connecting (or bonding) the emulation circuitry to these external pins. These devices are designed to be used within an in-circuit emulator and are not typically used in any other kind of system. Bond-out pins were marked as no-connects in the first devices produced by Intel, and were usually not connected to anything on the ordinary production silicon. Later bond-out versions of the microprocessor were produced in a bigger package to provide more signals and functionality. Bond-out processors provides capabilities far beyond those of a simple ROM monitor. A ROM monitor is a firmware program that runs instead of the application code and provides a connection to a host computer to carry out debugging functions. In general the ROM monitor uses part of the processor resources and shares the memory with the user code. Bond-out processors can handle complex breakpoints (even in ROM), real-time traces of processor activity, and no use of target resources. But this extra functionality comes at a high cost, as bond-outs have to be produced for in-circuit emulators only. Therefore, sometimes solutions similar to bond-outs are implemented with an ASIC or FPGA or a faster RISC processor that imitates the core processor code execution and peripherals." https://en.wikipedia.org/wiki/Opal%20Storage%20Specification,"The Opal Storage Specification is a set of specifications for features of data storage devices (such as hard disk drives and solid state drives) that enhance their security. For example, it defines a way of encrypting the stored data so that an unauthorized person who gains possession of the device cannot see the data. That is, it is a specification for self-encrypting drives (SED). The specification is published by the Trusted Computing Group Storage Workgroup. Overview The Opal SSC (Security Subsystem Class) is an implementation profile for Storage Devices built to: Protect the confidentiality of stored user data against unauthorized access once it leaves the owner's control (involving a power cycle and subsequent deauthentication). Enable interoperability between multiple SD vendors. Functions The Opal SSC encompasses these functions: Security provider support Interface communication protocol Cryptographic features Authentication Table management Access control and personalization Issuance SSC discovery Features Security Protocol 1 support Security Protocol 2 support Communications Protocol stack reset commands Security Radboud University researchers indicated in November 2018 that some hardware-encrypted SSDs, including some Opal implementations, had security vulnerabilities. 
Implementers of SSC Device companies Hitachi Intel Corporation Kingston Technology Lenovo Micron Technology Samsung SanDisk Seagate Technology as ""Seagate Secure"" Toshiba Storage controller companies Marvell Avago/LSI SandForce flash controllers Software companies Absolute Software Check Point Software Technologies Dell Data Protection Cryptomill McAfee Secude Softex Incorporated Sophos Symantec (Symantec supports OPAL drives, but does not support hardware-based encryption.) Trend Micro WinMagic OpalLock(OpalLock support Self-Encrypt-Drive capable SSD and HDD. Develop by Fidelity Height LLC) Computer OEMs Dell HP Lenovo Fujitsu " https://en.wikipedia.org/wiki/IOIO,"IOIO (pronounced yo-yo) is a series of open source PIC microcontroller-based boards that allow Android mobile applications to interact with external electronics. The device was invented by Ytai Ben-Tsvi in 2011, and was first manufactured by SparkFun Electronics. The name ""IOIO"" is inspired by the function of the device, which enables applications to receive external input (""I"") and produce external output (""O""). Features The IOIO board contains a single PIC MCU that acts as a USB host/USB slave and communicates with an Android app running on a connected Android device. The board provides connectivity via USB, USB-OTG or Bluetooth, and is controllable from within an Android application using the Java API. In addition to basic digital input/output and analog input, the IOIO library also handles PWM, I2C, SPI, UART, Input capture, Capacitive sensing and advanced motor control. To connect to older Android devices that use USB 2.0 in slave mode, newer IOIO models use USB On-The-Go to act as a host for such devices. Some models also support the Google Open Accessory USB protocol. The IOIO motor control API can drive up to 9 motors and any number of binary actuators in synchronization and cycle-accurate precision. Developers may send a sequence of high-level commands to the IOIO, which performs the low-level waveform generation on-chip. The IOIO firmware supports 3 different kinds of motors; stepper motors, DC motors and servo motors. Device firmware may be updated on-site by the user. For first-generation devices updating is performed using an Android device and the IOIO Manager application available on Google Play. Second-generation IOIO-OTG devices must be updated using a desktop computer running the IOIODude application. The IOIO supports both computers and Android devices as first-class hosts, and provides the exact API on both types of devices. First-generation devices can only communicate with PCs over Bluetooth, while IOIO-OTG devices can use either Bluetooth" https://en.wikipedia.org/wiki/Sensor%20hub,"A sensor hub is a microcontroller unit/coprocessor/DSP set that helps to integrate data from different sensors and process them. This technology can help off-load these jobs from a product's main central processing unit, thus saving battery consumption and providing a performance improvement. Intel has the Intel Integrated Sensor Hub. Starting from Cherrytrail and Haswell, many Intel processors offers on package sensor hub. The Samsung Galaxy Note II is the first smart phone with a sensor hub, which was launched in 2012. 
Examples Some devices with Snapdragon 800 series chips, including HTC One (M8), Sony Xperia Z1, LG G2, etc., have a sensor hub, the Qualcomm Snapdragon Sensor Core, and all HiSilicon Kirin 920 devices have sensor hub embedded in the chipset with its successor Kirin 925 integrated an i3 chip with same function into it. Some other devices that are not using these chips but with a sensor hub integrated are listed below:" https://en.wikipedia.org/wiki/Fibre%20multi-object%20spectrograph,"Fibre multi-object spectrograph (FMOS) is facility instrument for the Subaru telescope on Mauna Kea in Hawaii. The instrument consists of a complex fibre-optic positioning system mounted at the prime focus of the telescope. Fibres are then fed to a pair of large spectrographs, each weighing nearly 3000 kg. The instrument will be used to look at the light from up to 400 stars or galaxies simultaneously over a field of view of 30 arcminutes (about the size of the full moon on the sky). The instrument will be used for a number of key programmes, including galaxy formation and evolution and dark energy via a measurement of the rate at which the universe is expanding. Design, construction, operation It is currently being built by a consortium of institutes led by Kyoto University and Oxford University with parts also being manufactured by the Rutherford Appleton Laboratory, Durham University and the Anglo-Australian Observatory. The instrument is scheduled for engineering first-light in late 2008. OH-suppression The spectrographs use a technique called OH-suppression to increase the sensitivity of the observations: The incoming light from the fibres is dispersed to a relatively high resolution and this spectrum forms an image on a pair of spherical mirrors which have been etched at the positions corresponding to the bright OH-lines. This spectrum is then re-imaged through a second diffraction grating to allow the full spectrum (without the OH lines) to be imaged onto a single infrared detector." https://en.wikipedia.org/wiki/Arcadia%20%28play%29,"Arcadia (1993), written by English playwright Tom Stoppard, explores the relationship between past and present, order and disorder, certainty and uncertainty. It has been praised by many critics as the finest play from ""one of the most significant contemporary playwrights"" in the English language. In 2006, the Royal Institution of Great Britain named it one of the best science-related works ever written. Synopsis In 1809, Thomasina Coverly, the daughter of the house, is a precocious teenager with ideas about mathematics, nature, and physics well ahead of her time. She studies with her tutor Septimus Hodge, a friend of Lord Byron (an unseen guest in the house). In the present, writer Hannah Jarvis and literature professor Bernard Nightingale converge on the house: she is investigating a hermit who once lived on the grounds; he is researching a mysterious chapter in the life of Byron. As their studies unfold – with the help of Valentine Coverly, a post-graduate student in mathematical biology – the truth about what happened in Thomasina's time is gradually revealed. Scene 1 (Act 1) The play opens on 10 April 1809, in a garden-front room of the house. Septimus Hodge is trying to distract 13-year-old Thomasina from her curiosity about ""carnal embrace"" by challenging her to prove Fermat's Last Theorem; he also wants to focus on reading the poem ""The Couch of Eros"" by Ezra Chater, who with his wife is a guest at the house. 
Thomasina starts asking why jam mixed in rice pudding can never be unstirred, which leads her to the topic of determinism and to a beginning theory about chaotic shapes in nature. This is interrupted by Chater himself, who is angry that his wife was caught in the aforementioned ""carnal embrace"" with Septimus; he has come to demand a duel. Septimus tries to defuse the situation by heaping praise on Chater's ""The Couch of Eros"". The tactic works, because Chater does not know it was Septimus who had savaged an earlier work of his, ""The Maid of Turkey""." https://en.wikipedia.org/wiki/Wave%20function%20collapse,"In quantum mechanics, wave function collapse occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation, and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation. Collapse is a black box for a thermodynamically irreversible interaction with a classical environment. Calculations of quantum decoherence show that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives. Significantly, the combined wave function of the system and environment continue to obey the Schrödinger equation throughout this apparent collapse. More importantly, this is not enough to explain actual wave function collapse, as decoherence does not reduce it to a single eigenstate. Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement. Mathematical description Before collapsing, the wave function may be any square-integrable function, and is therefore associated with the probability density of a quantum-mechanical system. This function is expressible as a linear combination of the eigenstates of any observable. Observables represent classical dynamical variables, and when one is measured by a classical observer, the wave function is projected onto a random eigenstate of that observable. The observer simultaneously measures the classical value of that observable to be the eigenvalue of the final state. Mathematical background The quantum state of a physical system is described by a wave function (in turn—an element of a projective Hilbert space). This can be expressed as a vector using" https://en.wikipedia.org/wiki/Duncan%27s%20taxonomy,"Duncan's taxonomy is a classification of computer architectures, proposed by Ralph Duncan in 1990. Duncan suggested modifications to Flynn's taxonomy to include pipelined vector processes. Taxonomy The taxonomy was developed during 1988-1990 and was first published in 1990. Its original categories are indicated below. Synchronous architectures This category includes all the parallel architectures that coordinate concurrent execution in lockstep fashion and do so via mechanisms such as global clocks, central control units or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism. 
Pipelined vector processors Pipelined vector processors are characterized by pipelined functional units that accept a sequential stream of array or vector elements, such that different stages in a filled pipeline are processing different elements of the vector at a given time. Parallelism is provided both by the pipelining in individual functional units described above, as well as by operating multiple units of this kind in parallel and by chaining the output of one unit into another unit as input. Vector architectures that stream vector elements into functional units from special vector registers are termed register-to-register architectures, while those that feed functional units from special memory buffers are designated as memory-to-memory architectures. Early examples of register-to-register architectures from the 1960s and early 1970s include the Cray-1 and Fujitsu VP-200, while the Control Data Corporation STAR-100, CDC 205 and the Texas Instruments Advanced Scientific Computer are early examples of memory-to-memory vector architectures. The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and Nippon Electric Corporation SX-3 that supported 4-10 vector processors with a shared memory (see NEC SX architecture). SIMD This scheme uses the SIMD (single instruction" https://en.wikipedia.org/wiki/Product%20of%20exponentials%20formula,"The product of exponentials (POE) method is a robotics convention for mapping the links of a spatial kinematic chain. It is an alternative to Denavit–Hartenberg parameterization. While the latter method uses the minimal number of parameters to represent joint motions, the former method has a number of advantages: uniform treatment of prismatic and revolute joints, definition of only two reference frames, and an easy geometric interpretation from the use of screw axes for each joint. The POE method was introduced by Roger W. Brockett in 1984. Method The following method is used to determine the product of exponentials for a kinematic chain, with the goal of parameterizing an affine transformation matrix between the base and tool frames in terms of the joint angles Define ""zero configuration"" The first step is to select a ""zero configuration"" where all the joint angles are defined as being zero. The 4x4 matrix describes the transformation from the base frame to the tool frame in this configuration. It is an affine transform consisting of the 3x3 rotation matrix R and the 1x3 translation vector p. The matrix is augmented to create a 4x4 square matrix. Calculate matrix exponential for each joint The following steps should be followed for each of N joints to produce an affine transform for each. Define the origin and axis of action For each joint of the kinematic chain, an origin point q and an axis of action are selected for the zero configuration, using the coordinate frame of the base. In the case of a prismatic joint, the axis of action v is the vector along which the joint extends; in the case of a revolute joint, the axis of action ω the vector normal to the rotation. Find twist for each joint A 1x6 twist vector is composed to describe the movement of each joint. For a revolute joint, For a prismatic joint, The resulting twist has two 1x3 vector components: Linear motion along an axis () and rotational motion along the same axis (ω). Calculate rotation m" https://en.wikipedia.org/wiki/Frubber,"Frubber (from ""flesh rubber"") is a patented elastic form of rubber used in robotics. 
The spongy elastomer has been used by Hanson Robotics for the face of its android robots, including Einstein 3 and Sophia." https://en.wikipedia.org/wiki/Direction%20of%20arrival,"In signal processing, direction of arrival (DOA) denotes the direction from which a propagating wave usually arrives at a point where a set of sensors is located. This set of sensors forms what is called a sensor array. A closely associated technique is beamforming, which estimates the signal arriving from a given direction. Various engineering problems addressed in the associated literature are: finding the direction relative to the array in which a sound source is located; the directions of different sound sources around a listener are also located by the listener using a process similar to those used by the algorithms in the literature; and radio telescopes use these techniques to look at a certain location in the sky. Recently beamforming has also been used in radio frequency (RF) applications such as wireless communication. Compared with the spatial diversity techniques, beamforming is preferred in terms of complexity. On the other hand, beamforming in general has much lower data rates. In multiple access channels (code-division multiple access (CDMA), frequency-division multiple access (FDMA), time-division multiple access (TDMA)), beamforming is necessary and sufficient. The direction of arrival can be calculated with various techniques, such as angle of arrival (AoA), time difference of arrival (TDOA), frequency difference of arrival (FDOA), or other similar associated techniques. Limitations on the accuracy of estimation of direction of arrival signals in digital antenna arrays are associated with jitter in the ADC and DAC. More sophisticated techniques perform joint direction of arrival and time of arrival (ToA) estimation to allow a more accurate localization of a node. This also has the merit of localizing more targets with fewer antenna resources. Indeed, it is well-known in the array processing community that, generally speaking, one can resolve targets via antennas. When JADE (joint angle and delay) estimation is employed, one can go beyond this limit. Typical DOA estimation " https://en.wikipedia.org/wiki/Linde%E2%80%93Buzo%E2%80%93Gray%20algorithm,"The Linde–Buzo–Gray algorithm (introduced by Yoseph Linde, Andrés Buzo and Robert M. Gray in 1980) is a vector quantization algorithm to derive a good codebook. It is similar to the k-means method in data clustering. The algorithm At each iteration, each vector is split into two new vectors. (A) initial state: centroid of the training sequence; (B) initial estimation #1: code book of size 2; (C) final estimation after LBG: optimal code book with 2 vectors; (D) initial estimation #2: code book of size 4; (E) final estimation after LBG: optimal code book with 4 vectors. The final two code vectors are split into four and the process is repeated until the desired number of code vectors is obtained." https://en.wikipedia.org/wiki/Fast%20folding%20algorithm,"In signal processing, the fast folding algorithm (Staelin, 1969) is an efficient algorithm for the detection of approximately-periodic events within time series data. It computes superpositions of the signal modulo various window sizes simultaneously. The FFA is best known for its use in the detection of pulsars, as popularised by SETI@home and Astropulse. It was also used by the Breakthrough Listen Initiative during their 2023 Investigation for Periodic Spectral Signals campaign.
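The idea of superposing the signal modulo different trial window sizes can be illustrated with a brute-force epoch-folding sketch. This is not the fast folding algorithm itself, which reaches the same kind of result far more efficiently by reusing partial sums across neighbouring trial periods; the synthetic pulse period, noise level and scoring rule below are arbitrary choices for illustration.

# Brute-force epoch folding: fold a noisy series at each trial period and score
# the folded profile; a real periodic pulse produces a high-contrast profile at
# the correct period. (The FFA avoids recomputing each fold from scratch.)
import numpy as np

rng = np.random.default_rng(0)
n, true_period = 10_000, 73          # samples and an assumed pulse period (in samples)
signal = rng.normal(0.0, 1.0, n)
signal[::true_period] += 4.0         # weak periodic pulses buried in noise

def fold(x, period, n_bins=32):
    """Average the series modulo `period` into `n_bins` phase bins."""
    phase = (np.arange(len(x)) % period) / period
    bins = (phase * n_bins).astype(int)
    totals = np.bincount(bins, weights=x, minlength=n_bins)
    counts = np.bincount(bins, minlength=n_bins)
    return totals / np.maximum(counts, 1)

trial_periods = np.arange(50, 100)
scores = [fold(signal, p).std() for p in trial_periods]    # contrast of each folded profile
print("best trial period:", trial_periods[int(np.argmax(scores))])   # recovers 73 here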
See also Pulsar" https://en.wikipedia.org/wiki/Classical%20limit,"The classical limit or correspondence limit is the ability of a physical theory to approximate or ""recover"" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior. Quantum theory A heuristic postulate called the correspondence principle was introduced to quantum theory by Niels Bohr: in effect it states that some kind of continuity argument should apply to the classical limit of quantum systems as the value of the Planck constant normalized by the action of these systems becomes very small. Often, this is approached through ""quasi-classical"" techniques (cf. WKB approximation). More rigorously, the mathematical operation involved in classical limits is a group contraction, approximating physical systems where the relevant action is much larger than the reduced Planck constant ħ, so the ""deformation parameter"" ħ/S can be effectively taken to be zero (cf. Weyl quantization). Thus typically, quantum commutators (equivalently, Moyal brackets) reduce to Poisson brackets, in a group contraction. In quantum mechanics, due to Heisenberg's uncertainty principle, an electron can never be at rest; it must always have a non-zero kinetic energy, a result not found in classical mechanics. For example, if we consider something very large relative to an electron, like a baseball, the uncertainty principle predicts that it cannot really have zero kinetic energy, but the uncertainty in kinetic energy is so small that the baseball can effectively appear to be at rest, and hence it appears to obey classical mechanics. In general, if large energies and large objects (relative to the size and energy levels of an electron) are considered in quantum mechanics, the result will appear to obey classical mechanics. The typical occupation numbers involved are huge: a macroscopic harmonic oscillator with ω = 2 Hz, m = 10 g, and maximum amplitude x0 = 10 cm has an action S ≈ E/ω ≈ mωx0²/2 ≈ 10⁻⁴ kg·m²/s = ħn, so that n ≃ 10³⁰. Further see" https://en.wikipedia.org/wiki/Field%20cancerization,"Field cancerization or field effect (also termed field change, field change cancerization, field carcinogenesis, cancer field effect or premalignant field defect) is a biological process in which large areas of cells at a tissue surface or within an organ are affected by carcinogenic alterations. The process arises from exposure to an injurious environment, often over a lengthy period. How it arises The initial step in field cancerization is associated with various molecular lesions such as acquired genetic mutations and epigenetic changes, occurring over a widespread, multi-focal ""field"". These initial molecular changes may subsequently progress to cytologically recognizable premalignant foci of dysplasia, and eventually to carcinoma in situ (CIS) or cancer. The image of a longitudinally opened colon resection on this page shows an area of a colon resection that likely has a field cancerization or field defect. It has one cancer and four premalignant polyps. Field cancerization can occur in any tissue. Prominent examples of field cancerization include premalignant field defects in head and neck cancer, lung cancer, colorectal cancer, Barrett's esophagus, skin, breast ducts and bladder. Field cancerization has implications for cancer surveillance and treatment.
Despite adequate resection and being histologically normal, the remaining locoregional tissue has an increased risk for developing multiple independent cancers, either synchronously or metachronously. Common early carcinogenic alterations A common carcinogenic alteration, found in many cancers and in their adjacent field defects from which the cancers likely arose, is reduced expression of one or more DNA repair enzymes. Since reduced DNA repair expression is often present in a field cancerization or a field defect, it is likely to have been an early step in progression to the cancer. Field defects associated with gastrointestinal tract cancers also commonly displayed reduced apoptosis competence, aberra" https://en.wikipedia.org/wiki/Ground%20bounce,"In electronic engineering, ground bounce is a phenomenon associated with transistor switching where the gate voltage can appear to be less than the local ground potential, causing the unstable operation of a logic gate. Description Ground bounce is usually seen on high density VLSI where insufficient precautions have been taken to supply a logic gate with a sufficiently low impedance connection (or sufficiently high capacitance) to ground. In this phenomenon, when the base of an NPN transistor is turned on, enough current flows through the emitter-collector circuit that the silicon in the immediate vicinity of the emitter-ground connection is pulled partially high, sometimes by several volts, thus raising the local ground, as perceived at the gate, to a value significantly above true ground. Relative to this local ground, the base voltage can go negative, thus shutting off the transistor. As the excess local charge dissipates, the transistor turns back on, possibly causing a repeat of the phenomenon, sometimes up to a half-dozen bounces. Ground bounce is one of the leading causes of ""hung"" or metastable gates in modern digital circuit design. This happens because the ground bounce puts the input of a flip flop effectively at voltage level that is neither a one nor a zero at clock time, or causes untoward effects in the clock itself. A similar voltage sag phenomenon may be seen on the collector side, called supply voltage sag (or VCC sag), where VCC is pulled unnaturally low. As a whole, ground bounce is a major issue in nanometer range technologies in VLSI. Ground bounce can also occur when the circuit board has poorly designed ground paths. Improper ground or VCC can lead to local variations in the ground level between various components. This is most commonly seen in circuit boards that have ground and VCC paths on the surfaces of the board. Reduction Ground bounce may be reduced by placing a 10–30-ohm resistor in series to each of the switching outputs " https://en.wikipedia.org/wiki/Hatch%20mark,"Hatch marks (also called hash marks or tick marks) are a form of mathematical notation. They are used in three ways as: Unit and value marks — as on a ruler or number line Congruence notation in geometry — as on a geometric figure Graphed points — as on a graph Hatch marks are frequently used as an abbreviation of some common units of measurement. In regard to distance, a single hatch mark indicates feet, and two hatch marks indicate inches. In regard to time, a single hatch mark indicates minutes, and two hatch marks indicate seconds. 
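As a small illustration of this prime-mark convention, the sketch below is a hypothetical helper (not part of this entry) that formats a decimal angle using the degree sign, a single mark for minutes and a double mark for seconds.

# Format a decimal angle in degrees as degrees, minutes (′) and seconds (″).
# Purely illustrative; it does not round seconds up into the next minute.
def to_dms(angle_deg):
    """Return a string like 40° 26′ 45.6″ for a decimal angle in degrees."""
    sign = "-" if angle_deg < 0 else ""
    a = abs(angle_deg)
    degrees = int(a)
    minutes = int((a - degrees) * 60)
    seconds = (a - degrees - minutes / 60) * 3600
    return "%s%d\u00b0 %d\u2032 %.1f\u2033" % (sign, degrees, minutes, seconds)

print(to_dms(40.446))    # 40° 26′ 45.6″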
In geometry and trigonometry, such marks are used following an elevated circle to indicate degrees, minutes, and seconds — Hatch marks can probably be traced to hatching in art works, where the pattern of the hatch marks represents a unique tone or hue. Different patterns indicate different tones. Unit and value marks Unit-and-value hatch marks are short vertical line segments which mark distances. They are seen on rulers and number lines. The marks are parallel to each other in an evenly-spaced manner. The distance between adjacent marks is one unit. Longer line segments are used for integers and natural numbers. Shorter line segments are used for fractions. Hatch marks provide a visual clue as to the value of specific points on the number line, even if some hatch marks are not labeled with a number. Hatch marks are typically seen in number theory and geometry. <----|----|----|----|----|----|----|----|----|----|----|----|----|----|----> -3 -2 -1 0 1 2 3 Congruency notation In geometry, hatch marks are used to denote equal measures of angles, arcs, line segments, or other elements. Hatch marks for congruence notation are in the style of tally marks or of Roman numerals – with some qualifications. These marks are without serifs, and some patterns are not used. For example, the numbers I, II, III, V, and X are used, but IV and VI are not used, since a rotation of " https://en.wikipedia.org/wiki/Software-defined%20perimeter,"A software-defined perimeter (SDP), also called a ""black cloud"", is an approach to computer security which evolved from the work done at the Defense Information Systems Agency (DISA) under the Global Information Grid (GIG) Black Core Network initiative around 2007. Software-defined perimeter (SDP) framework was developed by the Cloud Security Alliance (CSA) to control access to resources based on identity. Connectivity in a Software Defined Perimeter is based on a need-to-know model, in which device posture and identity are verified before access to application infrastructure is granted. Application infrastructure is effectively “black” (a DoD term meaning the infrastructure cannot be detected), without visible DNS information or IP addresses. The inventors of these systems claim that a Software Defined Perimeter mitigates the most common network-based attacks, including: server scanning, denial of service, SQL injection, operating system and application vulnerability exploits, man-in-the-middle, pass-the-hash, pass-the-ticket, and other attacks by unauthorized users. Background The premise of the traditional enterprise network architecture is to create an internal network separated from the outside world by a fixed perimeter that consists of a series of firewall functions that block external users from coming in, but allows internal users to get out. Traditional fixed perimeters help protect internal services from external threats via simple techniques for blocking visibility and accessibility from outside the perimeter to internal applications and infrastructure. But the weaknesses of this traditional fixed perimeter model are becoming ever more problematic because of the popularity of user-managed devices and phishing attacks, providing untrusted access inside the perimeter, and SaaS and IaaS extending the perimeter into the internet. 
Software defined perimeters address these issues by giving application owners the ability to deploy perimeters that retain the" https://en.wikipedia.org/wiki/List%20of%20exceptional%20set%20concepts,"This is a list of exceptional set concepts. In mathematics, and in particular in mathematical analysis, it is very useful to be able to characterise subsets of a given set X as 'small', in some definite sense, or 'large' if their complement in X is small. There are numerous concepts that have been introduced to study 'small' or 'exceptional' subsets. In the case of sets of natural numbers, it is possible to define more than one concept of 'density', for example. See also list of properties of sets of reals. Almost all Almost always Almost everywhere Almost never Almost surely Analytic capacity Closed unbounded set Cofinal (mathematics) Cofinite Dense set IP set 2-large Large set (Ramsey theory) Meagre set Measure zero Natural density Negligible set Nowhere dense set Null set, conull set Partition regular Piecewise syndetic set Schnirelmann density Small set (combinatorics) Stationary set Syndetic set Thick set Thin set (Serre) Exceptional Exceptional" https://en.wikipedia.org/wiki/Versatile%20Service%20Engine,"Versatile Service Engine is a second generation IP Multimedia Subsystem developed by Nortel Networks that is compliant with Advanced Telecommunications Computing Architecture specifications. Nortel's versatile service engine provides capability to telecommunication service provider to offer global System for mobile communications and code-division multiple access services in both wireline and wireless mode. History The Versatile Service Engine is a joint effort of Nortel and Motorola. The aim of collaboration was to develop an Advanced Telecommunications Computing Architecture compliant platform for Nortel IP Multimedia Subsystem applications. Nortel joined the PCI Industrial Computer Manufacturers Group in 2002 and the work on Versatile Service Engine was started in 2004. Architecture A single versatile service engine frame consists of three shelves, each shelf having three slots. A single slot can have many sub-slots staging a blade in it. Advanced Telecommunications Computing Architecture blades can be processors, switches, AMC carriers, etc. A typical shelf will contain one or more switch blades and several processor blades. The power supply and cooling fans are located in the back pane of the Versatile Service Engine. Ericsson ownership After Nortel Networks filed for bankruptcy protection in January 2009, Ericsson telecommunications then acquired the code-division multiple access and LTE based assets of then Canada's largest telecom equipment maker, hence taking the ownership of Versatile service engine." https://en.wikipedia.org/wiki/Square%20root%20of%206,"The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6. It is more precisely called the principal square root of 6, to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as: and in exponent form as: It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: . which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about . It takes two more digits (2.4495) to reduce the error by about half. The approximation (≈ 2.449438...) 
is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than , or less than one part in 47,000. Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3, both of which are irrational algebraic numbers. NASA has published more than a million decimal digits of the square root of six. Rational approximations The square root of 6 can be expressed as the continued fraction The successive partial evaluations of the continued fraction, which are called its convergents, approach : Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …, and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, …. Each convergent is a best rational approximation of ; in other words, it is closer to than any rational with a smaller denominator. Decimal equivalents improve linearly, at a rate of nearly one digit per convergent: The convergents, expressed as , satisfy alternately the Pell's equations When is approximated with the Babylonian method, starting with and using , the th approximant is equal to the th convergent of the continued fraction: The Babylonian " https://en.wikipedia.org/wiki/Biology,"Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments. Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them. Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment. History The earliest of roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti" https://en.wikipedia.org/wiki/Capillary%20electrochromatography,"In chemical analysis, capillary electrochromatography (CEC) is a chromatographic technique in which the mobile phase is driven through the chromatographic bed by electro-osmosis. 
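The electro-osmotic flow that moves the mobile phase originates in the electrical double layer at the capillary wall, whose thickness is discussed further below. As a quick sense of scale, the sketch evaluates the standard Debye–Hückel expression for the double-layer thickness; the buffer concentration, temperature and permittivity are assumed illustrative values.

# Estimate the electrical double-layer (Debye) thickness,
#   delta = sqrt(eps_r * eps0 * R * T / (2 * F**2 * c)),
# for an assumed dilute 1:1 buffer. All inputs are illustrative assumptions.
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
R = 8.314             # gas constant, J/(mol*K)
F = 96485.0           # Faraday constant, C/mol

eps_r = 78.5          # assumed: water near 25 C
T = 298.15            # K
c = 1.0               # mol/m^3 (1 mmol/L, assumed buffer concentration)

delta = math.sqrt(eps_r * EPS0 * R * T / (2 * F**2 * c))
print("double-layer thickness: %.1f nm" % (delta * 1e9))   # roughly 10 nm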
Capillary electrochromatography is a combination of two analytical techniques, high-performance liquid chromatography and capillary electrophoresis. Capillary electrophoresis aims to separate analytes on the basis of their mass-to-charge ratio by passing a high voltage across ends of a capillary tube, which is filled with the analyte. High-performance liquid chromatography separates analytes by passing them, under high pressure, through a column filled with stationary phase. The interactions between the analytes and the stationary phase and mobile phase lead to the separation of the analytes. In capillary electrochromatography capillaries, packed with HPLC stationary phase, are subjected to a high voltage. Separation is achieved by electrophoretic migration of solutes and differential partitioning. Principle Capillary electrochromatography (CEC) combines the principles used in HPLC and CE. The mobile phase is driven across the chromatographic bed using electroosmosis instead of pressure (as in HPLC). Electroosmosis is the motion of liquid induced by an applied potential across a porous material, capillary tube, membrane or any other fluid conduit. Electroosmotic flow is caused by the Coulomb force induced by an electric field on net mobile electric charge in a solution. Under alkaline conditions, the surface silanol groups of the fused silica will become ionised leading to a negatively charged surface. This surface will have a layer of positively charged ions in close proximity which are relatively immobilised. This layer of ions is called the Stern layer. The thickness of the double layer is given by the formula: where εr is the relative permittivity of the medium, εo is the permittivity of vacuum, R is the universal gas constant, T is the absolute temperature, c is the molar concentrati" https://en.wikipedia.org/wiki/RapidIO,"The RapidIO architecture is a high-performance packet-switched electrical connection technology. RapidIO supports messaging, read/write and cache coherency semantics. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect. History The RapidIO protocol was originally designed by Mercury Computer Systems and Motorola (Freescale) as a replacement for Mercury's RACEway proprietary bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies. Releases The RapidIO specification revision 1.1 (3xN Gen1), released in March 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption. The RapidIO specification revision 1.2, released in June 2002, defined a serial interconnect based on the XAUI physical layer. Devices based on this specification achieved significant commercial success within wireless baseband, imaging and military compute. The RapidIO specification revision 1.3 was released in June 2005. The RapidIO specification revision 2.0 (6xN Gen2), was released in March 2008, added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s. Revision 2.1 has repeated and expanded the commercial success of the 1.2 specification. The RapidIO specification revision 2.1 was released in September 2009. The RapidIO specification revision 2.2 was released in May 2011. 
The RapidIO specification revision 3.0 (10xN Gen3), was released in October 2013, has the following changes and improvements compared to the 2.x specifications: Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short (20 cm + connector) and long (1 m + 2 connector) reach applications Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach" https://en.wikipedia.org/wiki/Application-specific%20instruction%20set%20processor,"An application-specific instruction set processor (ASIP) is a component used in system on a chip design. The instruction set architecture of an ASIP is tailored to benefit a specific application. This specialization of the core provides a tradeoff between the flexibility of a general purpose central processing unit (CPU) and the performance of an application-specific integrated circuit (ASIC). Some ASIPs have a configurable instruction set. Usually, these cores are divided into two parts: static logic which defines a minimum ISA (instruction-set architecture) and configurable logic which can be used to design new instructions. The configurable logic can be programmed either in the field in a similar fashion to a field-programmable gate array (FPGA) or during the chip synthesis. ASIPs have two ways of generating code: either through a retargetable code generator or through a retargetable compiler generator. The retargetable code generator uses the application, ISA, and Architecture Template to create the code generator for the object code. The retargetable compiler generator uses only the ISA and Architecture Template as the basis for creating the compiler. The application code will then be used by the compiler to create the object code. ASIPs can be used as an alternative of hardware accelerators for baseband signal processing or video coding. Traditional hardware accelerators for these applications suffer from inflexibility. It is very difficult to reuse the hardware datapath with handwritten finite-state machines (FSM). The retargetable compilers of ASIPs help the designer to update the program and reuse the datapath. Typically, the ASIP design is more or less dependent on the tool flow because designing a processor from scratch can be very complicated. One approach is to describe the processor using a high level language and then to automatically generate the ASIP's software toolset. Examples RISC-V Instruction Set Architecture (ISA) provides minimum base ins" https://en.wikipedia.org/wiki/Globoid%20%28botany%29,"A globoid is a spherical crystalline inclusion in a protein body found in seed tissues that contains phytate and other nutrients for plant growth. These are found in several plants, including wheat and the genus Cucurbita. These nutrients are eventually completely depleted during seedling growth. In Cucurbita maxima, globoids form as early as the 3rd day of seedling growth. They are located in conjunction with a larger crystalloid. They are electron–dense and vary widely in size." https://en.wikipedia.org/wiki/Multiresolution%20Fourier%20transform,"Multiresolution Fourier Transform is an integral fourier transform that represents a specific wavelet-like transform with a fully scalable modulated window, but not all possible translations. Comparison of Fourier transform and wavelet transform The Fourier transform is one of the most common approaches when it comes to digital signal processing and signal analysis. 
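The comparison that follows can be illustrated with a small numerical sketch (the test signal and sampling rate are assumptions chosen for illustration): the plain Fourier spectrum of a signal whose frequency changes halfway through shows both tones, but gives no indication of when each one occurs, which is the time-resolution limitation that windowed and wavelet transforms address.

# A plain amplitude spectrum reveals which frequencies are present, not when.
# The signal switches from 50 Hz to 120 Hz at t = 0.5 s, yet the spectrum
# simply shows two peaks.
import numpy as np

fs = 1000                                   # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)               # one second of samples
signal = np.where(t < 0.5,
                  np.sin(2 * np.pi * 50 * t),
                  np.sin(2 * np.pi * 120 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

strongest = sorted(freqs[np.argsort(spectrum)[-2:]])     # two strongest frequency bins
print("dominant frequencies (Hz):", strongest)           # [50.0, 120.0]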
The Fourier transform represents a signal through sine and cosine functions, thus transforming the time domain into the frequency domain. A disadvantage of the Fourier transform is that both the sine and cosine functions are defined over the whole time axis, meaning that there is no time resolution. Certain variants of the Fourier transform, such as the Short Time Fourier Transform (STFT), utilize a window for sampling, but the window length is fixed, meaning that the results will be satisfactory only for either low or high frequency components. The fast Fourier transform (FFT) is often used because of its computational speed, but it shows better results for stationary signals. On the other hand, the wavelet transform can address all the aforementioned downsides. It preserves both time and frequency information and it uses a window of variable length, meaning that both low and high frequency components will be derived with higher accuracy than with the Fourier transform. The wavelet transform also shows better results in transient states. The Multiresolution Fourier Transform leverages these advantageous properties of the wavelet transform and applies them to the Fourier transform. Definition Let be a function that has its Fourier transform defined as    The time line can be split by intervals of length π/ω with centers at integer multiples of π/ω    Then, new transforms of function can be introduced       and       where , when n is an integer. Functions and can be used in order to define the complex Fourier transform    Then, a set of points in the frequency-time plane is defined for the computation of the introduced transforms   " https://en.wikipedia.org/wiki/Cardiac%20Pacemakers%2C%20Inc.,"Cardiac Pacemakers, Inc. (CPI), doing business as Guidant Cardiac Rhythm Management, manufactured implantable cardiac rhythm management devices, such as pacemakers and defibrillators. It sold microprocessor-controlled insulin pumps and equipment to regulate heart rhythm. It developed therapies to treat irregular heartbeat. The company was founded in 1971 and is based in St. Paul, Minnesota. Cardiac Pacemakers, Inc. is a subsidiary of Boston Scientific Corporation. Early history CPI was founded in February 1972 in St. Paul, Minnesota. The first $50,000 capitalization for CPI was raised from a phone booth on the Minneapolis skyway system. They began designing and testing their implantable cardiac pacemaker powered with a new longer-life lithium battery in 1971. The first heart patient to receive a CPI pacemaker emerged from surgery in June 1973. Within two years, the upstart company that challenged Medtronic had sold approximately 8,500 pacemakers. Medtronic at the time had 65% of the artificial pacemaker market. CPI was the first spin-off from Medtronic. It competed using the world's first lithium-powered pacemaker. Medtronic's market share plummeted to 35%. Founding partners Anthony Adducci, Manny Villafaña, Jim Baustert, and Art Schwalm were former Medtronic employees. Lawsuits ensued, all settled out of court. Acquisition In early 1978, CPI was concerned about a friendly takeover attempt. Despite impressive sales, the company's stock price had fluctuated wildly the year before, dropping from $33 to $11 per share. Some speculated that the stock was being sold short, while others attributed the price to the natural volatility of high-tech stock. As a one-product company, CPI was susceptible to changing market conditions, and its founders knew they needed to diversify.
They considered two options: acquiring other medical device companies or being acquired themselves. They chose the latter. Several companies expressed interest in acquiring CPI, including 3M," https://en.wikipedia.org/wiki/Magic%20%28software%29,"Magic is an electronic design automation (EDA) layout tool for very-large-scale integration (VLSI) integrated circuits (ICs), originally written by John Ousterhout and his graduate students at UC Berkeley. Work began on the project in February 1983. A primitive version was operational by April 1983, when Joan Pendleton, Shing Kong and other graduate student chip designers suffered through many fast revisions devised to meet their needs in designing the SOAR CPU chip, a follow-on to Berkeley RISC. Fearing that Ousterhout was going to propose another name that started with ""C"" to match his previous projects Cm*, Caesar, and Crystal, Gordon Hamachi proposed the name Magic because he liked the idea of being able to say that people used magic to design chips. The rest of the development team enthusiastically agreed to this proposal after he devised the backronym Manhattan Artwork Generator for Integrated Circuits. The Magic software developers called themselves magicians, while the chip designers were Magic users. As free and open-source software, subject to the requirements of the BSD license, Magic continues to be popular because it is easy to use and easy to expand for specialized tasks. Differences The main difference between Magic and other VLSI design tools is its use of ""corner-stitched"" geometry, in which all layout is represented as a stack of planes, and each plane consists entirely of ""tiles"" (rectangles). The tiles must cover the entire plane. Each tile consists of an (X, Y) coordinate of its lower left-hand corner, and links to four tiles: the right-most neighbor on the top, the top-most neighbor on the right, the bottom-most neighbor on the left, and the left-most neighbor on the bottom. With the addition of the type of material represented by the tile, the layout geometry in the plane is exactly specified. The corner-stitched geometry representation leads to the concept of layout as ""paint"" to be applied to, or erased from, a canvas. This is con" https://en.wikipedia.org/wiki/Spectral%20concentration%20problem,"The spectral concentration problem in Fourier analysis refers to finding a time sequence of a given length whose discrete Fourier transform is maximally localized on a given frequency interval, as measured by the spectral concentration. Spectral concentration The discrete Fourier transform (DFT) U(f) of a finite series is defined as In the following, the sampling interval will be taken as Δt = 1, and hence the frequency interval as f ∈ [-½,½]. U(f) is a periodic function with a period 1. For a given frequency W such that 0 < W < ½, the spectral concentration of U(f) on the interval [-W, W] is defined as the fraction of the total power of U(f) that falls within [-W, W]." https://en.wikipedia.org/wiki/List%20of%20scattering%20experiments,"This is a list of scattering experiments.
Specific experiments of historical significance Davisson–Germer experiment Gold foil experiments, performed by Geiger and Marsden for Rutherford, which discovered the atomic nucleus Elucidation of the structure of DNA by X-ray crystallography Discovery of the antiproton at the Bevatron Discovery of W and Z bosons at CERN Discovery of the Higgs boson at the Large Hadron Collider MINERνA Types of experiment Optical methods Compton scattering Raman scattering X-ray crystallography Biological small-angle scattering with X-rays, or Small-angle X-ray scattering Static light scattering Dynamic light scattering Polymer scattering with X-rays Neutron-based methods Neutron scattering Biological small-angle scattering with neutrons, or Small-angle neutron scattering Polymer scattering with neutrons Particle accelerators Electrostatic nuclear accelerator Linear induction accelerator Betatron Linear particle accelerator Cyclotron Synchrotron Physics-related lists Physics experiments Chemistry-related lists Biology-related lists" https://en.wikipedia.org/wiki/Signaling%20compression,"In data compression, signaling compression, or SigComp, is a compression method designed especially for the compression of text-based communication data such as SIP or RTSP. SigComp was originally defined in RFC 3320 and was later updated with RFC 4896. A Negative Acknowledgement Mechanism for Signaling Compression is defined in RFC 4077. The SigComp work is performed in the ROHC working group in the transport area of the IETF. Overview The SigComp specifications describe a compression scheme that sits between the application layer and the transport layer (e.g. between SIP and UDP). It is implemented upon a virtual machine, the Universal Decompressor Virtual Machine (UDVM), which executes a specific set of commands optimized for decompression. One strong point of SigComp is that the bytecode needed to decode messages can be sent over SigComp itself, so any kind of compression scheme can be used, provided it is expressed as bytecode for the UDVM. Thus any SigComp-compatible device may, without any firmware change, use compression mechanisms that did not exist when it was released. Additionally, some decoders may already have been standardised, so SigComp can refer to that code and it does not need to be sent over the connection. To ensure that a message is decodable, the only requirement is that the UDVM code is available; the compression of messages is executed outside the virtual machine, so native code can be used. To associate compression state with an application conversation (e.g. a given SIP session), a compartment mechanism is used, so a given application may have any number of different, independent conversations while persisting all the session status (as needed/specified per compression scheme and UDVM code). General architecture" https://en.wikipedia.org/wiki/List%20of%20heliophysics%20missions," This is a list of missions supporting heliophysics, including solar observatory missions, solar orbiters, and spacecraft studying the solar wind. Past and current missions Proposed missions Graphic See also List of solar telescopes" https://en.wikipedia.org/wiki/Golomb%E2%80%93Dickman%20constant,"In mathematics, the Golomb–Dickman constant arises in the theory of random permutations and in number theory. Its value is approximately 0.62432998854. It is not known whether this constant is rational or irrational.
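A quick Monte Carlo check of this constant (the sample sizes below are arbitrary choices): averaging the longest-cycle length of uniformly random permutations and dividing by n settles near 0.6243, in line with the definition given next.

# Estimate the Golomb-Dickman constant as the mean fraction of a random
# permutation occupied by its longest cycle. n and the trial count are arbitrary.
import numpy as np

def longest_cycle_length(perm):
    """Length of the longest cycle of a permutation given as an index array."""
    seen = np.zeros(len(perm), dtype=bool)
    longest = 0
    for start in range(len(perm)):
        length, j = 0, start
        while not seen[j]:              # walk the cycle containing `start`
            seen[j] = True
            j = perm[j]
            length += 1
        longest = max(longest, length)
    return longest

rng = np.random.default_rng(1)
n, trials = 1000, 2000
estimate = np.mean([longest_cycle_length(rng.permutation(n)) / n for _ in range(trials)])
print("estimate: %.4f  (Golomb-Dickman constant ~ 0.6243)" % estimate)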
Definitions Let an be the average — taken over all permutations of a set of size n — of the length of the longest cycle in each permutation. Then the Golomb–Dickman constant is In the language of probability theory, is asymptotically the expected length of the longest cycle in a uniformly distributed random permutation of a set of size n. In number theory, the Golomb–Dickman constant appears in connection with the average size of the largest prime factor of an integer. More precisely, where is the largest prime factor of k . So if k is a d digit integer, then is the asymptotic average number of digits of the largest prime factor of k. The Golomb–Dickman constant appears in number theory in a different way. What is the probability that second largest prime factor of n is smaller than the square root of the largest prime factor of n? Asymptotically, this probability is . More precisely, where is the second largest prime factor n. The Golomb-Dickman constant also arises when we consider the average length of the largest cycle of any function from a finite set to itself. If X is a finite set, if we repeatedly apply a function f: X → X to any element x of this set, it eventually enters a cycle, meaning that for some k we have for sufficiently large n; the smallest k with this property is the length of the cycle. Let bn be the average, taken over all functions from a set of size n to itself, of the length of the largest cycle. Then Purdom and Williams proved that Formulae There are several expressions for . These include: where is the logarithmic integral, where is the exponential integral, and and where is the Dickman function. See also Random permutation Random permutation statistics External links" https://en.wikipedia.org/wiki/Zero-overhead%20looping,"Zero-overhead looping is a feature of some processor instruction sets whose hardware can repeat the body of a loop automatically, rather than requiring software instructions which take up cycles (and therefore time) to do so. Zero-overhead loops are common in digital signal processors and some CISC instruction sets. Background In many instruction sets, a loop must be implemented by using instructions to increment or decrement a counter, check whether the end of the loop has been reached, and if not jump to the beginning of the loop so it can be repeated. Although this typically only represents around 3–16 bytes of space for each loop, even that small amount could be significant depending on the size of the CPU caches. More significant is that those instructions each take time to execute, time which is not spent doing useful work. The overhead of such a loop is apparent compared to a completely unrolled loop, in which the body of the loop is duplicated exactly as many times as it will execute. In that case, no space or execution time is wasted on instructions to repeat the body of the loop. However, the duplication caused by loop unrolling can significantly increase code size, and the larger size can even impact execution time due to cache misses. (For this reason, it's common to only partially unroll loops, such as transforming it into a loop which performs the work of four iterations in one step before repeating. This balances the advantages of unrolling with the overhead of repeating the loop.) Moreover, completely unrolling a loop is only possible for a limited number of loops: those whose number of iterations is known at compile time. 
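The Golomb–Dickman definition above (the average relative length of the longest cycle of a random permutation) lends itself to a quick Monte Carlo check. The sketch below estimates the constant empirically; the rough target value of about 0.624 is the standard figure for this constant and is stated here as an assumption, since the excerpt's own numerical value did not survive extraction.

```python
# Monte Carlo sketch: estimate the Golomb–Dickman constant as the average
# relative length of the longest cycle of a uniformly random permutation.
# The estimate should drift toward roughly 0.624 as n and the number of
# trials grow (assumed standard value, not quoted from the excerpt above).
import random

def longest_cycle_length(perm):
    """Length of the longest cycle of a permutation given as a list of images."""
    seen = [False] * len(perm)
    longest = 0
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, i = 0, start
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        longest = max(longest, length)
    return longest

def estimate_lambda(n=2000, trials=300, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += longest_cycle_length(perm) / n
    return total / trials

print(estimate_lambda())   # ~0.62 for moderate n and trial counts
```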
For example, the following C code could be compiled and optimized into the following x86 assembly code: Implementation Processors with zero-overhead looping have machine instructions and registers to automatically repeat one or more instructions. Depending on the instructions available, these may only be suitable for count-cont" https://en.wikipedia.org/wiki/Step%20response,"The step response of a system in a given initial state consists of the time evolution of its outputs when its control inputs are Heaviside step functions. In electronic engineering and control theory, step response is the time behaviour of the outputs of a general system when its inputs change from zero to one in a very short time. The concept can be extended to the abstract mathematical notion of a dynamical system using an evolution parameter. From a practical standpoint, knowing how the system responds to a sudden input is important because large and possibly fast deviations from the long term steady state may have extreme effects on the component itself and on other portions of the overall system dependent on this component. In addition, the overall system cannot act until the component's output settles down to some vicinity of its final state, delaying the overall system response. Formally, knowing the step response of a dynamical system gives information on the stability of such a system, and on its ability to reach one stationary state when starting from another. Formal mathematical description This section provides a formal mathematical definition of step response in terms of the abstract mathematical concept of a dynamical system : all notations and assumptions required for the following description are listed here. is the evolution parameter of the system, called ""time"" for the sake of simplicity, is the state of the system at time , called ""output"" for the sake of simplicity, is the dynamical system evolution function, is the dynamical system initial state, is the Heaviside step function Nonlinear dynamical system For a general dynamical system, the step response is defined as follows: It is the evolution function when the control inputs (or source term, or forcing inputs) are Heaviside functions: the notation emphasizes this concept showing H(t) as a subscript. Linear dynamical system For a linear time-invariant (LTI) black box, let for" https://en.wikipedia.org/wiki/Hydrogen%20sulfide%20chemosynthesis,"Hydrogen sulfide chemosynthesis is a form of chemosynthesis which uses hydrogen sulfide. It is common in hydrothermal vent microbial communities Due to the lack of light in these environments this is predominant over photosynthesis Giant tube worms use bacteria in their trophosome to fix carbon dioxide (using hydrogen sulfide as their energy source) and produce sugars and amino acids. Some reactions produce sulfur: hydrogen sulfide chemosynthesis: 18H2S + 6CO2 + 3O2 → C6H12O6 (carbohydrate) + 12H2O + 18S In the above process, hydrogen sulfide serves as a source of electrons for the reaction. Instead of releasing oxygen gas while fixing carbon dioxide as in photosynthesis, hydrogen sulfide chemosynthesis produces solid globules of sulfur in the process. Mechanism of Action In deep sea environments, different organisms have been observed to have the ability to oxidize reduced compounds such as hydrogen sulfide. Oxidation is the loss of electrons in a chemical reaction. 
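The step-response excerpt above stays at the level of abstract definitions. As a concrete illustration, the sketch below integrates an assumed first-order system (the time constant and the system itself are choices for the example, not taken from the article) driven by a Heaviside step, and compares the result with the closed form y(t) = 1 − e^(−t/τ).

```python
# Minimal sketch: step response of an assumed first-order system
#   tau * dy/dt + y = u(t),  y(0) = 0,  u = Heaviside step,
# integrated with explicit Euler; closed form is y(t) = 1 - exp(-t/tau).
import math

TAU = 1.0   # assumed time constant for this illustrative system

def step_response(tau=TAU, dt=0.001, t_end=5.0):
    y, t, out = 0.0, 0.0, []
    while t <= t_end:
        u = 1.0 if t >= 0 else 0.0       # Heaviside step input
        y += dt * (u - y) / tau          # explicit Euler update
        out.append((t, y))
        t += dt
    return out

trace = step_response()
t_final, y_final = trace[-1]
print(f"y({t_final:.2f}) = {y_final:.4f}, closed form = {1 - math.exp(-t_final / TAU):.4f}")
```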
Most chemosynthetic bacteria form symbiotic associations with other small eukaryotes The electrons that are released from hydrogen sulfide will provide the energy to sustain a proton gradient across the bacterial cytoplasmic membrane. This movement of protons will eventually result in the production of adenosine triphosphate. The amount of energy derived from the process is also dependent on the type of final electron acceptor. Other Examples Of Chemosynthetic Organisms (using H2S as electron donor) Across the world, researchers have observed different organisms in various locations capable of carrying out the process. Yang and colleagues in 2011 surveyed five Yellowstone thermal springs of varying depths and observed that the distribution of chemosynthetic microbes coincided with temperature as Sulfurihydrogenibiom was found at higher temperatures while Thiovirga inhabited cooler waters Miyazaki et.al., in 2020 also found an endosymbiont capable of hydrogen sulfide chemosynthesis which conta" https://en.wikipedia.org/wiki/Transistor%20array,"Transistor arrays consist of two or more transistors on a common substrate. Unlike more highly integrated circuits, the transistors can be used individually like discrete transistors. That is, the transistors in the array are not connected to each other to implement a specific function. Transistor arrays can consist of bipolar junction transistors or field-effect transistors. There are three main motivations for combining several transistors on one chip and in one package: to save circuit board space and to reduce the board production cost (only one component needs to be populated instead of several) to ensure closely matching parameters between the transistors (which is almost guaranteed when the transistors on one chip are manufactured simultaneously and subject to identical manufacturing process variations) to ensure a closely matching thermal drift of parameters between the transistors (which is achieved by having the transistors in extremely close proximity) The matching parameters and thermal drift are crucial for various analogue circuits such as differential amplifiers, current mirrors, and log amplifiers. The reduction in circuit board area is particularly significant for digital circuits where several switching transistors are combined in one package. Often the transistors here are Darlington pairs with a common emitter and flyback diodes, e.g. ULN2003A. While this stretches the above definition of a transistor array somewhat, the term is still commonly applied. A peculiarity of transistor arrays is that the substrate is often available as a separate pin (labelled substrate, bulk, or ground). Care is required when connecting the substrate in order to maintain isolation between the transistors in the array as p–n junction isolation is usually used. For instance, for an array of NPN transistors, the substrate must be connected to the most negative voltage in the circuit." https://en.wikipedia.org/wiki/Software%20Guard%20Extensions,"Intel Software Guard Extensions (SGX) is a set of instruction codes implementing trusted execution environment that are built into some Intel central processing units (CPUs). They allow user-level and operating system code to define protected private regions of memory, called enclaves. SGX is designed to be useful for implementing secure remote computation, secure web browsing, and digital rights management (DRM). Other applications include concealment of proprietary algorithms and of encryption keys. 
SGX involves encryption by the CPU of a portion of memory (the enclave). Data and code originating in the enclave are decrypted on the fly within the CPU, protecting them from being examined or read by other code, including code running at higher privilege levels such the operating system and any underlying hypervisors. While this can mitigate many kinds of attacks, it does not protect against side-channel attacks. A pivot by Intel in 2021 resulted in the deprecation of SGX from the 11th and 12th generation Intel Core Processors, but development continues on Intel Xeon for cloud and enterprise use. Details SGX was first introduced in 2015 with the sixth generation Intel Core microprocessors based on the Skylake microarchitecture. Support for SGX in the CPU is indicated in CPUID ""Structured Extended feature Leaf"", EBX bit 02, but its availability to applications requires BIOS/UEFI support and opt-in enabling which is not reflected in CPUID bits. This complicates the feature detection logic for applications. Emulation of SGX was added to an experimental version of the QEMU system emulator in 2014. In 2015, researchers at the Georgia Institute of Technology released an open-source simulator named ""OpenSGX"". One example of SGX used in security was a demo application from wolfSSL using it for cryptography algorithms. Intel Goldmont Plus (Gemini Lake) microarchitecture also contains support for Intel SGX. Both in the 11th and 12th generations of Intel Core processo" https://en.wikipedia.org/wiki/Nitro%20Zeus,"Nitro Zeus is the project name for a well funded comprehensive cyber attack plan created as a mitigation strategy after the Stuxnet malware campaign and its aftermath. Unlike Stuxnet, that was loaded onto a system after the design phase to affect its proper operation, Nitro Zeus's objectives are built into a system during the design phase unbeknownst to the system users. This built-in feature allows a more assured and effective cyber attack against the system's users. The information about its existence was raised during research and interviews carried out by Alex Gibney for his Zero Days documentary film. The proposed long term widespread infiltration of major Iranian systems would disrupt and degrade communications, power grid, and other vital systems as desired by the cyber attackers. This was to be achieved by electronic implants in Iranian computer networks. The project was seen as one pathway in alternatives to full-scale war. See also Kill Switch Backdoor (computing) Operation Olympic Games" https://en.wikipedia.org/wiki/Proof%20by%20infinite%20descent,"In mathematics, a proof by infinite descent, also known as Fermat's method of descent, is a particular kind of proof by contradiction used to show that a statement cannot possibly hold for any number, by showing that if the statement were to hold for a number, then the same would be true for a smaller number, leading to an infinite descent and ultimately a contradiction. It is a method which relies on the well-ordering principle, and is often used to show that a given equation, such as a Diophantine equation, has no solutions. Typically, one shows that if a solution to a problem existed, which in some sense was related to one or more natural numbers, it would necessarily imply that a second solution existed, which was related to one or more 'smaller' natural numbers. This in turn would imply a third solution related to smaller natural numbers, implying a fourth solution, therefore a fifth solution, and so on. 
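The SGX excerpt above notes that CPU support is advertised in the "Structured Extended Feature" CPUID leaf, EBX bit 02, while application availability still depends on BIOS/UEFI opt-in. The sketch below only shows the bit test itself; obtaining the raw CPUID registers is platform-specific and is assumed to have happened elsewhere, so `ebx_leaf7` here is a hypothetical input value.

```python
# Sketch of the feature-bit test described above. Reading CPUID is platform
# specific and not shown; `ebx_leaf7` is assumed to be the EBX value returned
# by CPUID leaf 0x7, sub-leaf 0.
SGX_BIT = 2   # EBX bit 2 of the Structured Extended Feature leaf

def cpu_reports_sgx(ebx_leaf7: int) -> bool:
    return bool((ebx_leaf7 >> SGX_BIT) & 1)

# Example with a made-up register value whose bit 2 is set.
print(cpu_reports_sgx(0b0000_0100))   # True
# Even when this returns True, BIOS/UEFI opt-in is still required before
# enclaves can actually be created, as the excerpt points out.
```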
However, there cannot be an infinity of ever-smaller natural numbers, and therefore by mathematical induction, the original premise—that any solution exists—is incorrect: its correctness produces a contradiction. An alternative way to express this is to assume one or more solutions or examples exists, from which a smallest solution or example—a minimal counterexample—can then be inferred. Once there, one would try to prove that if a smallest solution exists, then it must imply the existence of a smaller solution (in some sense), which again proves that the existence of any solution would lead to a contradiction. The earliest uses of the method of infinite descent appear in Euclid's Elements. A typical example is Proposition 31 of Book 7, in which Euclid proves that every composite integer is divided (in Euclid's terminology ""measured"") by some prime number. The method was much later developed by Fermat, who coined the term and often used it for Diophantine equations. Two typical examples are showing the non-solvability of the Diophantine equation and prov" https://en.wikipedia.org/wiki/FPGA%20prototyping,"Field-programmable gate array prototyping (FPGA prototyping), also referred to as FPGA-based prototyping, ASIC prototyping or system-on-chip (SoC) prototyping, is the method to prototype system-on-chip and application-specific integrated circuit designs on FPGAs for hardware verification and early software development. Verification methods for hardware design as well as early software and firmware co-design have become mainstream. Prototyping SoC and ASIC designs with one or more FPGAs and electronic design automation (EDA) software has become a good method to do this. Why prototyping is important Running a SoC design on FPGA prototype is a reliable way to ensure that it is functionally correct. This is compared to designers only relying on software simulations to verify that their hardware design is sound. About a third of all current SoC designs are fault-free during first silicon pass, with nearly half of all re-spins caused by functional logic errors. A single prototyping platform can provide verification for hardware, firmware, and application software design functionality before the first silicon pass. Time-to-market (TTM) is reduced from FPGA prototyping: In today's technological driven society, new products are introduced rapidly, and failing to have a product ready at a given market window can cost a company a considerable amount of revenue. If a product is released too late of a market window, then the product could be rendered useless, costing the company its investment capital in the product. After the design process, FPGAs are ready for production, while standard cell ASICs take more than six months to reach production. Development cost: Development cost of 90-nm ASIC/SoC design tape-out is around $20 million, with a mask set costing over $1 million alone. Development costs of 45-nm designs are expected to top $40 million. With increasing cost of mask sets, and the continuous decrease of IC size, minimizing the number of re-spins is vital to the deve" https://en.wikipedia.org/wiki/Annatto,"Annatto ( or ) is an orange-red condiment and food coloring derived from the seeds of the achiote tree (Bixa orellana), native to tropical parts of the Americas. It is often used to impart a yellow or orange color to foods, but sometimes also for its flavor and aroma. Its scent is described as ""slightly peppery with a hint of nutmeg"" and flavor as ""slightly nutty, sweet and peppery"". 
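The excerpt above is cut off before it states Fermat's typical examples, so as a stand-alone illustration of the method here is a sketch of the classic descent argument that √2 is irrational. It is a standard textbook example chosen for illustration, not necessarily the example the article goes on to give.

```latex
% Sketch of an infinite-descent argument (standard example, chosen here for
% illustration; the article's own truncated examples may differ).
Suppose $\sqrt{2} = a/b$ with $a, b \in \mathbb{N}$. Then $a^2 = 2b^2$, so $a$
is even, say $a = 2a_1$, giving $4a_1^2 = 2b^2$ and hence $b^2 = 2a_1^2$.
Thus $b$ is even as well, say $b = 2b_1$, and $\sqrt{2} = a_1/b_1$ with
$a_1 < a$ and $b_1 < b$. Repeating the step yields an infinite strictly
decreasing sequence of natural numbers $a > a_1 > a_2 > \cdots$, which is
impossible; hence no such $a, b$ exist and $\sqrt{2}$ is irrational.
```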
The color of annatto comes from various carotenoid pigments, mainly bixin and norbixin, found in the reddish waxy coating of the seeds. The condiment is typically prepared by grinding the seeds to a powder or paste. Similar effects can be obtained by extracting some of the color and flavor principles from the seeds with hot water, oil, or lard, which are then added to the food. Annatto and its extracts are now widely used in an artisanal or industrial scale as a coloring agent in many processed food products, such as cheeses, dairy spreads, butter and margarine, custards, cakes and other baked goods, potatoes, snack foods, breakfast cereals, smoked fish, sausages, and more. In these uses, annatto is a natural alternative to synthetic food coloring compounds, but it has been linked to rare cases of food-related allergies. Annatto is of particular commercial value in the United States because the Food and Drug Administration considers colorants derived from it to be ""exempt of certification"". History The annatto tree B. orellana is believed to originate in tropical regions from Mexico to Brazil. It was probably not initially used as a food additive, but for other purposes such as ritual and decorative body painting (still an important tradition in many Brazilian native tribes, such as the Wari'), sunscreen, and insect repellent, and for medical purposes. It was used for Mexican manuscript painting in the 16th century. Annatto has been traditionally used as both a coloring and flavoring agent in various cuisines from Latin America, the Caribbean, the Philippines, and other countries w" https://en.wikipedia.org/wiki/Eirpac,"EIRPAC is Ireland's packet switched X.25 data network. It replaced Euronet in 1984. Eirpac uses the DNIC 2724. HEAnet was first in operation via X.25 4.8Kb Eirpac connections back in 1985. By 1991 most Universities in Ireland used 64k Eirpac VPN connections. Today Eirpac is owned and operated by Eircom but does not accept new applications for Eirpac: no reference is made on the products-offering on their website They began the process of migrating existing customers using more capable forms of telecommunications back in late April 2004. In 2001 Eirpac had approximately 5,000 customers dialing in daily via switched virtual circuits although those numbers have been declining rapidly. Eirpac is still an important element for data transfer in Ireland with numerous banks (automatic teller machines), telecoms switches, pager systems and other networks that utilise permanent virtual circuits. Connecting to Eirpac can be done using a simple AT compatible modem. The dial in number is 1511 + baud rate. So for example to connect at 28,800 bit/s would be ATDT 15112880. The user would then have to authenticate with their Eirpac NUI. The NUI (Network User Identification) consists of a name and password provided by Eir. Sources External links Official website Computer networking Internet in Ireland" https://en.wikipedia.org/wiki/List%20of%20Proton%20Synchrotron%20experiments,"This is a list of past and current experiments at the CERN Proton Synchrotron (PS) facility since its commissioning in 1959. The PS was CERN's first synchrotron and the world's highest energy particle accelerator at the time. It served as the flagship of CERN until the 1980s when its main role became to provide injection beams to other machines such as the Super Proton Synchrotron. The information is gathered from the INSPIRE-HEP database. 
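The Eirpac excerpt above gives a concrete dial-in convention (ATDT 15112880 for a 28,800 bit/s connection). The sketch below sends exactly that quoted dial string to an AT-compatible modem via pyserial; the serial port name and baud rate are assumptions for the example, and it obviously requires real modem hardware to do anything.

```python
# Illustrative only: dials the Eirpac access number quoted above for a
# 28,800 bit/s connection (ATDT 15112880) through an AT-compatible modem
# using pyserial. The serial port name is an assumption for the example.
import serial  # pip install pyserial

def dial_eirpac(port="/dev/ttyS0", dial_string="ATDT15112880"):
    with serial.Serial(port, baudrate=115200, timeout=5) as modem:
        modem.write((dial_string + "\r").encode("ascii"))
        # After CONNECT, the user would authenticate with an Eirpac NUI
        # (name and password), as the article describes.
        return modem.readline().decode("ascii", errors="replace")

# dial_eirpac()  # needs real modem hardware; shown for illustration only
```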
See also Experiments List of Super Proton Synchrotron experiments List of Large Hadron Collider experiments Facilities CERN: European Organization for Nuclear Research PS: Proton Synchrotron SPS: Super Proton Synchrotron ISOLDE: On-Line Isotope Mass Separator ISR: Intersecting Storage Rings LEP: Large Electron–Positron Collider LHC: Large Hadron Collider" https://en.wikipedia.org/wiki/Welfare%20biology,"Welfare biology is a proposed cross-disciplinary field of research to study the positive and negative well-being of sentient individuals in relation to their environment. Yew-Kwang Ng first advanced the field in 1995. Since then, its establishment has been advocated for by a number of writers, including philosophers, who have argued for the importance of creating the research field, particularly in relation to wild animal suffering. Some researchers have put forward examples of existing research that welfare biology could draw upon and suggested specific applications for the research's findings. History Welfare biology was first proposed by the welfare economist Yew-Kwang Ng, in his 1995 paper ""Towards welfare biology: Evolutionary economics of animal consciousness and suffering"". In the paper, Ng defines welfare biology as the ""study of living things and their environment with respect to their welfare (defined as net happiness, or enjoyment minus suffering)."" He also distinguishes between ""affective"" and ""non-affective"" sentients, affective sentients being individuals with the capacity for perceiving the external world and experiencing pleasure or pain, while non-affective sentients have the capacity for perception, with no corresponding experience; Ng argues that because the latter experience no pleasure or suffering, ""[t]heir welfare is necessarily zero, just like nonsentients"". He concludes, based on his modelling of evolutionary dynamics, that suffering dominates enjoyment in nature. Matthew Clarke and Ng, in 2006, used Ng's welfare biology framework to analyse the costs, benefits and welfare implications of the culling of kangaroos—classified as affective sentients—in Puckapunyal, Australia. They concluded that while their discussion ""may give some support to the culling of kangaroos or other animals in certain circumstances, a more preventive measure may be superior to the resort to culling"". In the same year, Thomas Eichner and Rüdiger Pethi analyzed Ng's" https://en.wikipedia.org/wiki/Source%20transformation,"Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively. Process Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit. Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. 
However, this means that source transformation is bound by the same conditions as Thevenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources. Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery. Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source in parallel with an impedance , applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance retains its value and the new voltage source has value equal to the ideal current source's value times the impedance, according to Ohm's Law . In the same way, an ideal voltage source in series with an impedance can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has" https://en.wikipedia.org/wiki/Computer%20architecture,"In computer engineering, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation. History The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept. Two other early and important examples are: John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements; and Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also 1945 and which cited John von Neumann's paper. The term ""architecture"" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of ""system architecture"", a term that seemed more useful than ""machine organization"". Subsequently, Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, ""Computer architecture, like other architecture, is the art of determining the" https://en.wikipedia.org/wiki/Programmable%20matter,"Programmable matter is matter which has the ability to change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) in a programmable fashion, based upon user input or autonomous sensing. Programmable matter is thus linked to the concept of a material which inherently has the ability to perform information processing. 
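A small numeric sketch of the source transformation just described: a real voltage source V in series with an impedance maps to a current source I = V/Z in parallel with the same impedance, and vice versa. Purely resistive values are used here for simplicity; the numbers are invented for the example.

```python
# Sketch of source transformation for a purely resistive case:
#   Thevenin form (V in series with R)  <->  Norton form (I in parallel with R)
# related by V = I * R (Ohm's law), as stated above.
def thevenin_to_norton(v_th: float, r: float) -> tuple:
    """Return (i_n, r) for the equivalent real current source."""
    return v_th / r, r

def norton_to_thevenin(i_n: float, r: float) -> tuple:
    """Return (v_th, r) for the equivalent real voltage source."""
    return i_n * r, r

v_th, r = 10.0, 5.0                  # 10 V source in series with 5 ohm
i_n, _ = thevenin_to_norton(v_th, r)
print(i_n)                           # 2.0 A in parallel with the same 5 ohm
print(norton_to_thevenin(i_n, r))    # (10.0, 5.0) -- round trip
```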
History Programmable matter is a term originally coined in 1991 by Toffoli and Margolus to refer to an ensemble of fine-grained computing elements arranged in space. Their paper describes a computing substrate that is composed of fine-grained compute nodes distributed throughout space which communicate using only nearest neighbor interactions. In this context, programmable matter refers to compute models similar to cellular automata and lattice gas automata. The CAM-8 architecture is an example hardware realization of this model. This function is also known as ""digital referenced areas"" (DRA) in some forms of self-replicating machine science. In the early 1990s, there was a significant amount of work in reconfigurable modular robotics with a philosophy similar to programmable matter. As semiconductor technology, nanotechnology, and self-replicating machine technology have advanced, the use of the term programmable matter has changed to reflect the fact that it is possible to build an ensemble of elements which can be ""programmed"" to change their physical properties in reality, not just in simulation. Thus, programmable matter has come to mean ""any bulk substance which can be programmed to change its physical properties."" In the summer of 1998, in a discussion on artificial atoms and programmable matter, Wil McCarthy and G. Snyder coined the term ""quantum wellstone"" (or simply ""wellstone"") to describe this hypothetical but plausible form of programmable matter. McCarthy has used the term in his fiction. In 2002, Seth Goldstein and Todd Mowry started the claytronics project at Carnegie Mellon University to " https://en.wikipedia.org/wiki/Kendall%27s%20notation,"In queueing theory, a discipline within the mathematical theory of probability, Kendall's notation (or sometimes Kendall notation) is the standard system used to describe and classify a queueing node. D. G. Kendall proposed describing queueing models using three factors written A/S/c in 1953 where A denotes the time between arrivals to the queue, S the service time distribution and c the number of service channels open at the node. It has since been extended to A/S/c/K/N/D where K is the capacity of the queue, N is the size of the population of jobs to be served, and D is the queueing discipline. When the final three parameters are not specified (e.g. M/M/1 queue), it is assumed K = ∞, N = ∞ and D = FIFO. First example: M/M/1 queue A M/M/1 queue means that the time between arrivals is Markovian (M), i.e. the inter-arrival time follows an exponential distribution of parameter λ. The second M means that the service time is Markovian: it follows an exponential distribution of parameter μ. The last parameter is the number of service channel which one (1). Description of the parameters In this section, we describe the parameters A/S/c/K/N/D from left to right. A: The arrival process A code describing the arrival process. The codes used are: S: The service time distribution This gives the distribution of time of the service of a customer. Some common notations are: c: The number of servers The number of service channels (or servers). The M/M/1 queue has a single server and the M/M/c queue c servers. K: The number of places in the queue The capacity of queue, or the maximum number of customers allowed in the queue. When the number is at this maximum, further arrivals are turned away. If this number is omitted, the capacity is assumed to be unlimited, or infinite. 
Note: This is sometimes denoted c + K where K is the buffer size, the number of places in the queue above the number of servers c. N: The calling population The size of calling source. The size of" https://en.wikipedia.org/wiki/Informal%20mathematics,"Informal mathematics, also called naïve mathematics, has historically been the predominant form of mathematics at most times and in most cultures, and is the subject of modern ethno-cultural studies of mathematics. The philosopher Imre Lakatos in his Proofs and Refutations aimed to sharpen the formulation of informal mathematics, by reconstructing its role in nineteenth century mathematical debates and concept formation, opposing the predominant assumptions of mathematical formalism. Informality may not discern between statements given by inductive reasoning (as in approximations which are deemed ""correct"" merely because they are useful), and statements derived by deductive reasoning. Terminology Informal mathematics means any informal mathematical practices, as used in everyday life, or by aboriginal or ancient peoples, without historical or geographical limitation. Modern mathematics, exceptionally from that point of view, emphasizes formal and strict proofs of all statements from given axioms. This can usefully be called therefore formal mathematics. Informal practices are usually understood intuitively and justified with examples—there are no axioms. This is of direct interest in anthropology and psychology: it casts light on the perceptions and agreements of other cultures. It is also of interest in developmental psychology as it reflects a naïve understanding of the relationships between numbers and things. Another term used for informal mathematics is folk mathematics, which is ambiguous; the mathematical folklore article is dedicated to the usage of that term among professional mathematicians. The field of naïve physics is concerned with similar understandings of physics. People use mathematics and physics in everyday life, without really understanding (or caring) how mathematical and physical ideas were historically derived and justified. History There has long been a standard account of the development of geometry in ancient Egypt, followed by Greek " https://en.wikipedia.org/wiki/Kiwi%20drive,"A Kiwi drive is a holonomic drive system of three omni-directional wheels (such as omni wheels or Mecanum wheels), 120 degrees from each other, that enables movement in any direction using only three motors. This is in contrast with non-holonomic systems such as traditionally wheeled or tracked vehicles which cannot move sideways without turning first. This drive system is similar to the Killough platform which achieves omni-directional travel using traditional non-omni-directional wheels in a three wheel configuration. Named after the Flightless national bird of New Zealand The Kiwi" https://en.wikipedia.org/wiki/Schnirelmann%20density,"In additive number theory, the Schnirelmann density of a sequence of numbers is a way to measure how ""dense"" the sequence is. It is named after Russian mathematician Lev Schnirelmann, who was the first to study it. Definition The Schnirelmann density of a set of natural numbers A is defined as where A(n) denotes the number of elements of A not exceeding n and inf is infimum. The Schnirelmann density is well-defined even if the limit of A(n)/n as fails to exist (see upper and lower asymptotic density). Properties By definition, and for all n, and therefore , and if and only if . 
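For the M/M/1 example in the Kendall's notation excerpt above (Poisson arrivals at rate λ, exponential service at rate μ, one server), the steady-state results follow from the utilization ρ = λ/μ. The formulas below are the standard textbook ones, stated here as an assumption since the excerpt itself does not give them, and they are valid only when λ < μ.

```python
# Sketch of standard steady-state M/M/1 results (assumed textbook formulas,
# not quoted from the excerpt): Poisson arrivals at rate lam, exponential
# service at rate mu, one server; stability requires lam < mu.
def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                                   # server utilization
    return {
        "utilization": rho,
        "mean_in_system": rho / (1 - rho),           # L
        "mean_in_queue": rho ** 2 / (1 - rho),       # Lq
        "mean_time_in_system": 1 / (mu - lam),       # W  (Little's law: L = lam * W)
        "mean_wait_in_queue": lam / (mu * (mu - lam)),  # Wq
    }

print(mm1_metrics(lam=2.0, mu=3.0))
```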
Furthermore, Sensitivity The Schnirelmann density is sensitive to the first values of a set: . In particular, and Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity. Schnirelmann's theorems If we set , then Lagrange's four-square theorem can be restated as . (Here the symbol denotes the sumset of and .) It is clear that . In fact, we still have , and one might ask at what point the sumset attains Schnirelmann density 1 and how does it increase. It actually is the case that and one sees that sumsetting once again yields a more populous set, namely all of . Schnirelmann further succeeded in developing these ideas into the following theorems, aiming towards Additive Number Theory, and proving them to be a novel resource (if not greatly powerful) to attack important problems, such as Waring's problem and Goldbach's conjecture. Theorem. Let and be subsets of . Then Note that . Inductively, we have the following generalization. Corollary. Let be a finite family of subsets of . Then The theorem provides the first insights on how sumsets accumulate. It seems unfortunate that its conclusion stops short of showing being superadditive. Yet, Schnirelmann provided us with the following results, which sufficed for most of his purpose. " https://en.wikipedia.org/wiki/Tensor%20Processing%20Unit,"Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale. Comparison to CPUs and GPUs Compared to a graphics processing unit, TPUs are designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, without hardware for rasterisation/texture mapping. The TPU ASICs are mounted in a heatsink assembly, which can fit in a hard drive slot within a data center rack, according to Norman Jouppi. Different types of processors are suited for different types of machine learning models. TPUs are well suited for CNNs, while GPUs have benefits for some fully-connected neural networks, and CPUs can have advantages for RNNs. History The tensor processing unit was announced in May 2016 at Google I/O, when the company said that the TPU had already been used inside their data centers for over a year. The chip has been specifically designed for Google's TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks. However, as of 2017 Google still used CPUs and GPUs for other types of machine learning. Other AI accelerator designs are appearing from other vendors also and are aimed at embedded and robotics markets. Google's TPUs are proprietary. 
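The formulas in the Schnirelmann density excerpt above appear to have been stripped during extraction. The following is a hedged reconstruction of the definition and of Schnirelmann's sumset inequality as they are usually stated, supplied here as standard background rather than as a quotation from the article.

```latex
% Standard statements, reconstructed because the excerpt's formulas are missing.
\sigma A = \inf_{n \ge 1} \frac{A(n)}{n},
\qquad 0 \le \sigma A \le 1,
\qquad \sigma A = 1 \iff \text{every positive integer lies in } A,
\qquad 1 \notin A \implies \sigma A = 0.
% Schnirelmann's inequality for the sumset A \oplus B:
\sigma(A \oplus B) \;\ge\; \sigma A + \sigma B - \sigma A \,\sigma B .
```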
Some models are commercially available, and on February 12, 2018, The New York Times reported that Google ""would allow other companies to buy access to those chips through its cloud-computing service."" Google has said that they were used in the AlphaGo versus Lee Sedol series of man-machine Go games, as well as in the AlphaZero system, which produced Chess, Shogi and Go playing programs f" https://en.wikipedia.org/wiki/Hardware%20for%20artificial%20intelligence,"Specialized computer hardware is often used to execute artificial intelligence (AI) programs faster, and with less energy, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks. As of 2023, the market for AI hardware is dominated by GPUs. Lisp machines Lisp machines were developed in the late 1970s and early 1980s to make Artificial intelligence programs written in the programming language Lisp run faster. Dataflow architecture Dataflow architecture processors used for AI serve various purposes, with varied implementations like the polymorphic dataflow Convolution Engine by Kinara (formerly Deep Vision), structure-driven dataflow by Hailo, and dataflow scheduling by Cerebras. Component hardware AI accelerators Since the 2010s, advances in computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced central processing units (CPUs) as the dominant means to train large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from Alex Net (2012) to Alpha Zero (2017), and found a 300,000-fold increase in the amount of compute needed, with a doubling-time trend of 3.4 months. Artificial Intelligence Hardware Components Cеntral Procеssing Units (CPUs) Evеry computеr systеm is built on cеntral procеssing units (CPUs). Thеy handle duties, do computations, and carry out ordеrs. Evеn if spеcializеd hardwarе is morе еffеctivе at handling AI activitiеs, CPUs arе still еssеntial for managing gеnеral computing tasks in AI systеms. Graphics Procеssing Units (GPUs) AI has sееn a dramatic transformation as a rеsult of graphics procеssing units (GPUs). Thеy arе pеrfеct for AI jobs that rеquirе handling massivе quantitiеs of data and intricatе mathеmatical opеrations bеcausе of thеir " https://en.wikipedia.org/wiki/Operating%20point,"The operating point is a specific point within the operation characteristic of a technical device. This point will be engaged because of the properties of the system and the outside influences and parameters. In electronic engineering establishing an operating point is called biasing. Wanted and unwanted operating points of a system The operating point of a system is the intersection point of the torque-speed curve of drive and machine. Both devices are linked with a shaft so the speed is always identical. The drive creates the torque which rotates both devices. The machine creates the counter-torque, e.g. by being a moved device which needs permanent energy or a wheel turning against the static friction of the track. The drive speed increases when the driving torque is higher than the counter-torque. The drive speed decreases when the counter-torque is higher than the driving torque. At the operating point, the driving torque and the counter-torque are balanced, so the speed does not change anymore. 
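To make the "low precision (e.g. as little as 8-bit)" point in the TPU excerpt above concrete, here is a toy sketch of symmetric int8 quantization of a weight vector. It is only an illustration of that kind of arithmetic, not Google's actual TPU quantization scheme, and the sample weights are invented.

```python
# Toy symmetric int8 quantization sketch, illustrating the kind of low-precision
# arithmetic mentioned above; not the actual scheme used inside the TPU.
import numpy as np

def quantize_int8(x: np.ndarray):
    m = float(np.max(np.abs(x)))
    scale = m / 127.0 if m > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

weights = np.array([0.02, -1.5, 0.75, 3.0, -0.001], dtype=np.float32)
q, s = quantize_int8(weights)
print(q)                      # int8 codes
print(dequantize(q, s))       # close to the originals, within quantization error
```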
A speed change in a stable operating point creates a torque change which acts against this change of speed. A change in speed out of this stable operating point is only possible with a new control intervention. This can be changing the load of the machine or the power of the drive which both changes the torque because it is a change in the characteristic curves. The drive-machine system then runs to a new operating point with a different speed and a different balance of torques. Should the drive torque be higher than the counter torque at any time then the system does not have an operating point. The result will be that the speed increases up to the idle speed or even until destruction. Should the counter torque be higher at any times the speed will decrease until the system stops. Stable and unstable operating points Also in case of an unstable operating point the law of the balance of the torques is always valid. But when the operating point is unstab" https://en.wikipedia.org/wiki/Mechanobiology,"Mechanobiology is an emerging field of science at the interface of biology, engineering, chemistry and physics. It focuses on how physical forces and changes in the mechanical properties of cells and tissues contribute to development, cell differentiation, physiology, and disease. Mechanical forces are experienced and may be interpreted to give biological responses in cells. The movement of joints, compressive loads on the cartilage and bone during exercise, and shear pressure on the blood vessel during blood circulation are all examples of mechanical forces in human tissues. A major challenge in the field is understanding mechanotransduction—the molecular mechanisms by which cells sense and respond to mechanical signals. While medicine has typically looked for the genetic and biochemical basis of disease, advances in mechanobiology suggest that changes in cell mechanics, extracellular matrix structure, or mechanotransduction may contribute to the development of many diseases, including atherosclerosis, fibrosis, asthma, osteoporosis, heart failure, and cancer. There is also a strong mechanical basis for many generalized medical disabilities, such as lower back pain, foot and postural injury, deformity, and irritable bowel syndrome. Load sensitive cells Fibroblasts Skin fibroblasts are vital in development and wound repair and they are affected by mechanical cues like tension, compression and shear pressure. Fibroblasts synthesize structural proteins, some of which are mechanosensitive and form integral part of the extracellular Matrix (ECM) e. g collagen types I, III, IV, V VI, elastin, lamin etc. In addition to the structural proteins, fibroblasts make Tumor-Necrosis-Factor- alpha (TNF-α), Transforming-Growth-Factor-beta (TGF-β) and matrix metalloproteases that plays in tissue in tissue maintenance and remodeling. Chondrocytes Articular cartilage is the connective tissue that protects bones of load-bearing joints like knee, shoulder by providing a lubric" https://en.wikipedia.org/wiki/Pulse%20%28signal%20processing%29,"A pulse in signal processing is a rapid, transient change in the amplitude of a signal from a baseline value to a higher or lower value, followed by a rapid return to the baseline value. Pulse shapes Pulse shapes can arise out of a process called pulse-shaping. Optimum pulse shape depends on the application. Rectangular pulse These can be found in pulse waves, square waves, boxcar functions, and rectangular functions. 
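A toy numerical sketch of the torque balance in the operating-point excerpt above: with made-up drive and load torque-speed curves, the operating point is the speed at which the driving torque equals the counter-torque. Both curves below are invented purely for illustration.

```python
# Toy sketch: find the operating point where drive torque equals the load's
# counter-torque. Both curves are made up for illustration only.
def drive_torque(speed):      # a drive whose torque falls with speed
    return 100.0 - 0.8 * speed

def load_torque(speed):       # a load whose counter-torque grows with speed
    return 10.0 + 0.4 * speed

def find_operating_point(lo=0.0, hi=200.0):
    # Bisection on the torque difference (monotone for these simple curves).
    f = lambda w: drive_torque(w) - load_torque(w)
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

w_op = find_operating_point()
print(w_op, drive_torque(w_op), load_torque(w_op))   # speed 75, torques balance at 40
```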
In digital signals the up and down transitions between high and low levels are called the rising edge and the falling edge. In digital systems the detection of these sides or action taken in response is termed edge-triggered, rising or falling depending on which side of rectangular pulse. A digital timing diagram is an example of a well-ordered collection of rectangular pulses. Nyquist pulse A Nyquist pulse is one which meets the Nyquist ISI criterion and is important in data transmission. An example of a pulse which meets this condition is the sinc function. The sinc pulse is of some significance in signal-processing theory but cannot be produced by a real generator for reasons of causality. In 2013, Nyquist pulses were produced in an effort to reduce the size of pulses in optical fibers, which enables them to be packed 10 times more closely together, yielding a corresponding 10-fold increase in bandwidth. The pulses were more than 99 percent perfect and were produced using a simple laser and modulator. Dirac pulse A Dirac pulse has the shape of the Dirac delta function. It has the properties of infinite amplitude and its integral is the Heaviside step function. Equivalently, it has zero width and an area under the curve of unity. This is another pulse that cannot be created exactly in real systems, but practical approximations can be achieved. It is used in testing, or theoretically predicting, the impulse response of devices and systems, particularly filters. Such responses yield a great deal of information about the system. Gaussian " https://en.wikipedia.org/wiki/Spatial%20scale,"Spatial scale is a specific application of the term scale for describing or categorizing (e.g. into orders of magnitude) the size of a space (hence spatial), or the extent of it at which a phenomenon or process occurs. For instance, in physics an object or phenomenon can be called microscopic if too small to be visible. In climatology, a micro-climate is a climate which might occur in a mountain, valley or near a lake shore. In statistics, a megatrend is a political, social, economical, environmental or technological trend which involves the whole planet or is supposed to last a very large amount of time. The concept is also used in geography, astronomy, and meteorology. These divisions are somewhat arbitrary; where, on this table, mega- is assigned global scope, it may only apply continentally or even regionally in other contexts. The interpretations of meso- and macro- must then be adjusted accordingly. See also Astronomical units of length Cosmic distance ladder List of examples of lengths Orders of magnitude (length) Scale (analytical tool) Scale (geography) Scale (map) Scale (ratio) Location of Earth" https://en.wikipedia.org/wiki/Network%20agility,"Network Agility is an architectural discipline for computer networking. It can be defined as: The ability of network software and hardware to automatically control and configure itself and other network assets across any number of devices on a network. With regards to network hardware, network agility is used when referring to automatic hardware configuration and reconfiguration of network devices e.g. routers, switches, SNMP devices. Network agility, as a software discipline, borrows from many fields, both technical and commercial. 
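A brief numeric check of the Nyquist property mentioned in the pulse excerpt above: the sinc pulse equals 1 at the sampling instant t = 0 and 0 at every other integer symbol instant, which is exactly the zero-intersymbol-interference condition. This is a sketch of the property only, not of any practical pulse generator.

```python
# Numeric sketch of the Nyquist ISI property of the sinc pulse: unity at t = 0
# and zero at all other integer symbol instants (numpy.sinc is sin(pi t)/(pi t)).
import numpy as np

t = np.arange(-5, 6)              # integer symbol instants -5 .. 5
samples = np.sinc(t)              # sinc pulse sampled at the symbol rate
print(dict(zip(t.tolist(), np.round(samples, 12).tolist())))
# Only t = 0 gives 1.0; every other symbol instant gives 0.0, so pulses
# spaced one symbol apart do not interfere with each other's samples.
```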
On the technical side, network agility solutions leverage techniques from areas such as: Service-oriented architecture (SOA) Object-oriented design Architectural patterns Loosely coupled data streaming (e.g.: web services) Iterative design Artificial intelligence Inductive scheduling On-demand computing Utility computing Commercially, network agility is about solving real-world business problems using existing technology. It forms a three-way bridge between business processes, hardware resources, and software assets. In more detail, it takes, as input: 1 the business processes – i.e. what the network must achieve in real business terms; the hardware that resides within the network; and the set of software assets that run on this hardware. Much of this input can be obtained through automatic discovery – finding the hardware, its types and locations, software, licenses etc. The business processes can be inferred to a certain degree, but it is these processes that business managers need to be able to control and organize. Software resources discovered on the network can take a variety of forms – some assets may be licensed software products, others as blocks of software service code that can be accessed via some service enterprise portal, such as (but not necessarily) web services. These services may reside in-house, or they may be 'on-demand' via an on-line subscription service. Indeed, the primary motivation of network " https://en.wikipedia.org/wiki/Ceibo%20emulator,"A ceibo emulator is an in-circuit emulator for microcontrollers and microprocessors. These emulators use bond-out processors, which have internal signals brought out for the purpose of debugging. These signals provide information about the state of the processor that is otherwise unobtainable. Supported microprocessors and microcontrollers include Atmel, Dallas Semiconductor, Infineon, Intel, Microchip, NEC, Philips, STMicroelectronics and Winbond." https://en.wikipedia.org/wiki/List%20of%20coordinate%20charts,"This article contains a non-exhaustive list of coordinate charts for Riemannian manifolds and pseudo-Riemannian manifolds. Coordinate charts are mathematical objects of topological manifolds, and they have multiple applications in theoretical and applied mathematics. When a differentiable structure and a metric are defined, greater structure exists, and this allows the definition of constructs such as integration and geodesics. 
Charts for Riemannian and pseudo-Riemannian surfaces The following charts (with appropriate metric tensors) can be used in the stated classes of Riemannian and pseudo-Riemannian surfaces: Radially symmetric surfaces: Hyperspherical coordinates Surfaces embedded in E3: Monge chart Certain minimal surfaces: Asymptotic chart (see also asymptotic line) Euclidean plane E2: Cartesian chart Sphere S2: Spherical coordinates Stereographic chart Central projection chart Axial projection chart Mercator chart Hyperbolic plane H2: Polar chart Stereographic chart (Poincaré model) Upper half-space chart (Poincaré model) Central projection chart (Klein model) Mercator chart AdS2 (or S1,1) and dS2 (or H1,1): Central projection Sn Hopf chart Hn Upper half-space chart (Poincaré model) Hopf chart The following charts apply specifically to three-dimensional manifolds: Axially symmetric manifolds: Cylindrical chart Parabolic chart Hyperbolic chart Toroidal chart Three-dimensional Euclidean space E3: Cartesian Polar spherical chart Cylindrical chart Elliptical cylindrical, hyperbolic cylindrical, parabolic cylindrical charts Parabolic chart Hyperbolic chart Prolate spheroidal chart (rational and trigonometric forms) Oblate spheroidal chart (rational and trigonometric forms) Toroidal chart Cassini toroidal chart and Cassini bipolar chart Three-sphere S3 Polar chart Stereographic chart Hopf chart Hyperbolic three-space H3 Polar chart Upper half space chart (Poincaré model) Hopf chart See also Coordinate chart Coordinate system Metric tensor List of mathemat" https://en.wikipedia.org/wiki/FITkit%20%28hardware%29,"FITkit is a hardware platform used for educational purposes at the Brno University of Technology in the Czech Republic. FITkit The FITkit contains a low-power microcontroller, a field programmable gate array chip (FPGA) and a set of peripherals. Utilizing advanced reconfigurable hardware, the FITkit may be modified to suit various tasks. Configuration of the FPGA chip can be specified using the VHDL hardware description language (i.e. VHSIC hardware description language). Software for the Microcontroller is written in C and compiled using the GNU Compiler Collection. Configuration of the FPGA chip is synthesized from the source VHDL code using professional design tools, which are also available free of charge. Use in education The FITkit serves as an educational tool in several courses throughout the bachelor's and master's degree programmes. Students are expected to create an FPGA interpreter design of a simple programming language (such as Brainfuck) as part of the Design of Computer Systems course. Licensing The project is developed as an open-source (software) and open-core (hardware), under the BSD license. Related projects QDevKit, multiplatform development environment for FITkit (Linux, BSD and Microsoft Windows operating systems)" https://en.wikipedia.org/wiki/Quark%20model,"In particle physics, the quark model is a classification scheme for hadrons in terms of their valence quarks—the quarks and antiquarks that give rise to the quantum numbers of the hadrons. The quark model underlies ""flavor SU(3)"", or the Eightfold Way, the successful classification scheme organizing the large number of lighter hadrons that were being discovered starting in the 1950s and continuing through the 1960s. It received experimental verification beginning in the late 1960s and is a valid effective classification of them to date. 
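As an illustration of one chart from the list above, here is a minimal sketch of the stereographic chart on the unit sphere S² (projection from the north pole onto the equatorial plane) together with its inverse. These are the standard formulas, valid away from the north pole, and are supplied as background rather than quoted from the article.

```python
# Sketch of the stereographic chart on the unit sphere S^2: project from the
# north pole (0, 0, 1) onto the plane z = 0, and map back. Standard formulas,
# valid for every point of the sphere except the north pole itself.
def stereographic(x, y, z):
    return x / (1 - z), y / (1 - z)

def inverse_stereographic(X, Y):
    d = 1 + X * X + Y * Y
    return 2 * X / d, 2 * Y / d, (d - 2) / d

p = (0.6, 0.0, 0.8)                       # a point on the unit sphere
chart = stereographic(*p)
print(chart)                              # (3.0, 0.0)
print(inverse_stereographic(*chart))      # back to (0.6, 0.0, 0.8)
```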
The model was independently proposed by physicists Murray Gell-Mann, who dubbed them ""quarks"" in a concise paper, and George Zweig, who suggested ""aces"" in a longer manuscript. André Petermann also touched upon the central ideas from 1963 to 1965, without as much quantitative substantiation. Today, the model has essentially been absorbed as a component of the established quantum field theory of strong and electroweak particle interactions, dubbed the Standard Model. Hadrons are not really ""elementary"", and can be regarded as bound states of their ""valence quarks"" and antiquarks, which give rise to the quantum numbers of the hadrons. These quantum numbers are labels identifying the hadrons, and are of two kinds. One set comes from the Poincaré symmetry—JPC, where J, P and C stand for the total angular momentum, P-symmetry, and C-symmetry, respectively. The other set is the flavor quantum numbers such as the isospin, strangeness, charm, and so on. The strong interactions binding the quarks together are insensitive to these quantum numbers, so variation of them leads to systematic mass and coupling relationships among the hadrons in the same flavor multiplet. All quarks are assigned a baryon number of . Up, charm and top quarks have an electric charge of +, while the down, strange, and bottom quarks have an electric charge of −. Antiquarks have the opposite quantum numbers. Quarks are spin- particles, and thus fermions. Each quark" https://en.wikipedia.org/wiki/Disk%20controller,"The disk controller is the controller circuit which enables the CPU to communicate with a hard disk, floppy disk or other kind of disk drive. It also provides an interface between the disk drive and the bus connecting it to the rest of the system. Early disk controllers were identified by their storage methods and data encoding. They were typically implemented on a separate controller card. Modified frequency modulation (MFM) controllers were the most common type in small computers, used for both floppy disk and hard disk drives. Run length limited (RLL) controllers used data compression to increase storage capacity by about 50%. Priam created a proprietary storage algorithm that could double the disk storage. Shugart Associates Systems Interface (SASI) was a predecessor to SCSI. Modern disk controllers are integrated into the disk drive as peripheral controllers. For example, disks called ""SCSI disks"" have built-in SCSI controllers. In the past, before most SCSI controller functionality was implemented in a single chip, separate SCSI controllers interfaced disks to the SCSI bus. These integrated peripheral controllers communicate with a host adapter in the host system over a standardized, high-level storage bus interface. The most common types of interfaces provided nowadays by host controllers are PATA (IDE) and Serial ATA for home use. High-end disks use Parallel SCSI, Fibre Channel or Serial Attached SCSI. Disk controllers can also control the timing of access to flash memory which is not mechanical in nature (i.e. no physical disk). Disk controller versus host adapter The component that allows a computer to talk to a peripheral bus is host adapter or host bus adapter (HBA, e.g. Advanced Host Controller Interface or AHDC). A disk controller allows a disk to talk to the same bus. 
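The fractional charges in the quark-model excerpt above did not survive extraction (they read only "+" and "−"); the standard assignments are +2/3 for up-type quarks, −1/3 for down-type quarks, and baryon number 1/3 per quark, with antiquarks carrying the opposite values. The sketch below composes hadron quantum numbers from valence quarks using those assumed standard values.

```python
# Sketch: compose hadron charge and baryon number from valence quarks using the
# standard assignments (up-type +2/3, down-type -1/3, baryon number 1/3 per
# quark); the excerpt's own fraction symbols were lost in extraction.
from fractions import Fraction as F

CHARGE = {"u": F(2, 3), "c": F(2, 3), "t": F(2, 3),
          "d": F(-1, 3), "s": F(-1, 3), "b": F(-1, 3)}

def hadron_numbers(quarks):
    """quarks: list of (flavor, is_antiquark) pairs for the valence content."""
    charge = sum(-CHARGE[f] if anti else CHARGE[f] for f, anti in quarks)
    baryon = sum(F(-1, 3) if anti else F(1, 3) for _, anti in quarks)
    return charge, baryon

print(hadron_numbers([("u", False), ("u", False), ("d", False)]))  # proton: (1, 1)
print(hadron_numbers([("u", False), ("d", True)]))                 # pi+:    (1, 0)
```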
Signals read by a disk read-and-write head are converted by a disk controller, then transmitted over the peripheral bus, then converted again by the host adapter into the suita" https://en.wikipedia.org/wiki/Automated%20ECG%20interpretation,"Automated ECG interpretation is the use of artificial intelligence and pattern recognition software and knowledge bases to carry out automatically the interpretation, test reporting, and computer-aided diagnosis of electrocardiogram tracings obtained usually from a patient. History The first automated ECG programs were developed in the 1970s, when digital ECG machines were made possible by third-generation digital signal processing boards. Commercial models, such as those developed by Hewlett-Packard, incorporated these programs into clinically used devices. During the 1980s and 1990s, extensive research was carried out by companies and by university labs in order to improve the accuracy rate, which was not very high in the first models. For this purpose, several signal databases with normal and abnormal ECGs were built by institutions such as MIT and used to test the algorithms and their accuracy. Phases A digital representation of each recorded ECG channel is obtained by means of an analog-to-digital converter and special data acquisition software or a digital signal processing (DSP) chip. The resulting digital signal is processed by a series of specialized algorithms, which start by conditioning it, e.g., removal of noise, baseline variation, etc. Feature extraction: mathematical analysis is now performed on the clean signal of all channels to identify and measure a number of features which are important for interpretation and diagnosis; these constitute the input to AI-based programs and include the peak amplitude, area under the curve, and displacement in relation to baseline of the P, Q, R, S and T waves, the time delays between these peaks and valleys, heart rate frequency (instantaneous and average), and many others. Some sort of secondary processing such as Fourier analysis and wavelet analysis may also be performed in order to provide input to pattern recognition-based programs. Logical processing and pattern recognition, using rule-based expe" https://en.wikipedia.org/wiki/Alexander%E2%80%93Hirschowitz%20theorem,"The Alexander–Hirschowitz theorem shows that a general collection of double points in projective space imposes independent conditions on homogeneous polynomials (equivalently, on hypersurfaces) of a given degree, apart from a well-known list of exceptions. In this way, classical polynomial interpolation in several variables can be generalized to points with higher multiplicities." https://en.wikipedia.org/wiki/Software%20calculator,"A software calculator is a calculator that has been implemented as a computer program, rather than as a physical hardware device. They are among the simpler interactive software tools, and, as such, they provide operations for the user to select one at a time. They can be used to perform any process that consists of a sequence of steps each of which applies one of these operations, and have no purpose other than these processes, because the operations are the sole, or at least the primary, features of the calculator, rather than being secondary features that support other functionality that is not normally known simply as calculation. 
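The "sequence of steps, one operation at a time" behaviour described above can be sketched in a few lines of Python; the operation table and accumulator model below are illustrative assumptions, not a description of any particular calculator program:

# Minimal sketch of a software calculator: one operation applied per step
# to a running value, mirroring the "sequence of steps" description above.
OPERATIONS = {
    "add": lambda acc, x: acc + x,
    "subtract": lambda acc, x: acc - x,
    "multiply": lambda acc, x: acc * x,
    "divide": lambda acc, x: acc / x,
}

def run(steps, start=0.0):
    """Apply a sequence of (operation, operand) steps to an accumulator."""
    acc = start
    for op, operand in steps:
        acc = OPERATIONS[op](acc, operand)
    return acc

# Example: 0 + 7, * 6, - 2  ->  40.0
print(run([("add", 7), ("multiply", 6), ("subtract", 2)]))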
As a calculator, rather than a computer, they usually have a small set of relatively simple operations, perform short processes that are not compute intensive and do not accept large amounts of input data or produce many results. Platforms Software calculators are available for many different platforms, and they can be: A program for, or included with an operating system. A program implemented as server or client-side scripting (such as JavaScript) within a web page. Embedded in a calculator watch. Also complex software may have calculator-like dialogs, sometimes with the full calculator functionality, to enter data into the system. History Early years Computers as we know them today first emerged in the 1940s and 1950s. The software that they ran was naturally used to perform calculations, but it was specially designed for a substantial application that was not limited to simple calculations. For example, the LEO computer was designed to run business application software such as payroll. Software specifically to perform calculations as its main purpose was first written in the 1960s, and the first software package for general calculations to obtain widespread use was released in 1978. This was VisiCalc and it was called an interactive visible calculator, but it was actually a spreadsheet, and these are now not normally kno" https://en.wikipedia.org/wiki/Highly%20accelerated%20stress%20audit,"HASA (highly accelerated stress audit) is a proven test method developed to find manufacturing/production process induced defects in electronics and electro-mechanical assemblies before those products are released to market. HASA is a form of HASS (highly accelerated stress screening) – a powerful testing tool for improving product reliability, reducing warranty costs and increasing customer satisfaction. Since HASS levels are more aggressive than conventional screening tools, a POS procedure is used to establish the effectiveness in revealing production induced defects. A POS is vital to determine that the HASS stresses are capable of revealing production defects, but not so extreme as to remove significant life from the test item. Instituting HASS to screen the product is an excellent tool to maintain a high level of robustness and it will reduce the test time required to screen a product resulting in long term savings. Ongoing HASS screening assures that any weak components or manufacturing process degradations are quickly detected and corrected. HASS is not intended to be a rigid process that has an endpoint. It is a dynamic process that may need modification or adjustment over the life of the product. HASS aids in the detection of early life failures. HASA's primary purpose is to monitor manufacturing and prevent any defects from being introduced during the process. A carefully determined HASA sampling plan must be designed that will quickly signal when process quality has been degraded. External links cotsjournalonline.com – COTS Journal HALT/HASS Testing Goes Beyond the Norm Electronic engineering Quality management Environmental testing" https://en.wikipedia.org/wiki/Infraspecific%20name,"In botany, an infraspecific name is the scientific name for any taxon below the rank of species, i.e. an infraspecific taxon or infraspecies. A ""taxon"", plural ""taxa"", is a group of organisms to be given a particular name. The scientific names of botanical taxa are regulated by the International Code of Nomenclature for algae, fungi, and plants (ICN). 
This specifies a three part name for infraspecific taxa, plus a connecting term to indicate the rank of the name. An example of such a name is Astrophytum myriostigma subvar. glabrum, the name of a subvariety of the species Astrophytum myriostigma (bishop's hat cactus). Names below the rank of species of cultivated kinds of plants and of animals are regulated by different codes of nomenclature and are formed somewhat differently. Construction of infraspecific names Article 24 of the ICN describes how infraspecific names are constructed. The order of the three parts of an infraspecific name is: genus name, specific epithet, connecting term indicating the rank (not part of the name, but required), infraspecific epithet. It is customary to italicize all three parts of such a name, but not the connecting term. For example: Acanthocalycium klimpelianum var. macranthum genus name = Acanthocalycium, specific epithet = klimpelianum, connecting term = var. (short for ""varietas"" or variety), infraspecific epithet = macranthum Astrophytum myriostigma subvar. glabrum genus name = Astrophytum, specific epithet = myriostigma, connecting term = subvar. (short for ""subvarietas"" or subvariety), infraspecific epithet = glabrum The recommended abbreviations for ranks below species are: subspecies - recommended abbreviation: subsp. (but ""ssp."" is also in use although not recognised by Art 26) varietas (variety) - recommended abbreviation: var. subvarietas (subvariety) - recommended abbreviation: subvar. forma (form) - recommended abbreviation: f. subforma (subform) - recommended abbreviation: subf. Although the connecting t" https://en.wikipedia.org/wiki/Coherence%20%28physics%29,"In physics, coherence expresses the potential for two waves to interfere. Two monochromatic beams from a single source always interfere. Physical sources are not strictly monochromatic: they may be partly coherent. Beams from different sources are mutually incoherent. When interfering, two waves add together to create a wave of greater amplitude than either one (constructive interference) or subtract from each other to create a wave of minima which may be zero (destructive interference), depending on their relative phase. Constructive or destructive interference are limit cases, and two waves always interfere, even if the result of the addition is complicated or not remarkable. Two waves with constant relative phase will be coherent. The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions. More generally, coherence describes the statistical similarity of a field (electromagnetic field, quantum wave packet etc.) at two points in space or time. Qualitative concept Coherence controls the visibility or contrast of interference patterns. For example visibility of the double slit experiment pattern requires that both slits be illuminated by a coherent wave as illustrated in the figure. Large sources without collimation or sources that mix many different frequencies will have lower visibility. Coherence contains several distinct concepts. Spatial coherence describes the correlation (or predictable relationship) between waves at different points in space, either lateral or longitudinal. Temporal coherence describes the correlation between waves observed at different moments in time. 
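The interference visibility mentioned in the coherence passage above is conventionally defined as follows (standard textbook definition, added here for reference):

% Fringe visibility: ratio of fringe modulation to mean intensity.
\[
  V \;=\; \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},
\]
% where I_max and I_min are the intensities at adjacent fringe maxima and
% minima. For two beams of equal intensity V equals the modulus of the
% complex degree of coherence, so V = 1 for fully coherent beams and
% V = 0 for mutually incoherent beams.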
Both are observed in the Michelson–Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson interferomete" https://en.wikipedia.org/wiki/Degeneracy%20%28biology%29,"Within biological systems, degeneracy occurs when structurally dissimilar components/pathways can perform similar functions (i.e. are effectively interchangeable) under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behavior of two or more components. In particular, if degeneracy is present in a pair of components, then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct. Note that this use of the term has practically no relevance to the questionably meaningful concept of evolutionarily degenerate populations that have lost ancestral functions. Biological examples Examples of degeneracy are found in the genetic code, when many different nucleotide sequences encode the same polypeptide; in protein folding, when different polypeptides fold to be structurally and functionally equivalent; in protein functions, when overlapping binding functions and similar catalytic specificities are observed; in metabolism, when multiple, parallel biosynthetic and catabolic pathways may coexist. More generally, degeneracy is observed in proteins of every functional class (e.g. enzymatic, structural, or regulatory), protein complex assemblies, ontogenesis, the nervous system, cell signalling (crosstalk) and numerous other biological contexts reviewed in. Contribution to robustness Degeneracy contributes to the robustness of biological traits through several mechanisms. Degenerate components compensate for one another under conditions where they are functionally redundant, thus providing robustness against component or pathway failure. Because degenerate components are somewhat different, they tend to harbor unique sensitivities so that a targeted attack such as a specific inhibitor is less likely to present a risk to all components at once. There are numerous biological examples where degeneracy con" https://en.wikipedia.org/wiki/Dolbear%27s%20law,"Dolbear's law states the relationship between the air temperature and the rate at which crickets chirp. It was formulated by Amos Dolbear and published in 1897 in an article called ""The Cricket as a Thermometer"". Dolbear's observations on the relation between chirp rate and temperature were preceded by an 1881 report by Margarette W. Brooks, although this paper went unnoticed until after Dolbear's publication. Dolbear did not specify the species of cricket which he observed, although subsequent researchers assumed it to be the snowy tree cricket, Oecanthus niveus. However, the snowy tree cricket was misidentified as O. niveus in early reports and the correct scientific name for this species is Oecanthus fultoni. The chirping of the more common field crickets is not as reliably correlated to temperature—their chirping rate varies depending on other factors such as age and mating success. In many cases, though, the Dolbear's formula is a close enough approximation for field crickets, too. Dolbear expressed the relationship as the following formula which provides a way to estimate the temperature in degrees Fahrenheit from the number of chirps per minute : This formula is accurate to within a degree or so when applied to the chirping of the field cricket. 
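The formula referred to above is usually quoted as follows; it and the simplified variants described in the next paragraph are restored here for reference, since the displayed equations were lost from the extracted text:

% Dolbear's law: temperature in degrees Fahrenheit from chirps per minute.
\[
  T_F \;=\; 50 + \frac{N_{60} - 40}{4},
\]
% where T_F is the temperature in degrees Fahrenheit and N_60 the number of
% chirps per minute. The simplifications described in the next paragraph
% are usually quoted as
%   T_F ≈ 40 + N_15            (chirps counted over 15 seconds),
%   T_C ≈ 10 + (N_60 − 40)/7   (degrees Celsius),
%   T_C ≈ 5 + N_8              (chirps counted over 8 seconds).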
Counting can be sped up by simplifying the formula and counting the number of chirps produced in 15 seconds (): Reformulated to give the temperature in degrees Celsius (°C), it is: A shortcut method for degrees Celsius is to count the number of chirps in 8 seconds () and add 5 (this is fairly accurate between 5 and 30°C): The above formulae are expressed in terms of integers to make them easier to remember—they are not intended to be exact. In math classes Math textbooks will sometimes cite this as a simple example of where mathematical models break down, because at temperatures outside of the range that crickets live in, the total of chirps is zero as the crickets are dead. You can apply algebra to the e" https://en.wikipedia.org/wiki/Electronic%20badge,"An electronic badge (or electronic conference badge) is a gadget that is a replacement for a traditional paper-based badge or pass issued at public events. It is mainly handed out at computer (security) conferences and hacker events. Their main feature is to display the name of the attendee, but due to their electronic nature they can include a variety of software. The badges were originally a tradition at DEF CON, but spread across different events. Examples Hardware SHA2017 badge, which included an e-ink screen and an ESP32 Card10 for CCCamp2019 Electromagnetic Field Camp badge Software The organization badge.team has developed a platform called ""Hatchery"" to publish and develop software for several badges." https://en.wikipedia.org/wiki/Electronic%20color%20code,"An electronic color code or electronic colour code (see spelling differences) is used to indicate the values or ratings of electronic components, usually for resistors, but also for capacitors, inductors, diodes and others. A separate code, the 25-pair color code, is used to identify wires in some telecommunications cables. Different codes are used for wire leads on devices such as transformers or in building wiring. History Before industry standards were established, each manufacturer used its own unique system for color coding or marking their components. In the 1920s, the RMA resistor color code was developed by the Radio Manufacturers Association (RMA) as a fixed resistor coloring code marking. In 1930, the first radios with RMA color-coded resistors were built. Over many decades, as the organization name changed (RMA, RTMA, RETMA, EIA) so was the name of the code. Though known most recently as EIA color code, the four name variations are found in books, magazines, catalogs, and other documents over more than years. In 1952, it was standardized in IEC 62:1952 by the International Electrotechnical Commission (IEC) and since 1963 also published as EIA RS-279. Originally only meant to be used for fixed resistors, the color code was extended to also cover capacitors with IEC 62:1968. The code was adopted by many national standards like DIN 40825 (1973), BS 1852 (1974) and IS 8186 (1976). The current international standard defining marking codes for resistors and capacitors is IEC 60062:2016. In addition to the color code, these standards define a letter and digit code named RKM code for resistors and capacitors. Color bands were used because they were easily and cheaply printed on tiny components. However, there were drawbacks, especially for color blind people. Overheating of a component or dirt accumulation may make it impossible to distinguish brown from red or orange. 
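As a concrete illustration of how the color bands described above are read, here is a small Python sketch of the usual four-band resistor decoding; the band tables follow the standard EIA values and the function name is ours:

# Decode a standard 4-band resistor color code: two significant digits,
# a decimal multiplier, and a tolerance band (usual EIA convention).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
MULTIPLIERS = dict(DIGITS, gold=-1, silver=-2)      # exponent of 10
TOLERANCE = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0}

def decode_4_band(b1, b2, b3, b4):
    """Return (resistance in ohms, tolerance in percent)."""
    value = (10 * DIGITS[b1] + DIGITS[b2]) * 10 ** MULTIPLIERS[b3]
    return value, TOLERANCE[b4]

# Example: yellow-violet-red-gold -> (4700, 5.0), i.e. 4.7 kOhm, +/-5%
print(decode_4_band("yellow", "violet", "red", "gold"))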
Advances in printing technology have now made printed numbers more practical on small co" https://en.wikipedia.org/wiki/Fast-scan%20cyclic%20voltammetry,"Fast-scan cyclic voltammetry (FSCV) is cyclic voltammetry with a very high scan rate (up to ). Application of high scan rate allows rapid acquisition of a voltammogram within several milliseconds and ensures high temporal resolution of this electroanalytical technique. An acquisition rate of 10 Hz is routinely employed. FSCV in combination with carbon-fiber microelectrodes became a very popular method for detection of neurotransmitters, hormones and metabolites in biological systems. Initially, FSCV was successfully used for detection of electrochemically active biogenic amines release in chromaffin cells (adrenaline and noradrenaline), brain slices (5-HT, dopamine, norepinephrine) and in vivo in anesthetized or awake and behaving animals (dopamine). Further refinements of the method have enabled detection of 5-HT, HA, norepinephrine, adenosine, oxygen, pH changes in vivo in rats and mice as well as measurement of dopamine and serotonin concentration in fruit flies. Principles of FSCV In fast-scan cyclic voltammetry (FSCV), a small carbon fiber electrode (micrometer scale) is inserted into living cells, tissue, or extracellular space. The electrode is then used to quickly raise and lower the voltage in a triangular wave fashion. When the voltage is in the correct range (typically ±1 Volt) the compound of interest will be repeatedly oxidized and reduced. This will result in a movement of electrons in solution that will ultimately create a small alternating current (nano amps scale). By subtracting the background current created by the probe from the resulting current, it is possible to generate a voltage vs. current plot that is unique to each compound. Since the time scale of the voltage oscillations is known, this can then be used to calculate a plot of the current in solution as a function of time. The relative concentrations of the compound may be calculated as long as the number of electrons transferred in each oxidation and reduction reaction is known. " https://en.wikipedia.org/wiki/Leuckart%27s%20law,"Leuckart's law is an empirical law in zoology that states that the size of the eye of an animal is related to its maximum speed of movement; fast-moving animals have larger eyes, after allowing for the effects of body mass. The hypothesis dates from 1876, and in older literature is usually referred to as Leuckart's ratio. It was proposed by Rudolf Leuckart in 1876. The principle was initially applied to birds; it has also been applied to mammals. Criticism A study of 88 bird species, published in 2011, found no useful correlation between flight speed and eye size." https://en.wikipedia.org/wiki/RNDIS,"The Remote Network Driver Interface Specification (RNDIS) is a Microsoft proprietary protocol used mostly on top of USB. It provides a virtual Ethernet link to most versions of the Windows, Linux, and FreeBSD operating systems. Multiple revisions of a partial RNDIS specification are available from Microsoft, but Windows implementations have been observed to issue requests not included in that specification, and to have undocumented constraints. The protocol is tightly coupled to Microsoft's programming interfaces and models, most notably the Network Driver Interface Specification (NDIS), which are alien to operating systems other than Windows. 
This complicates implementing RNDIS on non-Microsoft operating systems, but Linux, FreeBSD, NetBSD and OpenBSD implement RNDIS natively. The USB Implementers Forum (USB-IF) defines at least three non-proprietary USB communications device class (USB CDC) protocols with comparable ""virtual Ethernet"" functionality; one of them (CDC-ECM) predates RNDIS and is widely used for interoperability with non-Microsoft operating systems, but does not work with Windows. Most versions of Android include RNDIS USB functionality. For example, Samsung smartphones have the capability and use RNDIS over USB to operate as a virtual Ethernet card that will connect the host PC to the mobile or Wi-Fi network in use by the phone, effectively working as a mobile broadband modem or a wireless card, for mobile hotspot tethering. Controversy In 2022 it was suggested that support for RNDIS should be removed from Linux, claiming that it is inherently and uncorrectably insecure in the presence of untrusted USB devices. See also Ethernet over USB Qualcomm MSM Interface - A Qualcomm proprietary alternative" https://en.wikipedia.org/wiki/Problem%20solving%20environment,"A problem solving environment (PSE) is a complete, integrated and specialised piece of software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problems, selecting algorithms, simulating numerical values and viewing and analysing results. Purpose of PSE Many PSEs were introduced in the 1990s. They use the language of the respective field and often employ modern graphical user interfaces. The goal is to make the software easy to use for specialists in fields other than computer science. PSEs are available for generic problems like data visualization or large systems of equations and for narrow fields of science or engineering like gas turbine design. History The term appeared a few years after the release of Fortran and Algol 60. At the time it was expected that such high-level languages would eliminate the need for professional programmers; instead, scientists adopted them to write their own programs. The problem solving environment for parallel scientific computation was introduced in 1960 as the first organised software collection with minor standardisation. In 1970, PSE research initially focused on providing higher-level programming languages than Fortran, and plotting libraries appeared. Library development continued, followed by the emergence of computational packages and graphical systems for data visualisation. By the 1990s, hypertext and point-and-click interfaces had moved towards inter-operability, and a ""software parts"" industry finally existed. Over the following decades, many PSEs have been developed to solve problems and to support users from different categories, including education, general programming, CSE software learning, job executing " https://en.wikipedia.org/wiki/List%20of%20algebraic%20geometry%20topics,"This is a list of algebraic geometry topics, by Wikipedia page. 
Classical topics in projective geometry Affine space Projective space Projective line, cross-ratio Projective plane Line at infinity Complex projective plane Complex projective space Plane at infinity, hyperplane at infinity Projective frame Projective transformation Fundamental theorem of projective geometry Duality (projective geometry) Real projective plane Real projective space Segre embedding of a product of projective spaces Rational normal curve Algebraic curves Conics, Pascal's theorem, Brianchon's theorem Twisted cubic Elliptic curve, cubic curve Elliptic function, Jacobi's elliptic functions, Weierstrass's elliptic functions Elliptic integral Complex multiplication Weil pairing Hyperelliptic curve Klein quartic Modular curve Modular equation Modular function Modular group Supersingular primes Fermat curve Bézout's theorem Brill–Noether theory Genus (mathematics) Riemann surface Riemann–Hurwitz formula Riemann–Roch theorem Abelian integral Differential of the first kind Jacobian variety Generalized Jacobian Moduli of algebraic curves Hurwitz's theorem on automorphisms of a curve Clifford's theorem on special divisors Gonality of an algebraic curve Weil reciprocity law Algebraic geometry codes Algebraic surfaces Enriques–Kodaira classification List of algebraic surfaces Ruled surface Cubic surface Veronese surface Del Pezzo surface Rational surface Enriques surface K3 surface Hodge index theorem Elliptic surface Surface of general type Zariski surface Algebraic geometry: classical approach Algebraic variety Hypersurface Quadric (algebraic geometry) Dimension of an algebraic variety Hilbert's Nullstellensatz Complete variety Elimination theory Gröbner basis Projective variety Quasiprojective variety Canonical bundle Complete intersection Serre duality Spaltenstein variety Arithmetic genus, geometric genus, irregularity Tangent space, Zariski tangent space Function field of an algebraic variet" https://en.wikipedia.org/wiki/Mathematical%20Alphanumeric%20Symbols,"Mathematical Alphanumeric Symbols is a Unicode block comprising styled forms of Latin and Greek letters and decimal digits that enable mathematicians to denote different notions with different letter styles. The letters in various fonts often have specific, fixed meanings in particular areas of mathematics. By providing uniformity over numerous mathematical articles and books, these conventions help to read mathematical formulas. These also may be used to differentiate between concepts that share a letter in a single problem. Unicode now includes many such symbols (in the range U+1D400–U+1D7FF). The rationale behind this is that it enables design and usage of special mathematical characters (fonts) that include all necessary properties to differentiate from other alphanumerics, e.g. in mathematics an italic ""𝐴"" can have a different meaning from a roman letter ""A"". Unicode originally included a limited set of such letter forms in its Letterlike Symbols block before completing the set of Latin and Greek letter forms in this block beginning in version 3.1. Unicode expressly recommends that these characters not be used in general text as a substitute for presentational markup; the letters are specifically designed to be semantically different from each other. Unicode does include a set of normal serif letters in the set. Still they have found some usage on social media, for example by people who want a stylized user name, and in email spam, in an attempt to bypass filters. 
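To make the U+1D400–U+1D7FF range mentioned above concrete, the following short Python sketch maps ASCII capital letters onto their mathematical bold counterparts; the helper name is invented for the example:

import unicodedata

# Map ASCII capitals A-Z onto Mathematical Bold capitals, which begin at
# U+1D400 (MATHEMATICAL BOLD CAPITAL A) in this block.
def to_math_bold_caps(text: str) -> str:
    return "".join(
        chr(0x1D400 + ord(c) - ord("A")) if "A" <= c <= "Z" else c
        for c in text
    )

print(to_math_bold_caps("ABC"))                  # bold forms of A, B and C
print(unicodedata.name(to_math_bold_caps("A")))  # MATHEMATICAL BOLD CAPITAL A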
All these letter shapes may be manipulated with MathML's attribute mathvariant. The introduction date of some of the more commonly used symbols can be found in the Table of mathematical symbols by introduction date. Tables of styled letters and digits These tables show all styled forms of Latin and Greek letters, symbols and digits in the Unicode Standard, with the normal unstyled forms of these characters shown with a cyan background (the basic unstyled letters may be serif or sans-serif depen" https://en.wikipedia.org/wiki/List%20of%20complex%20and%20algebraic%20surfaces,"This is a list of named algebraic surfaces, compact complex surfaces, and families thereof, sorted according to their Kodaira dimension following Enriques–Kodaira classification. Kodaira dimension −∞ Rational surfaces Projective plane Quadric surfaces Cone (geometry) Cylinder Ellipsoid Hyperboloid Paraboloid Sphere Spheroid Rational cubic surfaces Cayley nodal cubic surface, a certain cubic surface with 4 nodes Cayley's ruled cubic surface Clebsch surface or Klein icosahedral surface Fermat cubic Monkey saddle Parabolic conoid Plücker's conoid Whitney umbrella Rational quartic surfaces Châtelet surfaces Dupin cyclides, inversions of a cylinder, torus, or double cone in a sphere Gabriel's horn Right circular conoid Roman surface or Steiner surface, a realization of the real projective plane in real affine space Tori, surfaces of revolution generated by a circle about a coplanar axis Other rational surfaces in space Boy's surface, a sextic realization of the real projective plane in real affine space Enneper surface, a nonic minimal surface Henneberg surface, a minimal surface of degree 15 Bour's minimal surface, a surface of degree 16 Richmond surfaces, a family of minimal surfaces of variable degree Other families of rational surfaces Coble surfaces Del Pezzo surfaces, surfaces with an ample anticanonical divisor Hirzebruch surfaces, rational ruled surfaces Segre surfaces, intersections of two quadrics in projective 4-space Unirational surfaces of characteristic 0 Veronese surface, the Veronese embedding of the projective plane into projective 5-space White surfaces, the blow-up of the projective plane at points by the linear system of degree- curves through those points Bordiga surfaces, the White surfaces determined by families of quartic curves Non-rational ruled surfaces Class VII surfaces Vanishing second Betti number: Hopf surfaces Inoue surfaces; several other families discovered by Inoue have also been called """ https://en.wikipedia.org/wiki/Chipset,"In a computer system, a chipset is a set of electronic components on one or more ULSI integrated circuits known as a ""Data Flow Management System"" that manages the data flow between the processor, memory and peripherals. It is usually found on the motherboard of computers. Chipsets are usually designed to work with a specific family of microprocessors. Because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance. Computers In computing, the term chipset commonly refers to a set of specialized chips on a computer's motherboard or an expansion card. In personal computers, the first chipset for the IBM PC AT of 1984 was the NEAT chipset developed by Chips and Technologies for the Intel 80286 CPU. In home computers, game consoles, and arcade hardware of the 1980s and 1990s, the term chipset was used for the custom audio and graphics chips. 
Examples include the Original Amiga chipset and Sega's System 16 chipset. In x86-based personal computers, the term chipset often refers to a specific pair of chips on the motherboard: the northbridge and the southbridge. The northbridge links the CPU to very high-speed devices, especially RAM and graphics controllers, and the southbridge connects to lower-speed peripheral buses (such as PCI or ISA). In many modern chipsets, the southbridge contains some on-chip integrated peripherals, such as Ethernet, USB, and audio devices. Motherboards and their chipsets often come from different manufacturers. , manufacturers of chipsets for x86 motherboards include AMD, Intel, VIA Technologies and Zhaoxin. In the 1990s, a major designer and manufacturer of chipsets was VLSI Technology in Tempe, Arizona. The early Apple Power Macintosh PCs (that used the Motorola 68030 and 68040) had chipsets from VLSI Technology. Some of their innovations included the integration of PCI bridge logic, the GraphiCore 2D graphics accelerator and direct support for synchronous " https://en.wikipedia.org/wiki/Relativistic%20heat%20conduction,"Relativistic heat conduction refers to the modelling of heat conduction (and similar diffusion processes) in a way compatible with special relativity. In special (and general) relativity, the usual heat equation for non-relativistic heat conduction must be modified, as it leads to faster-than-light signal propagation. Relativistic heat conduction, therefore, encompasses a set of models for heat propagation in continuous media (solids, fluids, gases) that are consistent with relativistic causality, namely the principle that an effect must be within the light-cone associated to its cause. Any reasonable relativistic model for heat conduction must also be stable, in the sense that differences in temperature propagate both slower than light and are damped over time (this stability property is intimately intertwined with relativistic causality). Parabolic model (non-relativistic) Heat conduction in a Newtonian context is modelled by the Fourier equation, namely a parabolic partial differential equation of the kind: where θ is temperature, t is time, α = k/(ρ c) is thermal diffusivity, k is thermal conductivity, ρ is density, and c is specific heat capacity. The Laplace operator,, is defined in Cartesian coordinates as This Fourier equation can be derived by substituting Fourier’s linear approximation of the heat flux vector, q, as a function of temperature gradient, into the first law of thermodynamics where the del operator, ∇, is defined in 3D as It can be shown that this definition of the heat flux vector also satisfies the second law of thermodynamics, where s is specific entropy and σ is entropy production. This mathematical model is inconsistent with special relativity: the Green function associated to the heat equation (also known as heat kernel) has support that extends outside the light-cone, leading to faster-than-light propagation of information. For example, consider a pulse of heat at the origin; then according to Fourier equation, it is felt (i." https://en.wikipedia.org/wiki/Dynamic%20bandwidth%20allocation,"Dynamic bandwidth allocation is a technique by which traffic bandwidth in a shared telecommunications medium can be allocated on demand and fairly between different users of that bandwidth. This is a form of bandwidth management, and is essentially the same thing as statistical multiplexing. 
Where the sharing of a link adapts in some way to the instantaneous traffic demands of the nodes connected to the link. Dynamic bandwidth allocation takes advantage of several attributes of shared networks: all users are typically not connected to the network at one time even when connected, users are not transmitting data (or voice or video) at all times most traffic occurs in bursts—there are gaps between packets of information that can be filled with other user traffic Different network protocols implement dynamic bandwidth allocation in different ways. These methods are typically defined in standards developed by standards bodies such as the ITU, IEEE, FSAN, or IETF. One example is defined in the ITU G.983 specification for passive optical network (PON). See also Statistical multiplexing Channel access method Dynamic channel allocation Reservation ALOHA (R-ALOHA) Telecommunications techniques Computer networking Radio resource management" https://en.wikipedia.org/wiki/Substrate%20presentation,"Substrate presentation is a biological process that activates a protein. The protein is sequestered away from its substrate and then activated by release and exposure of the protein to its substrate. A substrate is typically the substance on which an enzyme acts but can also be a protein surface to which a ligand binds. The substrate is the material acted upon. In the case of an interaction with an enzyme, the protein or organic substrate typically changes chemical form. Substrate presentation differs from allosteric regulation in that the enzyme need not change its conformation to begin catalysis. Substrate presentation is best described for nanoscopic distances (<100 nm). Examples Amyloid Precursor Protein Amyloid precursor protein (APP) is cleaved by beta and gamma secretase to yield a 40-42 amino acid peptide responsible for beta amyloid plaques associated with Alzheimer's disease. The enzymes are regulated by substrate presentation. The substrate APP is palmitoylated and moves in and out of GM1 lipid rafts in response to astrocyte cholesterol. Cholesterol delivered by apolipoprotein E (ApoE) drives APP to associate with GM1 lipid rafts. When cholesterol is low, the protein traffics to the disordered region and is cleaved by alpha secretase to produce a non-amylogenic product. The enzymes do not appear to respond to cholesterol, only the substrate moves. Hydrophobicity drives the partitioning of molecules. In the cell, this gives rise to compartmentalization within the cell and within cell membranes. For lipid rafts, palmitoylation regulates raft affinity for the majority of integral raft proteins. Raft regulation is regulated by cholesterol signaling. Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated causing the enzyme to traffic to GM1 lipid domains or ""lipid rafts"". The substrate of phospholipase D is phosphatidylcholine (PC) which is unsaturated and is of low abundance in li" https://en.wikipedia.org/wiki/Diskless%20shared-root%20cluster,"A diskless shared-root cluster is a way to manage several machines at the same time. Instead of each having its own operating system (OS) on its local disk, there is only one image of the OS available on a server, and all the nodes use the same image. (SSI cluster = single-system image) The simplest way to achieve this is to use a NFS server, configured to host the generic boot image for the SSI cluster nodes. 
(pxe + dhcp + tftp + nfs) To ensure that there is no single point of failure, the NFS export for the boot-image should be hosted on a two node cluster. The architecture of a diskless computer cluster makes it possible to separate servers and storage array. The operating system as well as the actual reference data (userfiles, databases or websites) are stored competitively on the attached storage system in a centralized manner. Any server that acts as a cluster node can be easily exchanged by demand. The additional abstraction layer between storage system and computing power eases the scale out of the infrastructure. Most notably the storage capacity, the computing power and the network bandwidth can be scaled independent from one another. A similar technology can be found in VMScluster (OpenVMS) and TruCluster (Tru64 UNIX). The open-source implementation of a diskless shared-root cluster is known as Open-Sharedroot. Literature Marc Grimme, Mark Hlawatschek, Thomas Merz: Data sharing with a Red Hat GFS storage cluster Marc Grimme, Mark Hlawatschek German Whitepaper: Der Diskless Shared-root Cluster (PDF-Datei; 1,1 MB) Kenneth W. Preslan: Red Hat GFS 6.1 – Administrator’s Guide" https://en.wikipedia.org/wiki/Euler%20product,"In number theory, an Euler product is an expansion of a Dirichlet series into an infinite product indexed by prime numbers. The original such product was given for the sum of all positive integers raised to a certain power as proven by Leonhard Euler. This series and its continuation to the entire complex plane would later become known as the Riemann zeta function. Definition In general, if is a bounded multiplicative function, then the Dirichlet series is equal to where the product is taken over prime numbers , and is the sum In fact, if we consider these as formal generating functions, the existence of such a formal Euler product expansion is a necessary and sufficient condition that be multiplicative: this says exactly that is the product of the whenever factors as the product of the powers of distinct primes . An important special case is that in which is totally multiplicative, so that is a geometric series. Then as is the case for the Riemann zeta function, where , and more generally for Dirichlet characters. Convergence In practice all the important cases are such that the infinite series and infinite product expansions are absolutely convergent in some region that is, in some right half-plane in the complex numbers. This already gives some information, since the infinite product, to converge, must give a non-zero value; hence the function given by the infinite series is not zero in such a half-plane. In the theory of modular forms it is typical to have Euler products with quadratic polynomials in the denominator here. The general Langlands philosophy includes a comparable explanation of the connection of polynomials of degree , and the representation theory for . 
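The defining equations that the Euler product passage above refers to were lost from the extracted text; the standard statement is reproduced here for reference:

% Euler product of a Dirichlet series with a bounded multiplicative a(n)
% (the product runs over the primes p):
\[
  \sum_{n=1}^{\infty} \frac{a(n)}{n^{s}}
    \;=\; \prod_{p\ \mathrm{prime}} P(p, s),
  \qquad
  P(p, s) \;=\; \sum_{k=0}^{\infty} \frac{a(p^{k})}{p^{ks}} .
\]
% When a(n) is totally multiplicative, each local factor is a geometric
% series and collapses to
\[
  P(p, s) \;=\; \frac{1}{1 - a(p)\,p^{-s}} .
\]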
Examples The following examples will use the notation P for the set of all primes, that is, P = {p : p is prime}. The Euler product attached to the Riemann zeta function ζ(s), also using the sum of the geometric series, is ζ(s) = ∑ 1/n^s = ∏_{p∈P} 1/(1 − p^(−s)), while for the Liouville function λ(n), it is ∑ λ(n)/n^s = ∏_{p∈P} 1/(1 + p^(−s)) = ζ(2s)/ζ(s). Using their reciprocals, two Euler produc" https://en.wikipedia.org/wiki/DNA%20laddering,"DNA laddering is a feature that can be observed when DNA fragments, resulting from apoptotic DNA fragmentation, are visualized after separation by gel electrophoresis, which results in a characteristic ladder pattern. It was first described in 1980 by Andrew Wyllie at the University of Edinburgh medical school. DNA fragments can also be detected in cells that underwent necrosis. DNA degradation DNA laddering is a distinctive feature of DNA degraded by caspase-activated DNase (CAD), which is a key event during apoptosis. CAD cleaves genomic DNA at internucleosomal linker regions, resulting in DNA fragments that are multiples of 180–185 base-pairs in length. Separation of the fragments by agarose gel electrophoresis and subsequent visualization, for example by ethidium bromide staining, results in a characteristic ""ladder"" pattern. A simple method of selective extraction of fragmented DNA from apoptotic cells without the presence of high molecular weight DNA sections, generating the laddering pattern, utilizes pretreatment of cells in ethanol. Apoptosis and necrosis While most of the morphological features of apoptotic cells are short-lived, DNA laddering can be used as a final-state read-out method and has therefore become a reliable method to distinguish apoptosis from necrosis. DNA laddering can also be used to see if cells underwent apoptosis in the presence of a virus. This is useful because it can help determine the effects a virus has on a cell. DNA laddering can only be used to detect apoptosis during the later stages of apoptosis. This is due to DNA fragmentation taking place in a later stage of the apoptosis process. DNA laddering is used to test for apoptosis of many cells, and is not accurate at testing for only a few cells that committed apoptosis. To enhance the accuracy in testing for apoptosis, other assays are used along with DNA laddering such as TEM and TUNEL. With recen" https://en.wikipedia.org/wiki/FTOS,"FTOS or Force10 Operating System is the firmware family used on Force10 Ethernet switches. It has similar functionality to Cisco's NX-OS or Juniper's Junos. FTOS 10 runs on Debian. As part of a re-branding strategy of Dell, FTOS will be renamed to Dell Networking Operating System (DNOS) 9.x or above, while the legacy PowerConnect switches will use DNOS 6.x: see the separate article on DNOS. Hardware Abstraction Layer Three of the four product families from Dell Force10 use the Broadcom Trident+ ASICs, but the company doesn't use the APIs from Broadcom: the developers at Force10 have written their own Hardware Abstraction Layer so that FTOS can run on different hardware platforms with minimal impact on the firmware. Currently three of the four F10 switch families are based on the Broadcom Trident+ (while the fourth, the E-series, runs on self-developed ASICs); and if the product developers want or need to use different hardware for new products they only need to develop a HAL for that new hardware and the same firmware can run on it. 
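The hardware abstraction layer idea described above can be sketched generically; the following Python sketch is not FTOS code, and the interface and class names are invented for the example:

# Generic hardware-abstraction-layer sketch: firmware logic is written
# against an abstract interface, and each ASIC family only needs its own
# small adapter for the same firmware to run on it.
from abc import ABC, abstractmethod

class SwitchAsicHal(ABC):                 # invented interface name
    @abstractmethod
    def program_mac_entry(self, mac: str, port: int) -> None: ...

class VendorTridentHal(SwitchAsicHal):    # adapter for a vendor ASIC family
    def program_mac_entry(self, mac, port):
        print(f"vendor ASIC: MAC {mac} -> port {port}")

class InHouseAsicHal(SwitchAsicHal):      # adapter for a self-developed ASIC
    def program_mac_entry(self, mac, port):
        print(f"in-house ASIC: MAC {mac} -> port {port}")

def learn_address(hal: SwitchAsicHal, mac: str, port: int) -> None:
    # The firmware-level logic stays identical regardless of the ASIC.
    hal.program_mac_entry(mac, port)

learn_address(VendorTridentHal(), "00:11:22:33:44:55", 7)
learn_address(InHouseAsicHal(), "00:11:22:33:44:55", 7)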
This keeps the company flexible and not dependent on a specific hardware-vendor and can use both 3rd party or self designed ASIC's and chipsets. The human interface in FTOS, that is the way network-administrators can configure and monitor their switches, is based on NetBSD, an implementation which often used in embedded networking-systems. NetBSD is a very stable, open source, OS running on many different hardware platforms. By choosing for a proven technology with extended TCP functionality built into the core of the OS it reduces time during development of new products or extending the FTOS with new features. Modular setup FTOS is also modular where different parts of the OS run independently from each other within one switch: if one process would fail the impact on other processes on the switch are limited. This modular setup is also taken to the hardware level in some product-lines where a routing-module has three se" https://en.wikipedia.org/wiki/2%20%2B%202%20%3D%205,"""Two plus two equals five"" (2 + 2 = 5) is a mathematically incorrect phrase used in the 1949 dystopian novel Nineteen Eighty-Four by George Orwell. It appears as a possible statement of Ingsoc (English Socialism) philosophy, like the dogma ""War is Peace"", which the Party expects the citizens of Oceania to believe is true. In writing his secret diary in the year 1984, the protagonist Winston Smith ponders if the Inner Party might declare that ""two plus two equals five"" is a fact. Smith further ponders whether or not belief in such a consensus reality makes the lie true. About the falsity of ""two plus two equals five"", in the Ministry of Love, the interrogator O'Brien tells the thought criminal Smith that control over physical reality is unimportant to the Party, provided the citizens of Oceania subordinate their real-world perceptions to the political will of the Party; and that, by way of doublethink: ""Sometimes, Winston. [Sometimes it is four fingers.] Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once"". As a theme and as a subject in the arts, the anti-intellectual slogan 2 + 2 = 5 pre-dates Orwell and has produced literature, such as Deux et deux font cinq (Two and Two Make Five), written in 1895 by Alphonse Allais, which is a collection of absurdist short stories; and the 1920 imagist art manifesto 2 × 2 = 5 by the poet Vadim Shershenevich, in the 20th century. Self-evident truth and self-evident falsehood In the 17th century, in the Meditations on First Philosophy, in which the Existence of God and the Immortality of the Soul are Demonstrated (1641), René Descartes said that the standard of truth is self-evidence of clear and distinct ideas. Despite the logician Descartes' understanding of ""self-evident truth"", the philosopher Descartes considered that the self-evident truth of ""two plus two equals four"" might not exist beyond the human mind; that there might not exist correspondence between abstract ideas and concret" https://en.wikipedia.org/wiki/Stamped%20circuit%20board,"A stamped circuit board (SCB) is used to mechanically support and electrically connect electronic components using conductive pathways, tracks or traces etched from copper sheets laminated onto a non-conductive substrate. This technology is used for small circuits, for instance in the production of LEDs. Similar to printed circuit boards this layer structure may comprise glass-fibre reinforced epoxy resin and copper. 
Basically, in the case of LED substrates three variations are possible: the PCB (printed circuit board), plastic-injection molding and the SCB. Using the SCB technology it is possible to structure and laminate the most widely differing material combinations in a reel-to-reel production process. As the layers are structured separately, improved design concepts are able to be implemented. Consequently, a far better and quicker heat dissipation from within the chip is achieved. Production Both the plastic and the metal are initially processed on separate reels, .i.e. in accordance with the requirements the materials are individually structured by stamping (“brought into form“) and then merged. Advantages The engineering respectively choice of substrates actually comes down to the particular application, module design/substrate assembly, material and thickness of the material involved. Taking these parameters it is possible to attain a good thermal management by using SCB technology, because rapid heat dissipation from beneath the chip means a longer service life for the system. Furthermore, SCB technology allows the material to be chosen to correspond to the pertinent requirements and then to optimize the design to arrive at a “perfect fit”." https://en.wikipedia.org/wiki/Factorial%20code,"Most real world data sets consist of data vectors whose individual components are not statistically independent. In other words, knowing the value of an element will provide information about the value of elements in the data vector. When this occurs, it can be desirable to create a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent. Later supervised learning usually works much better when the raw input data is first translated into such a factorial code. For example, suppose the final goal is to classify images with highly redundant pixels. A naive Bayes classifier will assume the pixels are statistically independent random variables and therefore fail to produce good results. If the data are first encoded in a factorial way, however, then the naive Bayes classifier will achieve its optimal performance (compare Schmidhuber et al. 1996). To create factorial codes, Horace Barlow and co-workers suggested to minimize the sum of the bit entropies of the code components of binary codes (1989). Jürgen Schmidhuber (1992) re-formulated the problem in terms of predictors and binary feature detectors, each receiving the raw data as an input. For each detector there is a predictor that sees the other detectors and learns to predict the output of its own detector in response to the various input vectors or images. But each detector uses a machine learning algorithm to become as unpredictable as possible. The global optimum of this objective function corresponds to a factorial code represented in a distributed fashion across the outputs of the feature detectors. Painsky, Rosset and Feder (2016, 2017) further studied this problem in the context of independent component analysis over finite alphabet sizes. Through a series of theorems they show that the factorial coding problem can be accurately solved" https://en.wikipedia.org/wiki/Promiscuous%20traffic,"In computer networking, promiscuous traffic, or cross-talking, describes situations where a receiver configured to receive a particular data stream receives that data stream and others. 
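A minimal sketch of the multicast set-up underlying the situation described above, using Python's standard socket module; the group address and port are arbitrary example values, not taken from the article:

# Join a single multicast group; the promiscuous-traffic problem discussed
# below is when datagrams sent to other groups also show up on this socket.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5000           # arbitrary example values

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))                     # listen on the chosen port

# Ask the kernel to join GROUP on all interfaces (INADDR_ANY).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(65535)       # blocks until a datagram arrives
print(f"received {len(data)} bytes from {sender}")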
Promiscuous traffic should not be confused with the promiscuous mode, which is a network card configuration. In particular, in multicast socket networking, an example of promiscuous traffic is when a socket configured to listen on a specific multicast address group A with a specific port P, noted A:P, receives traffic from A:P but also from another multicast source. For instance, a socket is configured to receive traffic from the multicast group address 234.234.7.70, port 36000 (noted 234.234.7.70:36000), but receives traffic from both 234.234.7.70:36000 and 234.234.7.71:36000. This type of promiscuous traffic, due to a lack of address filtering, has been a recurring issue with certain Unix and Linux kernels, but has never been reported on Microsoft Windows operating systems post Windows XP. Another form of promiscuous traffic occurs when two different applications happen to listen on the same group address. As the former type of promiscuous traffic (lack of address filtering) can be considered a bug at the operating system level, the latter reflects global configuration issues." https://en.wikipedia.org/wiki/Classification%20of%20Clifford%20algebras,"In abstract algebra, in particular in the theory of nondegenerate quadratic forms on vector spaces, the structures of finite-dimensional real and complex Clifford algebras for a nondegenerate quadratic form have been completely classified. In each case, the Clifford algebra is algebra isomorphic to a full matrix ring over R, C, or H (the quaternions), or to a direct sum of two copies of such an algebra, though not in a canonical way. Below it is shown that distinct Clifford algebras may be algebra-isomorphic, as is the case of Cl1,1(R) and Cl2,0(R), which are both isomorphic as rings to the ring of two-by-two matrices over the real numbers. The significance of this result is that the additional structure on a Clifford algebra relative to the ""underlying"" associative algebra — namely, the structure given by the grade involution automorphism and reversal anti-automorphism (and their composition, the Clifford conjugation) — is in general an essential part of its definition, not a procedural artifact of its construction as the quotient of a tensor algebra by an ideal. The category of Clifford algebras is not just a selection from the category of matrix rings, picking out those in which the ring product can be constructed as the Clifford product for some vector space and quadratic form. With few exceptions, ""forgetting"" the additional structure (in the category theory sense of a forgetful functor) is not reversible. Continuing the example above: Cl1,1(R) and Cl2,0(R) share the same associative algebra structure, isomorphic to (and commonly denoted as) the matrix algebra M2(R). But they are distinguished by different choices of grade involution — of which two-dimensional subring, closed under the ring product, to designate as the even subring — and therefore of which of the various anti-automorphisms of M2(R) can accurately represent the reversal anti-automorphism of the Clifford algebra. These distinguished (anti-)automorphisms are structures on the tensor algebra whi" https://en.wikipedia.org/wiki/Electronic%20oscillation,"Electronic oscillation is a repeating cyclical variation in voltage or current in an electrical circuit, resulting in a periodic waveform. The frequency of the oscillation in hertz is the number of times the cycle repeats per second. The recurrence may be in the form of a varying voltage or a varying current. 
The waveform may be sinusoidal or some other shape when its magnitude is plotted against time. Electronic oscillation may be intentionally caused, as in devices designed as oscillators, or it may be the result of unintentional positive feedback from the output of an electronic device to its input. The latter appears often in feedback amplifiers (such as operational amplifiers) that do not have sufficient gain or phase margins. In this case, the oscillation often interferes with or compromises the amplifier's intended function, and is known as parasitic oscillation." https://en.wikipedia.org/wiki/Automatic%20system%20recovery,"Automatic system recovery is a device or process that detects a computer failure and attempts recovery. The device may make use of a Watchdog timer. This may also refer to a Microsoft recovery technology by the same name. External links HP ProLiant, Blade - Automatic Server Recovery (ASR) archivied from original on archive.org How does ASR (Automatic System Recovery) detect server hang? Embedded systems" https://en.wikipedia.org/wiki/Point-set%20registration,"In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose. Raw 3D point cloud data are typically obtained from Lidars and RGB-D cameras. 3D point clouds can also be generated from computer vision algorithms such as triangulation, bundle adjustment, and more recently, monocular image depth estimation using deep learning. For 2D point set registration used in image processing and feature-based image registration, a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point cloud registration has extensive applications in autonomous driving, motion estimation and 3D reconstruction, object detection and pose estimation, robotic manipulation, simultaneous localization and mapping (SLAM), panorama stitching, virtual and augmented reality, and medical imaging. As a special case, registration of two point sets that only differ by a 3D rotation (i.e., there is no scaling and translation), is called the Wahba Problem and also related to the orthogonal procrustes problem. Formulation The problem may be summarized as follows: Let be two finite size point sets in a finite-dimensional real vector space , which contain and points respectively (e.g., recovers the typical case of when and are 3D point sets). The problem is to find a transformation to be applied to the moving ""model"" point set such that the difference (typically defined in the sense of point-wise Euclidean distance) between and the static ""scene"" set is minimized. In other words, a mapping from to is desired which yie" https://en.wikipedia.org/wiki/List%20of%20curves%20topics,"This is an alphabetical index of articles related to curves used in mathematics. 
Acnode Algebraic curve Arc Asymptote Asymptotic curve Barbier's theorem Bézier curve Bézout's theorem Birch and Swinnerton-Dyer conjecture Bitangent Bitangents of a quartic Cartesian coordinate system Caustic Cesàro equation Chord (geometry) Cissoid Circumference Closed timelike curve concavity Conchoid (mathematics) Confocal Contact (mathematics) Contour line Crunode Cubic Hermite curve Curvature Curve orientation Curve fitting Curve-fitting compaction Curve of constant width Curve of pursuit Curves in differential geometry Cusp Cyclogon De Boor algorithm Differential geometry of curves Eccentricity (mathematics) Elliptic curve cryptography Envelope (mathematics) Fenchel's theorem Genus (mathematics) Geodesic Geometric genus Great-circle distance Harmonograph Hedgehog (curve) Hilbert's sixteenth problem Hyperelliptic curve cryptography Inflection point Inscribed square problem intercept, y-intercept, x-intercept Intersection number Intrinsic equation Isoperimetric inequality Jordan curve Jordan curve theorem Knot Limit cycle Linking coefficient List of circle topics Loop (knot) M-curve Mannheim curve Meander (mathematics) Mordell conjecture Natural representation Opisometer Orbital elements Osculating circle Osculating plane Osgood curve Parallel (curve) Parallel transport Parametric curve Bézier curve Spline (mathematics) Hermite spline Beta spline B-spline Higher-order spline NURBS Perimeter Pi Plane curve Pochhammer contour Polar coordinate system Prime geodesic Projective line Ray Regular parametric representation Reuleaux triangle Ribaucour curve Riemann–Hurwitz formula Riemann–Roch theorem Riemann surface Road curve Sato–Tate conjecture secant Singular solution Sinuosity Slope Space curve Spinode Square wheel Subtangent Tacnode Tangent Tangent space Tangential angle Tor" https://en.wikipedia.org/wiki/Serotype,"A serotype or serovar is a distinct variation within a species of bacteria or virus or among immune cells of different individuals. These microorganisms, viruses, or cells are classified together based on their surface antigens, allowing the epidemiologic classification of organisms to the subspecies level. A group of serovars with common antigens is called a serogroup or sometimes serocomplex. Serotyping often plays an essential role in determining species and subspecies. The Salmonella genus of bacteria, for example, has been determined to have over 2600 serotypes. Vibrio cholerae, the species of bacteria that causes cholera, has over 200 serotypes, based on cell antigens. Only two of them have been observed to produce the potent enterotoxin that results in cholera: O1 and O139. Serotypes were discovered by the American microbiologist Rebecca Lancefield in 1933. Role in organ transplantation The immune system is capable of discerning a cell as being 'self' or 'non-self' according to that cell's serotype. In humans, that serotype is largely determined by human leukocyte antigen (HLA), the human version of the major histocompatibility complex. Cells determined to be non-self are usually recognized by the immune system as foreign, causing an immune response, such as hemagglutination. Serotypes differ widely between individuals; therefore, if cells from one human (or animal) are introduced into another random human, those cells are often determined to be non-self because they do not match the self-serotype. For this reason, transplants between genetically non-identical humans often induce a problematic immune response in the recipient, leading to transplant rejection. 
In some situations, this effect can be reduced by serotyping both recipient and potential donors to determine the closest HLA match. Human leukocyte antigens Serotyping of Salmonella The Kauffman–White classification scheme is the basis for naming the manifold serovars of Salmonella. To date, more" https://en.wikipedia.org/wiki/Security%20of%20the%20Java%20software%20platform,"The Java platform provides a number of features designed for improving the security of Java applications. This includes enforcing runtime constraints through the use of the Java Virtual Machine (JVM), a security manager that sandboxes untrusted code from the rest of the operating system, and a suite of security APIs that Java developers can utilise. Despite this, criticism has been directed at the programming language, and Oracle, due to an increase in malicious programs that revealed security vulnerabilities in the JVM, which were subsequently not properly addressed by Oracle in a timely manner. Security features The JVM The binary form of programs running on the Java platform is not native machine code but an intermediate bytecode. The JVM performs verification on this bytecode before running it to prevent the program from performing unsafe operations such as branching to incorrect locations, which may contain data rather than instructions. It also allows the JVM to enforce runtime constraints such as array bounds checking. This means that Java programs are significantly less likely to suffer from memory safety flaws such as buffer overflow than programs written in languages such as C which do not provide such memory safety guarantees. The platform does not allow programs to perform certain potentially unsafe operations such as pointer arithmetic or unchecked type casts. It manages memory allocation and initialization and provides automatic garbage collection which in many cases (but not all) relieves the developer from manual memory management. This contributes to type safety and memory safety. Security manager The platform provides a security manager which allows users to run untrusted bytecode in a ""sandboxed"" environment designed to protect them from malicious or poorly written software by preventing the untrusted code from accessing certain platform features and APIs. For example, untrusted code might be prevented from reading or writing files on the loca" https://en.wikipedia.org/wiki/Foliicolous,"Foliicolous refers to the growth habit of certain lichens, algae, and fungi that prefer to grow on the leaves of vascular plants. There have been about 700 species of foliicolous lichens identified, most of which are found in the tropics." https://en.wikipedia.org/wiki/Digital%20storage%20oscilloscope,"A digital storage oscilloscope (DSO) is an oscilloscope which stores and analyses the input signal digitally rather than using analog techniques. It is now the most common type of oscilloscope in use because of the advanced trigger, storage, display and measurement features which it typically provides. The input analogue signal is sampled and then converted into a digital record of the amplitude of the signal at each sample time. The sampling frequency should be not less than the Nyquist rate to avoid aliasing. These digital values are then turned back into an analogue signal for display on a cathode ray tube (CRT), or transformed as needed for the various possible types of output—liquid crystal display, chart recorder, plotter or network interface. 
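As a rough illustration of the sampling step described above (a sketch with made-up numbers, not the behaviour of any particular instrument), the snippet below shows that a digitized record is simply a list of amplitude samples, and why sampling below the Nyquist rate makes one frequency indistinguishable from another:

```python
import math

def sample(freq_hz, sample_rate_hz, n_samples):
    """Return n_samples amplitude readings of a unit-amplitude cosine at freq_hz."""
    return [math.cos(2 * math.pi * freq_hz * k / sample_rate_hz)
            for k in range(n_samples)]

# A 7 kHz tone sampled at 10 kS/s violates the Nyquist criterion
# (10 kS/s < 2 * 7 kHz) and yields the same record as a genuine 3 kHz tone
# sampled at the same rate: it aliases to |10 - 7| = 3 kHz.
aliased = sample(7_000, 10_000, 8)
genuine = sample(3_000, 10_000, 8)
for a, b in zip(aliased, genuine):
    print(f"{a:+.6f}  {b:+.6f}")   # the two columns agree (up to rounding)
```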
Digital storage oscilloscope costs vary widely; bench-top self-contained instruments (complete with displays) start at or even less, with high-performance models selling for tens of thousands of dollars. Small, pocket-size models, limited in function, may retail for as little as US$50. Comparison with analog storage The principal advantage over analog storage is that the stored traces are as bright, as sharply defined, and written as quickly as non-stored traces. Traces can be stored indefinitely or written out to some external data storage device and reloaded. This allows, for example, comparison of an acquired trace from a system under test with a standard trace acquired from a known-good system. Many models can display the waveform prior to the trigger signal. Digital oscilloscopes usually analyze waveforms and provide numerical values as well as visual displays. These values typically include averages, maxima and minima, root mean square (RMS) and frequencies. They may be used to capture transient signals when operated in a single sweep mode, without the brightness and writing speed limitations of an analog storage oscilloscope. The displayed trace can be manipulated after acquisition; a portion of t" https://en.wikipedia.org/wiki/Tertiary%20review,"In software engineering, a tertiary review is a systematic review of systematic reviews. It is also referred to as a tertiary study in the software engineering literature. However, Umbrella review is the term more commonly used in medicine. Kitchenham et al. suggest that methodologically there is no difference between a systematic review and a tertiary review. However, as the software engineering community has started performing tertiary reviews new concerns unique to tertiary reviews have surfaced. These include the challenge of quality assessment of systematic reviews, search validation and the additional risk of double counting. Examples of Tertiary reviews in software engineering literature Test quality Machine Learning Test-driven development" https://en.wikipedia.org/wiki/BiCMOS,"Bipolar CMOS (BiCMOS) is a semiconductor technology that integrates two semiconductor technologies, those of the bipolar junction transistor and the CMOS (complementary metal–oxide–semiconductor) logic gate, into a single integrated circuit. In more recent times the bipolar processes have been extended to include high mobility devices using silicon–germanium junctions. Bipolar transistors offer high speed, high gain, and low output impedance with relatively high power consumption per device, which are excellent properties for high-frequency analog amplifiers including low noise radio frequency (RF) amplifiers that only use a few active devices, while CMOS technology offers high input impedance and is excellent for constructing large numbers of low-power logic gates. In a BiCMOS process the doping profile and other process features may be tilted to favour either the CMOS or the bipolar devices. For example GlobalFoundries offer a basic 180 nm BiCMOS7WL process and several other BiCMOS processes optimized in various ways. These processes also include steps for the deposition of precision resistors, and high Q RF inductors and capacitors on-chip, which are not needed in a ""pure"" CMOS logic design. BiCMOS is aimed at mixed-signal ICs, such as ADCs and complete software radio systems on a chip that need amplifiers, analog power management circuits, and logic gates on chip. BiCMOS has some advantages in providing digital interfaces. 
BiCMOS circuits use the characteristics of each type of transistor most appropriately. Generally this means that high-current circuits such as on-chip power regulators use metal–oxide–semiconductor field-effect transistors (MOSFETs) for efficient control, 'sea of logic' sections use conventional CMOS structures, and specialized very-high-performance portions such as ECL dividers and LNAs use bipolar devices. Examples include RF oscillators, bandgap-based references and low-noise circuits. The Pentium, Pentium Pro, and SuperS" https://en.wikipedia.org/wiki/Delay%20equalization,"In signal processing, delay equalization corresponds to adjusting the relative phases of different frequencies to achieve a constant group delay, typically by adding an all-pass filter in series with an uncompensated filter. Machine-learning techniques have also been applied to the design of such filters." https://en.wikipedia.org/wiki/Oscillator%20start-up%20timer,"An oscillator start-up timer (OST) is a module used by some microcontrollers to keep the device reset until the crystal oscillator is stable. When a crystal oscillator starts up, its frequency is not constant, which causes the clock frequency to be non-constant. This would cause timing errors, leading to many problems. An oscillator start-up timer ensures that the device only operates when the oscillator generates a stable clock frequency. The PIC microcontroller's oscillator start-up timer holds the device's reset for a 1024-oscillator-cycle delay to allow the oscillator to stabilize. See also Power-on reset Brown-out reset Watchdog timer Low-voltage detect" https://en.wikipedia.org/wiki/Content-addressable%20memory,"Content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications. It is also known as associative memory or associative storage and compares input search data against a table of stored data, and returns the address of matching data. CAM is frequently used in networking devices where it speeds up forwarding information base and routing table operations. This kind of associative memory is also used in cache memory. In associative cache memory, both address and content are stored side by side. When the address matches, the corresponding content is fetched from cache memory. History Dudley Allen Buck invented the concept of content-addressable memory in 1955. Buck is credited with the idea of the recognition unit. Hardware associative array Unlike standard computer memory, random-access memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed such that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found. Thus, a CAM is the hardware embodiment of what in software terms would be called an associative array. A similar concept can be found in the data word recognition unit, as proposed by Dudley Allen Buck in 1955. Standards A major interface definition for CAMs and other network search engines was specified in an interoperability agreement called the Look-Aside Interface (LA-1 and LA-1B) developed by the Network Processing Forum. Numerous devices conforming to the interoperability agreement have been produced by Integrated Device Technology, Cypress Semiconductor, IBM, Broadcom and others.
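To make the comparison with a software associative array above concrete, here is a purely illustrative sketch (not how any real CAM device or vendor API is driven) in which the caller supplies a data word and gets back every address holding that word:

```python
class SoftwareCAM:
    """Toy model of content-addressed lookup: word in, matching addresses out."""

    def __init__(self, size):
        self.cells = [None] * size          # ordinary address-indexed storage

    def write(self, address, word):
        self.cells[address] = word

    def search(self, word):
        # A real CAM compares the key against every cell in parallel in a
        # single cycle; this software stand-in scans the cells sequentially.
        return [addr for addr, stored in enumerate(self.cells) if stored == word]


cam = SoftwareCAM(size=8)
cam.write(2, "192.168.0.0/16")
cam.write(5, "10.0.0.0/8")
cam.write(7, "192.168.0.0/16")
print(cam.search("192.168.0.0/16"))   # -> [2, 7]
print(cam.search("172.16.0.0/12"))    # -> []
```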
On December 11, 2007, the OIF published the serial look-aside (SLA) interface agreement. Semiconductor implementations CAM is much faster than RAM in data search applications" https://en.wikipedia.org/wiki/Fermentation,"Fermentation is a metabolic process that produces chemical changes in organic substances through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage. The science of fermentation is known as zymology. In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically. Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans. Industrial fermentation is a broader term used for the process of applying microbes for the large-scale production of chemicals, biofuels, enzymes, proteins and pharmaceuticals. Definitions and etymology Below are some definitions of fermentation ranging from informal, general usages to more scientific definitions. Preservation methods for food via microorganisms (general use). Any large-scale microbial process occurring with or without air (common definition used in industry, also known as industrial fermentation). Any process that produces alcoholic beverages or acidic dairy products (general use). Any energy-releasing metabolic process that takes place only under anaerobic conditions (somewhat scientific). Any metabolic process that releases energy from a sugar or other organic molecule, does not require oxygen or an electron transport system, and uses an organic molecule as the final electron acceptor (most scientific). The word ""ferment"" is derived from the" https://en.wikipedia.org/wiki/Size-asymmetric%20competition,"Size-asymmetric competition refers to situations in which larger individuals exploit disproportionately greater amounts of resources when competing with smaller individuals. This type of competition is common among plants but also exists among animals. Size-asymmetric competition usually results from large individuals monopolizing the resource by ""pre-emption"". i.e. exploiting the resource before smaller individuals are able to obtain it. Size-asymmetric competition has major effects on population structure and diversity within ecological communities. Definition of size asymmetry Resource competition can vary from complete symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size asymmetric (the largest individuals exploit all the available resource). The degree of size asymmetry can be described by the parameter θ in the following equation focusing on the partition of the resource r among n individuals of sizes Bj. ri refers to the amount of resource consumed by individual i in the neighbourhood of j. When θ =1, competition is perfectly size symmetric, e.g. 
if a large individual is twice the size of its smaller competitor, the large individual will acquire twice the amount of that resource (i.e. both individuals will exploit the same amount of resource per biomass unit). When θ >1 competition is size-asymmetric, e.g. if a large individual is twice the size of its smaller competitor and θ =2, the large individual will acquire four times the amount of that resource (i.e. the large individual will exploit twice the amount of resource per biomass unit). As θ increases, competition becomes more size-asymmetric and larger plants get larger amounts of resource per unit biomass compared with smaller plants. Differences in size-asymmetry among resources in plant communities Competit" https://en.wikipedia.org/wiki/SCMOS,"sCMOS (scientific Complementary Metal–Oxide–Semiconductor) is a type of CMOS image sensor (CIS). These sensors are commonly used as components in specific observational scientific instruments, such as microscopes and telescopes. sCMOS image sensors offer extremely low noise, rapid frame rates, wide dynamic range, high quantum efficiency, high resolution, and a large field of view simultaneously in one image. The sCMOS technology was launched in 2009 during the Laser World of Photonics fair in Munich. The companies Andor Technology, Fairchild Imaging and PCO Imaging developed the technology for image sensors as a joint venture. Technical details Prior to the introduction of the technology, scientists were limited to using either CCD or EMCCD cameras, both of which had their own set of technical limitations. While back-illuminated electron-multiplying CCD (EMCCD) cameras are optimal for purposes requiring the lowest noise and dark currents, sCMOS technology's higher pixel count and lower cost result in its use in a wide range of precision applications. sCMOS devices can capture data in a global-shutter “snapshot” mode over all the pixels or rectangular subsets of pixels, and can also operate in a rolling-shutter mode. The cameras are available with monochrome sCMOS image sensors or with RGB sCMOS image sensors. With sCMOS, digital information for each frame is generated rapidly and with an improved low-light image quality. The sCMOS sensor's low read noise and larger area provide a low-noise, large field-of-view (FOV) image that enables researchers to scan across a sample and capture high-quality images. Some disadvantages, as of 2023, of sCMOS cameras compared with related technologies are: sCMOS sensors tend to be more expensive than traditional CMOS sensors. sCMOS sensors have a limited resolution compared to other types of sensors like CCD. In practice The New York University School of Medicine uses sCMOS cameras for its research. They were used t" https://en.wikipedia.org/wiki/-yllion,"-yllion (pronounced ) is a proposal from Donald Knuth for the terminology and symbols of an alternate decimal superbase system. In it, he adapts the familiar English terms for large numbers to provide a systematic set of names for much larger numbers. In addition to providing an extended range, -yllion also dodges the long and short scale ambiguity of -illion. Knuth's digit grouping is exponential instead of linear; each division doubles the number of digits handled, whereas the familiar system only adds three or six more.
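A small sketch of that exponential grouping, using the comma, semicolon and colon separators spelled out in the examples that follow (the code is an illustration only, not part of Knuth's proposal):

```python
# Separators for the 4-, 8- and 16-digit boundaries used in the examples below.
SEPARATORS = [",", ";", ":"]

def yllion_group(number: int) -> str:
    """Punctuate a non-negative integer in Knuth's exponential grouping style."""
    digits = str(number)
    level = 0
    while level + 1 < len(SEPARATORS) and len(digits) > 4 * 2 ** (level + 1):
        level += 1
    return _split(digits, level)

def _split(digits, level):
    if level < 0 or len(digits) <= 4:
        return digits
    size = 4 * 2 ** level               # digits to the right of this separator
    if len(digits) <= size:
        return _split(digits, level - 1)
    head, tail = digits[:-size], digits[-size:]
    return _split(head, level - 1) + SEPARATORS[level] + _split(tail, level - 1)

print(yllion_group(3821902))              # 382,1902
print(yllion_group(1000200030004))        # 1,0002;0003,0004
print(yllion_group(120003000405067089))   # 12:0003,0004;0506,7089
```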
His system is basically the same as one of the ancient and now-unused Chinese numeral systems, in which units stand for 104, 108, 1016, 1032, ..., 102n, and so on (with an exception that the -yllion proposal does not use a word for thousand which the original Chinese numeral system has). Today the corresponding Chinese characters are used for 104, 108, 1012, 1016, and so on. Details and examples In Knuth's -yllion proposal: 1 to 999 have their usual names. 1000 to 9999 are divided before the 2nd-last digit and named ""foo hundred bar."" (e.g. 1234 is ""twelve hundred thirty-four""; 7623 is ""seventy-six hundred twenty-three"") 104 to 108 − 1 are divided before the 4th-last digit and named ""foo myriad bar"". Knuth also introduces at this level a grouping symbol (comma) for the numeral. So 382,1902 is ""three hundred eighty-two myriad nineteen hundred two."" 108 to 1016 − 1 are divided before the 8th-last digit and named ""foo myllion bar"", and a semicolon separates the digits. So 1,0002;0003,0004 is ""one myriad two myllion, three myriad four."" 1016 to 1032 − 1 are divided before the 16th-last digit and named ""foo byllion bar"", and a colon separates the digits. So 12:0003,0004;0506,7089 is ""twelve byllion, three myriad four myllion, five hundred six myriad seventy hundred eighty-nine."" etc. Each new number name is the square of the previous one — therefore, each new name covers twice as many digits. Knuth continues borrowing the traditional names changing" https://en.wikipedia.org/wiki/Physical%20computing,"Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware. Physical computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development. Examples Physical computing is used in a wide variety of domains and applications. Education The advantage of physicality in education and playfulness has been reflected in diverse informal learning environments. The Exploratorium, a pioneer in inquiry based learning, developed some of the earliest interactive exhibitry involving computers, and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress. Art In the art world, projects that implement physical computing include the work of Scott Snibbe, Daniel Rozin, Rafael Lozano-Hemmer, Jonah Brucker-Cohen, and Camille Utterback. Product design Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way. 
Commercial applications Commercial implementations range from consumer devices such as the Sony Eyetoy or games such as Dance Dance Revolution" https://en.wikipedia.org/wiki/Vinyl%20cutter,"A vinyl cutter is an entry level machine for making signs. Computer designed vector files with patterns and letters are directly cut on the roll of vinyl which is mounted and fed into the vinyl cutter through USB or serial cable. Vinyl cutters are mainly used to make signs, banners and advertisements. Advertisements seen on automobiles and vans are often made with vinyl cut letters. While these machines were designed for cutting vinyl, they can also cut through computer and specialty papers, as well as thicker items like thin sheets of magnet. In addition to sign business, vinyl cutters are commonly used for apparel decoration. To decorate apparel, a vector design needs to be cut in mirror image, weeded, and then heat applied using a commercial heat press or a hand iron for home use. Some businesses use their vinyl cutter to produce both signs and custom apparel. Many crafters also have vinyl cutters for home use. These require little maintenance and the vinyl can be bought in bulk relatively cheaply. Vinyl cutters are also often used by stencil artists to create single use or reusable stencil art and lettering How it works A vinyl cutter is a type of computer-controlled machine tool. The computer controls the movement of a sharp blade over the surface of the material as it would the nozzles of an ink-jet printer. This blade is used to cut out shapes and letters from sheets of thin self-adhesive plastic (vinyl). The vinyl can then be stuck to a variety of surfaces depending on the adhesive and type of material. To cut out a design, a vector-based image must be created using vector drawing software. Some vinyl cutters are marketed to small in-home businesses and require download and use of a proprietary editing software. The design is then sent to the cutter where it cuts along the vector paths laid out in the design. The cutter is capable of moving the blade on an X and Y axis over the material, cutting it into the required shapes. The vinyl material comes i" https://en.wikipedia.org/wiki/General%20communication%20channel,"The general communication channel (GCC) was defined by G.709 is an in-band side channel used to carry transmission management and signaling information within optical transport network elements. Two types of GCC are available: GCC0 – two bytes within OTUk overhead. GCC0 is terminated at every 3R (re-shaping, re-timing, re-amplification) point and used to carry GMPLS signaling protocol and/or management information. GCC1/2 – four bytes (each of two bytes) within ODUk overhead. These bytes are used for client end-to-end information and shouldn't be touched by the OTN equipment. In contrast to SONET/SDH where the data communication channel (DCC) has a constant data rate, GCC data rate depends on the OTN line rate. For example, GCC0 data rate in the case of OTU1 is ~333kbit/s, and for OTU2 its data rate is ~1.3 Mbit/s. Computer networking Optical Transport Network" https://en.wikipedia.org/wiki/Dye-and-pry,"Dye-n-Pry, also called Dye And Pry, Dye and Pull, Dye Staining, or Dye Penetrant, is a destructive analysis technique used on surface mount technology (SMT) components to either perform failure analysis or inspect for solder joint integrity. It is an application of dye penetrant inspection. 
Method Dye-n-Pry is a useful technique in which a dye penetrant material is used to inspect for interconnect failures in integrated circuits (IC). This is most commonly done on solder joints for ball grid array (BGA) components, although in some cases it can be done with other components or samples. The component of interest is submerged in a dye material, such as red steel dye, and placed under vacuum. This allows the dye to flow underneath the component and into any cracks or defects. The dye is then dried in an oven (preferably overnight) to prevent smearing during separation, which could lead to false results. The part of interest is mechanically separated from the printed circuit board (PCB) and inspected for the presence of dye. Any fracture surface or interface will have dye present, indicating the presence of cracks or open circuits. IPC-TM-650 Method 2.4.53 specifies a process for dye-n-pry. Use in failure analysis of electronics Dye-n-Pry is a useful failure analysis technique to detect cracking or open circuits in BGA solder joints. This has some practical advantages over other destructive techniques, such as cross sectioning, as it can inspect a full ball grid array, which may consist of hundreds of solder joints. Cross sectioning, on the other hand, may only be able to inspect a single row of solder joints and requires a better initial idea of the failure site. Dye-n-pry can be useful for detecting several different failure modes. This includes pad cratering or solder joint fracture from mechanical drop/shock, thermal shock, or thermal cycling. This makes it a useful technique to incorporate into a reliability test plan as part of the post-test failure inspection. " https://en.wikipedia.org/wiki/Meissel%E2%80%93Mertens%20constant,"The Meissel–Mertens constant (named after Ernst Meissel and Franz Mertens), also referred to as Mertens constant, Kronecker's constant, Hadamard–de la Vallée-Poussin constant or the prime reciprocal constant, is a mathematical constant in number theory, defined as the limiting difference between the harmonic series summed only over the primes and the natural logarithm of the natural logarithm: Here γ is the Euler–Mascheroni constant, which has an analogous definition involving a sum over all integers (not just the primes). The value of M is approximately M ≈ 0.2614972128476427837554268386086958590516... . Mertens' second theorem establishes that the limit exists. The fact that there are two logarithms (log of a log) in the limit for the Meissel–Mertens constant may be thought of as a consequence of the combination of the prime number theorem and the limit of the Euler–Mascheroni constant. In popular culture The Meissel–Mertens constant was used by Google when bidding in the Nortel patent auction. Google posted three bids based on mathematical numbers: $1,902,160,540 (Brun's constant), $2,614,972,128 (Meissel–Mertens constant), and $3.14159 billion (π). See also Divergence of the sum of the reciprocals of the primes Prime zeta function" https://en.wikipedia.org/wiki/Emulator,"In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Emulation refers to the ability of a computer program in an electronic device to emulate (or imitate) another program or device.
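As a toy illustration of that idea, the host program below fetches, decodes and executes machine code for an invented four-instruction guest machine (not any real architecture), which is essentially what a software emulator does:

```python
# Opcodes of a made-up guest machine: each instruction is (opcode, operand).
LOAD, ADD, PRINT, HALT = range(4)

def run(program):
    """Interpret 'guest' machine code on the 'host' Python runtime."""
    accumulator = 0
    pc = 0                                  # program counter
    while True:
        opcode, operand = program[pc]
        pc += 1
        if opcode == LOAD:
            accumulator = operand
        elif opcode == ADD:
            accumulator += operand
        elif opcode == PRINT:
            print(accumulator)
        elif opcode == HALT:
            return

# Guest program: load 2, add 3, print the result (5), halt.
run([(LOAD, 2), (ADD, 3), (PRINT, None), (HALT, None)])
```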
Many printers, for example, are designed to emulate HP LaserJet printers because so much software is written for HP printers. If a non-HP printer emulates an HP printer, any software written for a real HP printer will also run in the non-HP printer emulation and produce equivalent printing. Since at least the 1990s, many video game enthusiasts and hobbyists have used emulators to play classic arcade games from the 1980s using the games' original 1980s machine code and data, which is interpreted by a current-era system, and to emulate old video game consoles. A hardware emulator is an emulator which takes the form of a hardware device. Examples include the DOS-compatible card installed in some 1990s-era Macintosh computers, such as the Centris 610 or Performa 630, that allowed them to run personal computer (PC) software programs and field-programmable gate array-based hardware emulators. The Church-Turing thesis implies that theoretically, any operating environment can be emulated within any other environment, assuming memory limitations are ignored. However, in practice, it can be quite difficult, particularly when the exact behavior of the system to be emulated is not documented and has to be deduced through reverse engineering. It also says nothing about timing constraints; if the emulator does not perform as quickly as it did using the original hardware, the software inside the emulation may run much more slowly (possibly triggering timer interrupts that alter behavior). Types Most emulators just emulate a hardware architecture—if operating system firmware or" https://en.wikipedia.org/wiki/Interactive%20kiosk,"An interactive kiosk is a computer terminal featuring specialized hardware and software that provides access to information and applications for communication, commerce, entertainment, or education. By 2010, the largest bill pay kiosk network is AT&T for the phone customers which allows customers to pay their phone bills. Verizon and Sprint have similar units for their customers. Early interactive kiosks sometimes resembled telephone booths, but have been embraced by retail, food service, and hospitality to improve customer service and streamline operations. Interactive kiosks are typically placed in the high foot traffic settings such as shops, hotel lobbies, or airports. The integration of technology allows kiosks to perform a wide range of functions, evolving into self-service kiosks. For example, kiosks may enable users to order from a shop's catalog when items are not in stock, check out a library book, look up information about products, issue a hotel key card, enter a public utility bill account number to perform an online transaction, or collect cash in exchange for merchandise. Customized components such as coin hoppers, bill acceptors, card readers, and thermal printers enable kiosks to meet the owner's specialized needs. History The first self-service, interactive kiosk was developed in 1977 at the University of Illinois at Urbana–Champaign by a pre-med student, Murray Lappe. The content was created on the PLATO computer system and accessible by the plasma touch-screen interface. The plasma display panel was invented at the University of Illinois by Donald L. Bitzer. Lappe's kiosk, called The Plato Hotline allowed students and visitors to find movies, maps, directories, bus schedules, extracurricular activities, and courses. 
The first successful network of interactive kiosks used for commercial purposes was a project developed by the shoe retailer Florsheim Shoe Co., led by their executive VP, Harry Bock, installed circa 1985. The interactive kiosk " https://en.wikipedia.org/wiki/Free%20energy%20principle,"The free energy principle is a theoretical framework suggesting that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model with the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action. Overview In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled. It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is just the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience. The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems is known as a Markov blanket. More formally, the free energy principle says that if a system has a ""particular partition"" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system). The free energy principle is based on the Bayesian idea of the brain as " https://en.wikipedia.org/wiki/Nanobacterium,"Nanobacterium ( , pl. nanobacteria ) is the unit or member name of a former proposed class of living organisms, specifically cell-walled microorganisms, now discredited, with a size much smaller than the generally accepted lower limit for life (about 200 nm for bacteria, like mycoplasma). Originally based on observed nano-scale structures in geological formations (including one meteorite), the status of nanobacteria was controversial, with some researchers suggesting they are a new class of living organism capable of incorporating radiolabeled uridine, and others attributing to them a simpler, abiotic nature. One skeptic dubbed them ""the cold fusion of microbiology"", in reference to a notorious episode of supposed erroneous science. The term ""calcifying nanoparticles"" (CNPs) has also been used as a conservative name regarding their possible status as a life form. Research tends to agree that these structures exist, and appear to replicate in some way. However, the idea that they are living entities has now largely been discarded, and the particles are instead thought to be nonliving crystallizations of minerals and organic molecules. 1981–2000 In 1981 Francisco Torella and Richard Y. 
Morita described very small cells called ultramicrobacteria. Defined as being smaller than 300 nm, by 1982 MacDonell and Hood found that some could pass through a 200 nm membrane. Early in 1989, geologist Robert L. Folk found what he later identified as nannobacteria (written with double ""n""), that is, nanoparticles isolated from geological specimens in travertine from hot springs of Viterbo, Italy. Initially searching for a bacterial cause for travertine deposition, scanning electron microscope examination of the mineral where no bacteria were detectable revealed extremely small objects which appeared to be biological. His first oral presentation elicited what he called ""mostly a stony silence"", at the 1992 Geological Society of America's annual convention. He proposed that nanoba" https://en.wikipedia.org/wiki/Flat%20network,"A flat network is a computer network design approach that aims to reduce cost, maintenance and administration. Flat networks are designed to reduce the number of routers and switches on a computer network by connecting the devices to a single switch instead of separate switches. Unlike a hierarchical network design, the network is not physically separated using different switches. The topology of a flat network is not segmented or separated into different broadcast areas by using routers. Some such networks may use network hubs or a mixture of hubs and switches, rather than switches and routers, to connect devices to each other. Generally, all devices on the network are a part of the same broadcast area. Uses Flat networks are typically used in homes or small businesses where network requirements are low. Home networks usually do not require intensive security, or separation, because the network is often used to provide multiple computers access to the Internet. In such cases, a complex network with many switches is not required. Flat networks are also generally easier to administer and maintain because less complex switches or routers are being used. Purchasing switches can be costly, so flat networks can be implemented to help reduce the amount of switches that need to be purchased. Drawbacks Flat networks provide some drawbacks, including: Poor security – Because traffic travels through one switch, it is not possible to segment the networks into sections and prevent users from accessing certain parts of the network. It is easier for hackers to intercept data on the network. No redundancy – Since there is usually one switch, or a few devices, it is possible for the switch to fail. Since there is no alternative path, the network will become inaccessible and computers may lose connectivity. Scalability and speed – Connecting all the devices to one central switch, either directly or through hubs, increases the potential for collisions (due to hubs), reduced " https://en.wikipedia.org/wiki/Taxon,"In biology, a taxon (back-formation from taxonomy; : taxa) is a group of one or more populations of an organism or organisms seen by taxonomists to form a unit. Although neither is required, a taxon is usually known by a particular name and given a particular ranking, especially if and when it is accepted or becomes established. It is very common, however, for taxonomists to remain at odds over what belongs to a taxon and the criteria used for inclusion, especially in the context of rank-based (""Linnaean"") nomenclature (much less so under phylogenetic nomenclature). 
If a taxon is given a formal scientific name, its use is then governed by one of the nomenclature codes specifying which scientific name is correct for a particular grouping. Initial attempts at classifying and ordering organisms (plants and animals) were presumably set forth long ago by hunter-gatherers, as suggested by the fairly sophisticated folk taxonomies. Much later, Aristotle, and later still European scientists such as Magnol, Tournefort and Carl Linnaeus (whose system appears in Systema Naturae, 10th edition, 1758), as well as an unpublished work by Bernard and Antoine Laurent de Jussieu, contributed to this field. The idea of a unit-based system of biological classification was first made widely available in 1805 in the introduction of Jean-Baptiste Lamarck's Flore françoise, and Augustin Pyramus de Candolle's Principes élémentaires de botanique. Lamarck set out a system for the ""natural classification"" of plants. Since then, systematists have continued to construct accurate classifications encompassing the diversity of life; today, a ""good"" or ""useful"" taxon is commonly taken to be one that reflects evolutionary relationships. Many modern systematists, such as advocates of phylogenetic nomenclature, use cladistic methods that require taxa to be monophyletic (all descendants of some ancestor). Their basic unit, the clade, is therefore equivalent to the taxon, assuming that taxa should reflect evolutionary rela" https://en.wikipedia.org/wiki/Vector%20algebra%20relations,"The following are important identities in vector algebra. Identities that involve the magnitude of a vector , or the dot product (scalar product) of two vectors A·B, apply to vectors in any dimension. Identities that use the cross product (vector product) A×B are defined only in three dimensions. Magnitudes The magnitude of a vector A can be expressed using the dot product: In three-dimensional Euclidean space, the magnitude of a vector is determined from its three components using Pythagoras' theorem: Inequalities The Cauchy–Schwarz inequality: The triangle inequality: The reverse triangle inequality: Angles The vector product and the scalar product of two vectors define the angle between them, say θ: To satisfy the right-hand rule, for positive θ, vector B is counter-clockwise from A, and for negative θ it is clockwise. The Pythagorean trigonometric identity then provides: If a vector A = (Ax, Ay, Az) makes angles α, β, γ with an orthogonal set of x-, y- and z-axes, then: and analogously for angles β, γ. Consequently: with unit vectors along the axis directions. Areas and volumes The area Σ of a parallelogram with sides A and B containing the angle θ is: which will be recognized as the magnitude of the vector cross product of the vectors A and B lying along the sides of the parallelogram. That is: (If A, B are two-dimensional vectors, this is equal to the determinant of the 2 × 2 matrix with rows A, B.) The square of this expression is: where Γ(A, B) is the Gram determinant of A and B defined by: In a similar fashion, the squared volume V of a parallelepiped spanned by the three vectors A, B, C is given by the Gram determinant of the three vectors: Since A, B, C are three-dimensional vectors, this is equal to the square of the scalar triple product below. This process can be extended to n dimensions. Addition and multiplication of vectors Commutativity of addition: . Commutativity of scalar product: .
Anticommutativity of cross product" https://en.wikipedia.org/wiki/Understory,"In forestry and ecology, understory (American English), or understorey (Commonwealth English), also known as underbrush or undergrowth, includes plant life growing beneath the forest canopy without penetrating it to any great extent, but above the forest floor. Only a small percentage of light penetrates the canopy so understory vegetation is generally shade-tolerant. The understory typically consists of trees stunted through lack of light, other small trees with low light requirements, saplings, shrubs, vines and undergrowth. Small trees such as holly and dogwood are understory specialists. In temperate deciduous forests, many understory plants start into growth earlier in the year than the canopy trees, to make use of the greater availability of light at that particular time of year. A gap in the canopy caused by the death of a tree stimulates the potential emergent trees into competitive growth as they grow upwards to fill the gap. These trees tend to have straight trunks and few lower branches. At the same time, the bushes, undergrowth, and plant life on the forest floor become denser. The understory experiences greater humidity than the canopy, and the shaded ground does not vary in temperature as much as open ground. This causes a proliferation of ferns, mosses, and fungi and encourages nutrient recycling, which provides favorable habitats for many animals and plants. Understory structure The understory is the underlying layer of vegetation in a forest or wooded area, especially the trees and shrubs growing between the forest canopy and the forest floor. Plants in the understory comprise an assortment of seedlings and saplings of canopy trees together with specialist understory shrubs and herbs. Young canopy trees often persist in the understory for decades as suppressed juveniles until an opening in the forest overstory permits their growth into the canopy. In contrast understory shrubs complete their life cycles in the shade of the forest canopy. Some sma" https://en.wikipedia.org/wiki/Smooth%20maximum,"In mathematics, a smooth maximum of an indexed family x1, ..., xn of numbers is a smooth approximation to the maximum function meaning a parametric family of functions such that for every , the function is smooth, and the family converges to the maximum function as . The concept of smooth minimum is similarly defined. In many cases, a single family approximates both: maximum as the parameter goes to positive infinity, minimum as the parameter goes to negative infinity; in symbols, as and as . The term can also be used loosely for a specific smooth function that behaves similarly to a maximum, without necessarily being part of a parametrized family. Examples Boltzmann operator For large positive values of the parameter , the following formulation is a smooth, differentiable approximation of the maximum function. For negative values of the parameter that are large in absolute value, it approximates the minimum. has the following properties: as is the arithmetic mean of its inputs as The gradient of is closely related to softmax and is given by This makes the softmax function useful for optimization techniques that use gradient descent. This operator is sometimes called the Boltzmann operator, after the Boltzmann distribution. 
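A small numerical sketch of that behaviour, assuming the standard form of the Boltzmann operator, sum_i x_i*exp(alpha*x_i) / sum_i exp(alpha*x_i) (the article's own formulas did not survive into this excerpt, so the exact expression here is an assumption):

```python
import math

def boltzmann_max(values, alpha):
    """Smooth maximum: sum_i x_i*exp(alpha*x_i) / sum_i exp(alpha*x_i)."""
    # Shift the inputs so the largest exponent is 0 (numerical stability);
    # the common factor cancels between numerator and denominator.
    shift = max(values) if alpha >= 0 else min(values)
    weights = [math.exp(alpha * (x - shift)) for x in values]
    return sum(x * w for x, w in zip(values, weights)) / sum(weights)

xs = [1.0, 2.0, 5.0]
print(boltzmann_max(xs, alpha=20.0))   # ~5.0   : approaches max(xs) as alpha -> +inf
print(boltzmann_max(xs, alpha=1e-9))   # ~2.667 : approaches the arithmetic mean as alpha -> 0
print(boltzmann_max(xs, alpha=-20.0))  # ~1.0   : approaches min(xs) as alpha -> -inf
```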
LogSumExp Another smooth maximum is LogSumExp: This can also be normalized if the are all non-negative, yielding a function with domain and range : The term corrects for the fact that by canceling out all but one zero exponential, and if all are zero. Mellowmax The mellowmax operator is defined as follows: It is a non-expansive operator. As , it acts like a maximum. As , it acts like an arithmetic mean. As , it acts like a minimum. This operator can be viewed as a particular instantiation of the quasi-arithmetic mean. It can also be derived from information theoretical principles as a way of regularizing policies with a cost function defined by KL divergence. The operator has previously been utilized in other" https://en.wikipedia.org/wiki/Filter%20%28signal%20processing%29,"In signal processing, a filter is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies or frequency bands. However, filters do not exclusively act in the frequency domain; especially in the field of image processing many other targets for filtering exist. Correlations can be removed for certain frequency components and not for others without having to act in the frequency domain. Filters are widely used in electronics and telecommunication, in radio, television, audio recording, radar, control systems, music synthesis, image processing, computer graphics, and structural dynamics. There are many different bases of classifying filters and these overlap in many different ways; there is no simple hierarchical classification. Filters may be: non-linear or linear time-variant or time-invariant, also known as shift invariance. If the filter operates in a spatial domain then the characterization is space invariance. causal or non-causal: A filter is non-causal if its present output depends on future input. Filters processing time-domain signals in real time must be causal, but not filters acting on spatial domain signals or deferred-time processing of time-domain signals. analog or digital discrete-time (sampled) or continuous-time passive or active type of continuous-time filter infinite impulse response (IIR) or finite impulse response (FIR) type of discrete-time or digital filter. Linear continuous-time filters Linear continuous-time circuit is perhaps the most common meaning for filter in the signal processing world, and simply ""filter"" is often taken to be synonymous. These circuits are generally designed to remove certain frequencies and allow others to pass. Circuits that perform this function are generally linear in their response, or a" https://en.wikipedia.org/wiki/NesC,"nesC (pronounced ""NES-see"") is a component-based, event-driven programming language used to build applications for the TinyOS platform. TinyOS is an operating environment designed to run on embedded devices used in distributed wireless sensor networks. nesC is built as an extension to the C programming language with components ""wired"" together to run applications on TinyOS. The name nesC is an abbreviation of ""network embedded systems C"". Components and interfaces nesC programs are built out of components, which are assembled (""wired"") to form whole programs. Components have internal concurrency in the form of tasks. Threads of control may pass into a component through its interfaces. 
These threads are rooted either in a task or a hardware interrupt. Interfaces may be provided or used by components. The provided interfaces are intended to represent the functionality that the component provides to its user, the used interfaces represent the functionality the component needs to perform its job. In nesC, interfaces are bidirectional: They specify a set of functions to be implemented by the interface's provider (commands) and a set to be implemented by the interface's user (events). This allows a single interface to represent a complex interaction between components (e.g., registration of interest in some event, followed by a callback when that event happens). This is critical because all lengthy commands in TinyOS (e.g. send packet) are non-blocking; their completion is signaled through an event (send done). By specifying interfaces, a component cannot call the send command unless it provides an implementation of the sendDone event. Typically commands call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts. Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for be" https://en.wikipedia.org/wiki/Excretion,"Excretion is a process in which metabolic waste is eliminated from an organism. In vertebrates this is primarily carried out by the lungs, kidneys, and skin. This is in contrast with secretion, where the substance may have specific tasks after leaving the cell. Excretion is an essential process in all forms of life. For example, in mammals, urine is expelled through the urethra, which is part of the excretory system. In unicellular organisms, waste products are discharged directly through the surface of the cell. During life activities such as cellular respiration, several chemical reactions take place in the body. These are known as metabolism. These chemical reactions produce waste products such as carbon dioxide, water, salts, urea and uric acid. Accumulation of these wastes beyond a level inside the body is harmful to the body. The excretory organs remove these wastes. This process of removal of metabolic waste from the body is known as excretion. Green plants excrete carbon dioxide and water as respiratory products. In green plants, the carbon dioxide released during respiration gets used during photosynthesis. Oxygen is a by product generated during photosynthesis, and exits through stomata, root cell walls, and other routes. Plants can get rid of excess water by transpiration and guttation. It has been shown that the leaf acts as an 'excretophore' and, in addition to being a primary organ of photosynthesis, is also used as a method of excreting toxic wastes via diffusion. Other waste materials that are exuded by some plants — resin, saps, latex, etc. are forced from the interior of the plant by hydrostatic pressures inside the plant and by absorptive forces of plant cells. These latter processes do not need added energy, they act passively. However, during the pre-abscission phase, the metabolic levels of a leaf are high. Plants also excrete some waste substances into the soil around them. In animals, the main excretory products are carbon dioxide, ammoni" https://en.wikipedia.org/wiki/Squaring%20the%20circle,"Squaring the circle is a problem in geometry first proposed in Greek mathematics. 
It is the challenge of constructing a square with the area of a given circle by using only a finite number of steps with a compass and straightedge. The difficulty of the problem raised the question of whether specified axioms of Euclidean geometry concerning the existence of lines and circles implied the existence of such a square. In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi () is a transcendental number. That is, is not the root of any polynomial with rational coefficients. It had been known for decades that the construction would be impossible if were transcendental, but that fact was not proven until 1882. Approximate constructions with any given non-perfect accuracy exist, and many such constructions have been found. Despite the proof that it is impossible, attempts to square the circle have been common in pseudomathematics (i.e. the work of mathematical cranks). The expression ""squaring the circle"" is sometimes used as a metaphor for trying to do the impossible. The term quadrature of the circle is sometimes used as a synonym for squaring the circle. It may also refer to approximate or numerical methods for finding the area of a circle. In general, quadrature or squaring may also be applied to other plane figures. History Methods to calculate the approximate area of a given circle, which can be thought of as a precursor problem to squaring the circle, were known already in many ancient cultures. These methods can be summarized by stating the approximation to that they produce. In around 2000 BCE, the Babylonian mathematicians used the approximation and at approximately the same time the ancient Egyptian mathematicians used Over 1000 years later, the Old Testament Books of Kings used the simpler approximation Ancient Indian mathematics, as recorded in the Shatapatha Brahmana and Shulba Sutras" https://en.wikipedia.org/wiki/Psammophyte,"A psammophyte is a plant that grows in sandy and often unstable soils. Psammophytes are commonly found growing on beaches, deserts, and sand dunes. Because they thrive in these challenging or inhospitable habitats, psammophytes are considered extremophiles, and are further classified as a type of psammophile. Etymology The word ""psammophyte"" consists of two Greek roots, psamm-, meaning ""sand"", and -phyte, meaning ""plant"". The term ""psammophyte"" first entered English in the early twentieth century via German botanical terminology. Description Psammophytes are found in many different plant families, so may not share specific morphological or phytochemical traits. They also come in a variety of plant life-forms, including annual ephemerals, perennials, subshrubs, hemicryptophytes, and many others. What the many diverse psammophytes have in common is a resilience to harsh or rapidly fluctuating environmental factors, such as shifting soils, strong winds, intense sunlight exposure, or saltwater exposure, depending on the habitat. Psammophytes often have specialized traits, such as unusually tenacious or resilient roots that enable them to anchor and thrive despite various environmental stressors. Those growing in arid regions have evolved highly efficient physiological mechanisms that enable them to survive despite limited water availability. Distribution and habitat Psammophytes grow in regions all over the world and can be found on sandy, unstable soils of beaches, deserts, and sand dunes. 
In China's autonomous Inner Mongolia region, psammophytic woodlands are found in steppe habitats. Ecology Psammophytes often play an important ecological role by contributing some degree of soil stabilization in their sandy habitats. They can also play an important role in soil nutrient dynamics. Depending on the factors at play at a given site, psammophyte communities exhibit varying degrees of species diversity. For example, in the dunes of the Sahara Desert, psammophyte commu" https://en.wikipedia.org/wiki/List%20of%20countries%20by%20integrated%20circuit%20exports,"The following is a list of countries by integrated circuit exports. Data is for 2019, in millions of United States dollars, as reported by International Trade Centre. Currently the top twenty countries are listed. See also List of flat panel display manufacturers List of integrated circuit manufacturers List of solid-state drive manufacturers List of system on a chip suppliers" https://en.wikipedia.org/wiki/Self-organization,"Self-organization, also called spontaneous order in the social sciences, is a process where some form of overall order arises from local interactions between parts of an initially disordered system. The process can be spontaneous when sufficient energy is available, not needing control by any external agent. It is often triggered by seemingly random fluctuations, amplified by positive feedback. The resulting organization is wholly decentralized, distributed over all the components of the system. As such, the organization is typically robust and able to survive or self-repair substantial perturbation. Chaos theory discusses self-organization in terms of islands of predictability in a sea of chaotic unpredictability. Self-organization occurs in many physical, chemical, biological, robotic, and cognitive systems. Examples of self-organization include crystallization, thermal convection of fluids, chemical oscillation, animal swarming, neural circuits, and black markets. Overview Self-organization is realized in the physics of non-equilibrium processes, and in chemical reactions, where it is often characterized as self-assembly. The concept has proven useful in biology, from the molecular to the ecosystem level. Cited examples of self-organizing behaviour also appear in the literature of many other disciplines, both in the natural sciences and in the social sciences (such as economics or anthropology). Self-organization has also been observed in mathematical systems such as cellular automata. Self-organization is an example of the related concept of emergence. Self-organization relies on four basic ingredients: strong dynamical non-linearity, often (though not necessarily) involving positive and negative feedback balance of exploitation and exploration multiple interactions among components availability of energy (to overcome the natural tendency toward entropy, or loss of free energy) Principles The cybernetician William Ross Ashby formulated the original p" https://en.wikipedia.org/wiki/Obligate,"As an adjective, obligate means ""by necessity"" (antonym facultative) and is used mainly in biology in phrases such as: Obligate aerobe, an organism that cannot survive without oxygen Obligate anaerobe, an organism that cannot survive in the presence of oxygen Obligate air-breather, a term used in fish physiology to describe those that respire entirely from the atmosphere Obligate biped, Bipedalism designed to walk on two legs Obligate carnivore, an organism dependent for survival on a diet of animal flesh. 
Obligate chimerism, a kind of organism with two distinct sets of DNA, always Obligate hibernation, a state of inactivity in which some organisms survive conditions of insufficiently available resources. Obligate intracellular parasite, a parasitic microorganism that cannot reproduce without entering a suitable host cell Obligate parasite, a parasite that cannot reproduce without exploiting a suitable host Obligate photoperiodic plant, a plant that requires sufficiently long or short nights before it initiates flowering, germination or similarly functions Obligate symbionts, organisms that can only live together in a symbiosis See also Opportunism (biological) Biology terminology" https://en.wikipedia.org/wiki/List%20of%20birds%20by%20flight%20heights,"This is a list of birds by flight height. Birds by flight height See also Organisms at high altitude List of birds by flight speed" https://en.wikipedia.org/wiki/Smash%20and%20Grab%20%28biology%29,"Smash and Grab is the name given to a technique developed by Charles S. Hoffman and Fred Winston used in molecular biology to rescue plasmids from yeast transformants into Escherichia coli, also known as E. coli, in order to amplify and purify them. In addition, it can be used to prepare yeast genomic DNA (and DNA from tissue samples) for Southern blot analyses or polymerase chain reaction (PCR)." https://en.wikipedia.org/wiki/Generalized%20Lagrangian%20mean,"In continuum mechanics, the generalized Lagrangian mean (GLM) is a formalism – developed by – to unambiguously split a motion into a mean part and an oscillatory part. The method gives a mixed Eulerian–Lagrangian description for the flow field, but appointed to fixed Eulerian coordinates. Background In general, it is difficult to decompose a combined wave–mean motion into a mean and a wave part, especially for flows bounded by a wavy surface: e.g. in the presence of surface gravity waves or near another undulating bounding surface (like atmospheric flow over mountainous or hilly terrain). However, this splitting of the motion in a wave and mean part is often demanded in mathematical models, when the main interest is in the mean motion – slowly varying at scales much larger than those of the individual undulations. From a series of postulates, arrive at the (GLM) formalism to split the flow: into a generalised Lagrangian mean flow and an oscillatory-flow part. The GLM method does not suffer from the strong drawback of the Lagrangian specification of the flow field – following individual fluid parcels – that Lagrangian positions which are initially close gradually drift far apart. In the Lagrangian frame of reference, it therefore becomes often difficult to attribute Lagrangian-mean values to some location in space. The specification of mean properties for the oscillatory part of the flow, like: Stokes drift, wave action, pseudomomentum and pseudoenergy – and the associated conservation laws – arise naturally when using the GLM method. The GLM concept can also be incorporated into variational principles of fluid flow. Notes" https://en.wikipedia.org/wiki/Collision%20detection,"Collision detection is the computational problem of detecting the intersection of two or more objects. Collision detection is a classic issue of computational geometry and has applications in various computing fields, primarily in computer graphics, computer games, computer simulations, robotics and computational physics. Collision detection algorithms can be divided into operating on 2D and 3D objects. 
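As a concrete instance of the intersection test at the heart of the collision detection problem just described, the sketch below checks whether two spheres (for example, two billiard balls) overlap. The function name and the radii are illustrative and not taken from any particular engine.

```python
import math

def spheres_collide(center_a, center_b, radius_a, radius_b):
    """Two spheres intersect iff the distance between their centres
    does not exceed the sum of their radii."""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    dz = center_a[2] - center_b[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    return distance <= radius_a + radius_b

# Two billiard balls of radius 0.0286 m whose centres are 5 cm apart: they overlap.
print(spheres_collide((0, 0, 0), (0.05, 0, 0), 0.0286, 0.0286))  # True
```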
Overview In physical simulation, experiments such as playing billiards are conducted. The physics of bouncing billiard balls are well understood, under the umbrella of rigid body motion and elastic collisions. An initial description of the situation would be given, with a very precise physical description of the billiard table and balls, as well as initial positions of all the balls. Given a force applied to the cue ball (probably resulting from a player hitting the ball with their cue stick), we want to calculate the trajectories, precise motion and eventual resting places of all the balls with a computer program. A program to simulate this game would consist of several portions, one of which would be responsible for calculating the precise impacts between the billiard balls. This particular example also turns out to be ill conditioned: a small error in any calculation will cause drastic changes in the final position of the billiard balls. Video games have similar requirements, with some crucial differences. While computer simulation needs to simulate real-world physics as precisely as possible, computer games need to simulate real-world physics in an acceptable way, in real time and robustly. Compromises are allowed, so long as the resulting simulation is satisfying to the game players. Collision detection in computer simulation Physical simulators differ in the way they react on a collision. Some use the softness of the material to calculate a force, which will resolve the collision in the following time steps like it is in reality. This is very CPU intensiv" https://en.wikipedia.org/wiki/FpgaC,"FpgaC is a compiler for a subset of the C programming language, which produces digital circuits that will execute the compiled programs. The circuits may use FPGAs or CPLDs as the target processor for reconfigurable computing, or even ASICs for dedicated applications. FpgaC's goal is to be an efficient High Level Language (HLL) for reconfigurable computing, rather than a Hardware Description Language (HDL) for building efficient custom hardware circuits. History The historical roots of FpgaC are in the Transmogrifier C 3.1 (TMCC) HDL, a 1996 BSD licensed Open source offering from University of Toronto. TMCC is one of the first FPGA C compilers, with work starting in 1994 and presented at IEEE's FCCM95. This predated the evolution from the Handel language to Handel-C work done shortly afterward at Oxford University Computing Laboratory. TMCC was renamed FpgaC for the initial SourceForge project release, with syntax modifications to start the evolution to ANSI C. Later development has removed all explicit HDL syntax from the language, and increased the subset of C supported. By capitalizing on ANSI C C99 extensions, the same functionality is now available by inference rather than non-standard language extensions. This shift away from non-standard HDL extensions was influenced in part by Streams-C from Los Alamos National Laboratory (now available commercially as Impulse C). In the years that have followed, compiling ANSI C for execution as FPGA circuits has become a mainstream technology. Commercial FPGA C compilers are available from multiple vendors, and ANSI C based System Level Tools have gone mainstream for system description and simulation languages. FPGA based Reconfigurable Computing offerings from industry leaders like Altera, Silicon Graphics, Seymour Cray's SRC Computers, and Xilinx have capitalized on two decades of government and university reconfigurable computing research. 
External links Transmogrifier C Homepage Oxford Handel-C FPGA System Le" https://en.wikipedia.org/wiki/Outline%20of%20Gottfried%20Wilhelm%20Leibniz,"The following outline is provided as an overview of and topical guide to Gottfried Wilhelm Leibniz: Gottfried Wilhelm (von) Leibniz (1 July 1646 [O.S. 21 June] – 14 November 1716); German polymath, philosopher logician, mathematician. Developed differential and integral calculus at about the same time and independently of Isaac Newton. Leibniz earned his keep as a lawyer, diplomat, librarian, and genealogist for the House of Hanover, and contributed to diverse areas. His impact continues to reverberate, especially his original contributions in logic and binary representations. Achievements and contributions Devices Leibniz calculator Logic Alphabet of human thought Calculus ratiocinator Mathematics Calculus General Leibniz rule Leibniz formula for Leibniz integral rule Philosophy Best of all possible worlds Characteristica universalis Identity of indiscernibles Pre-established harmony Principle of sufficient reason Physics Personal life Leibniz's political views Leibniz's religious views Family Major works by Leibniz De Arte Combinatoria Discourse on Metaphysics, (text at wikisource) Monadology, (text at wikisource) New Essays on Human Understanding Nova Methodus pro Maximis et Minimis Protogaea Théodicée Manuscript archives and translations of Leibniz's works Leibniz Archive (Hannover) at the Leibniz Research Center - Hannover Leibniz Archive (Potsdam) at the Brandenburg Academy of Humanities and Sciences Leibniz Archive (Munster), Leibniz-Forschungsstelle Münster digital edition Leibniz Archive (Berlin), digital edition Donald Rutherford's translations at UCSD Lloyd Strickland's translations at leibniz-translations.com Journals focused on Leibniz studies The Leibniz Review Studia Leibnitiana Organizations named after Leibniz Leibniz Association Leibniz College, affiliated with the University of Tübingen Leibniz Institute of European History Leibniz Institute for Polymer Research Leibniz Society of Nor" https://en.wikipedia.org/wiki/Remanence,"Remanence or remanent magnetization or residual magnetism is the magnetization left behind in a ferromagnetic material (such as iron) after an external magnetic field is removed. Colloquially, when a magnet is ""magnetized"", it has remanence. The remanence of magnetic materials provides the magnetic memory in magnetic storage devices, and is used as a source of information on the past Earth's magnetic field in paleomagnetism. The word remanence is from remanent + -ence, meaning ""that which remains"". The equivalent term residual magnetization is generally used in engineering applications. In transformers, electric motors and generators a large residual magnetization is not desirable (see also electrical steel) as it is an unwanted contamination, for example a magnetization remaining in an electromagnet after the current in the coil is turned off. Where it is unwanted, it can be removed by degaussing. Sometimes the term retentivity is used for remanence measured in units of magnetic flux density. Types Saturation remanence The default definition of magnetic remanence is the magnetization remaining in zero field after a large magnetic field is applied (enough to achieve saturation). The effect of a magnetic hysteresis loop is measured using instruments such as a vibrating sample magnetometer; and the zero-field intercept is a measure of the remanence. 
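The saturation-remanence passage above defines remanence as the zero-field intercept of the measured hysteresis loop. The sketch below estimates that intercept by linear interpolation on the descending branch of a loop; the data points are invented for illustration and the function name is not from any instrument library.

```python
def remanence_from_descending_branch(H, M):
    """Estimate remanent magnetization Mr as the magnetization at H = 0,
    interpolated linearly between the two samples that bracket zero field.
    H must be given in decreasing order (descending branch of the loop)."""
    for (h1, m1), (h2, m2) in zip(zip(H, M), zip(H[1:], M[1:])):
        if h1 >= 0 >= h2:            # the pair of points straddling H = 0
            if h1 == h2:
                return (m1 + m2) / 2
            return m1 + (0 - h1) * (m2 - m1) / (h2 - h1)
    raise ValueError("branch does not cross H = 0")

# Invented descending-branch data: field in kA/m, magnetization in kA/m.
H = [800, 400, 100, 0.5, -0.5, -100, -400, -800]
M = [480, 470, 430, 400, 398, 300, -200, -480]
print(remanence_from_descending_branch(H, M))  # ~399, the zero-field intercept Mr
```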
In physics this measure is converted to an average magnetization (the total magnetic moment divided by the volume of the sample) and denoted in equations as Mr. If it must be distinguished from other kinds of remanence, then it is called the saturation remanence or saturation isothermal remanence (SIRM) and denoted by Mrs. In engineering applications the residual magnetization is often measured using a B-H analyzer, which measures the response to an AC magnetic field (as in Fig. 1). This is represented by a flux density Br. This value of remanence is one of the most important parameters characterizing permanent ma" https://en.wikipedia.org/wiki/Protistology,"Protistology is a scientific discipline devoted to the study of protists, a highly diverse group of eukaryotic organisms. All eukaryotes apart from animals, plants and fungi are considered protists. Its field of study therefore overlaps with the more traditional disciplines of phycology, mycology, and protozoology, just as protists embrace mostly unicellular organisms described as algae, some organisms regarded previously as primitive fungi, and protozoa (""animal"" motile protists lacking chloroplasts). They are a paraphyletic group with very diverse morphologies and lifestyles. Their sizes range from unicellular picoeukaryotes only a few micrometres in diameter to multicellular marine algae several metres long. History The history of the study of protists has its origins in the 17th century. Since the beginning, the study of protists has been intimately linked to developments in microscopy, which have allowed important advances in the understanding of these organisms due to their generally microscopic nature. Among the pioneers was Anton van Leeuwenhoek, who observed a variety of free-living protists and in 1674 named them “very little animalcules”. During the 18th century studies on the Infusoria were dominated by Christian Gottfried Ehrenberg and Félix Dujardin. The term ""protozoology"" has become dated as understanding of the evolutionary relationships of the eukaryotes has improved, and is frequently replaced by the term ""protistology"". For example, the Society of Protozoologists, founded in 1947, was renamed International Society of Protistologists in 2005. However, the older term is retained in some cases (e.g., the Polish journal Acta Protozoologica). Journals and societies Dedicated academic journals include: Archiv für Protistenkunde, 1902-1998, Germany (renamed Protist, 1998-); Archives de la Societe Russe de Protistologie, 1922-1928, Russia; Journal of Protozoology, 1954-1993, USA (renamed Journal of Eukaryotic Microbiology, 1993-); Acta Protoz" https://en.wikipedia.org/wiki/Ergodic%20hypothesis,"In physics and thermodynamics, the ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time. Liouville's theorem states that, for a Hamiltonian system, the local density of microstates following a particle path through phase space is constant as viewed by an observer moving with the ensemble (i.e., the convective time derivative is zero). Thus, if the microstates are uniformly distributed in phase space initially, they will remain so at all times. But Liouville's theorem does not imply that the ergodic hypothesis holds for all Hamiltonian systems. 
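The ergodic hypothesis described above equates long-time averages with averages over equally probable accessible states. A minimal sketch of that identification, using a simple two-state Markov chain as a stand-in for a physical system, is shown below: the long-run fraction of time spent in one state is compared with its stationary (ensemble) probability. The transition probabilities are invented for illustration.

```python
import random

random.seed(0)
p_01, p_10 = 0.3, 0.2                  # invented transition probabilities between states 0 and 1
stationary_0 = p_10 / (p_01 + p_10)    # ensemble (stationary) probability of state 0

state, time_in_0, steps = 0, 0, 200_000
for _ in range(steps):
    time_in_0 += (state == 0)
    if state == 0:
        state = 1 if random.random() < p_01 else 0
    else:
        state = 0 if random.random() < p_10 else 1

print("time average  :", time_in_0 / steps)   # ~0.4 for a long run
print("ensemble value:", stationary_0)        # 0.4 exactly
```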
The ergodic hypothesis is often assumed in the statistical analysis of computational physics. The analyst would assume that the average of a process parameter over time and the average over the statistical ensemble are the same. This assumption—that it is as good to simulate a system over a long time as it is to make many independent realizations of the same system—is not always correct. (See, for example, the Fermi–Pasta–Ulam–Tsingou experiment of 1953.) Assumption of the ergodic hypothesis allows proof that certain types of perpetual motion machines of the second kind are impossible. Systems that are ergodic are said to have the property of ergodicity; a broad range of systems in geometry, physics, and probability are ergodic. Ergodic systems are studied in ergodic theory. Phenomenology In macroscopic systems, the timescales over which a system can truly explore the entirety of its own phase space can be sufficiently large that the thermodynamic equilibrium state exhibits some form of ergodicity breaking. A common example is that of spontaneous magnetisation in ferromagnetic systems, whereby below the Curie temperature the system preferentially adopts a non-zero magnetisation even though the er" https://en.wikipedia.org/wiki/Field-replaceable%20unit,"A field-replaceable unit (FRU) is a printed circuit board, part, or assembly that can be quickly and easily removed from a computer or other piece of electronic equipment, and replaced by the user or a technician without having to send the entire product or system to a repair facility. FRUs allow a technician lacking in-depth product knowledge to isolate faults and replace faulty components. The granularity of FRUs in a system impacts total cost of ownership and support, including the costs of stocking spare parts, where spares are deployed to meet repair time goals, how diagnostic tools are designed and implemented, levels of training for field personnel, whether end-users can do their own FRU replacement, etc. Other equipment FRUs are not strictly confined to computers but are also part of many high-end, lower-volume consumer and commercial products. For example, in military aviation, electronic components of line-replaceable units, typically known as shop-replaceable units (SRUs), are repaired at field-service backshops, usually by a ""remove and replace"" repair procedure, with specialized repair performed at centralized depot or by the OEM. History Many vacuum tube computers had FRUs: Pluggable units containing one or more vacuum tubes and various passive components Most transistorized and integrated circuit-based computers had FRUs: Computer modules, circuit boards containing discrete transistors and various passive components. Examples: IBM SMS cards DEC System Building Blocks cards DEC Flip-Chip cards Circuit boards containing monolithic ICs and/or hybrid ICs, such as IBM SLT cards. Vacuum tubes themselves are usually FRUs. For a short period starting in the late 1960s, some television set manufacturers made solid-state televisions with FRUs instead of a single board attached to the chassis. However modern televisions put all the electronics on one large board to reduce manufacturing costs. Trends As the sophistication and complexity of multi-replaceable" https://en.wikipedia.org/wiki/Sampson%20%28horse%29,"Sampson (later renamed Mammoth) was a Shire horse gelding born in 1846 and bred by Thomas Cleaver at Toddington Mills, Bedfordshire, England. 
According to Guinness World Records (1986) he was the tallest horse ever recorded, by 1850 measuring 21.25 hands in height. His peak weight was estimated at See also List of historical horses" https://en.wikipedia.org/wiki/Kew%20Rule,"The Kew Rule was used by some authors to determine the application of synonymous names in botanical nomenclature up to about 1906, but was and still is contrary to codes of botanical nomenclature including the International Code of Nomenclature for algae, fungi, and plants. Index Kewensis, a publication that aimed to list all botanical names for seed plants at the ranks of species and genus, used the Kew Rule until its Supplement IV was published in 1913 (prepared 1906–1910). The Kew Rule applied rules of priority in a more flexible way, so that when transferring a species to a new genus, there was no requirement to retain the epithet of the original species name, and future priority of the new name was counted from the time the species was transferred to the new genus. The effect has been summarized as ""nomenclature used by an established monographer or in a major publication should be adopted"". This is contrary to the modern article 11.4 of the Code of Nomenclature. History Beginnings The first discussion in print of what was to become known as the Kew Rule appears to have occurred in 1877 between Henry Trimen and Alphonse Pyramus de Candolle. Trimen did not think it was reasonable for older names discovered in the literature to destabilize the nomenclature that had been well accepted: Probably all botanists are agreed that it is very desirable to retain when possible old specific names, but some of the best authors do not certainly consider themselves bound by any generally accepted rule in this matter. Still less will they be inclined to allow that a writer is at liberty, as M. de Candolle thinks, to reject the specific appellations made by an author whose genera are accepted, in favour of older ones in other genera. It will appear to such that to do this is to needlessly create in each case another synonym. The end The first botanical code of nomenclature that declared itself to be binding was the 1906 publication that followed from the 1905 International " https://en.wikipedia.org/wiki/Hat%20notation,"A ""hat"" (circumflex (ˆ)) placed over a symbol is a mathematical notation with various uses. Estimated value In statistics, a circumflex (ˆ), called a ""hat"", is used to denote an estimator or an estimated value. For example, in the context of errors and residuals, the ""hat"" over the letter ε (written ε̂) indicates an observable estimate (the residuals) of an unobservable quantity called ε (the statistical errors). Another example of the hat operator denoting an estimator occurs in simple linear regression. Assuming a model of y = β₀ + β₁x + ε, with observations of independent variable data xᵢ and dependent variable data yᵢ, the estimated model is of the form ŷ = β̂₀ + β̂₁x, where the sum of squared residuals Σ(yᵢ − ŷᵢ)² is commonly minimized via least squares by finding optimal values of β̂₀ and β̂₁ for the observed data. Hat matrix In statistics, the hat matrix H projects the observed values y of the response variable to the predicted values ŷ: ŷ = Hy. Cross product In screw theory, one use of the hat operator is to represent the cross product operation. Since the cross product is a linear transformation, it can be represented as a matrix. The hat operator takes a vector and transforms it into its equivalent matrix.
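For the cross-product use of the hat operator described just above, the sketch below builds the 3 × 3 skew-symmetric matrix v̂ such that v̂w = v × w and checks it against a direct cross product. It uses NumPy and invented example vectors.

```python
import numpy as np

def hat(v):
    """Return the skew-symmetric matrix v_hat with v_hat @ w == np.cross(v, w)."""
    x, y, z = v
    return np.array([[0.0, -z,   y],
                     [z,   0.0, -x],
                     [-y,  x,   0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(hat(a) @ b)        # [-3.  6. -3.]
print(np.cross(a, b))    # [-3.  6. -3.]
```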
For example, in three dimensions, Unit vector In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or ""hat"", as in (pronounced ""v-hat""). Fourier transform The Fourier transform of a function is traditionally denoted by . See also Exterior algebra Top-hat filter Circumflex, noting that precomposed glyphs [letter-with-circumflex] do not exist for all letters." https://en.wikipedia.org/wiki/List%20of%20theorems,"This is a list of notable theorems. Lists of theorems and similar statements include: List of fundamental theorems List of lemmas List of conjectures List of inequalities List of mathematical proofs List of misnamed theorems Most of the results below come from pure mathematics, but some are from theoretical physics, economics, and other applied fields. 0–9 2-factor theorem (graph theory) 15 and 290 theorems (number theory) 2π theorem (Riemannian geometry) A B C D E F G H I J K L M N O P Q R S T U V W Z Theorems" https://en.wikipedia.org/wiki/Neutron-velocity%20selector,"A neutron-velocity selector is a device that allows neutrons of defined velocity to pass while absorbing all other neutrons, to produce a monochromatic neutron beam. It has the appearance of a many-bladed turbine. The blades are coated with a strongly neutron-absorbing material, such as Boron-10. Neutron-velocity selectors are commonly used in neutron research facility to produce a monochromatic beam of neutrons. Due to physical limitations of materials and motors, limiting the maximum speed of rotation of the blades, these devices are only useful for relatively slow neutrons." https://en.wikipedia.org/wiki/Zero-point%20energy,"Zero-point energy (ZPE) is the lowest possible energy that a quantum mechanical system may have. Unlike in classical mechanics, quantum systems constantly fluctuate in their lowest energy state as described by the Heisenberg uncertainty principle. Therefore, even at absolute zero, atoms and molecules retain some vibrational motion. Apart from atoms and molecules, the empty space of the vacuum also has these properties. According to quantum field theory, the universe can be thought of not as isolated particles but continuous fluctuating fields: matter fields, whose quanta are fermions (i.e., leptons and quarks), and force fields, whose quanta are bosons (e.g., photons and gluons). All these fields have zero-point energy. These fluctuating zero-point fields lead to a kind of reintroduction of an aether in physics since some systems can detect the existence of this energy. However, this aether cannot be thought of as a physical medium if it is to be Lorentz invariant such that there is no contradiction with Einstein's theory of special relativity. The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Physicists Richard Feynman and John Wheeler calculated the zero-point radiation of the vacuum to be an order of magnitude greater than nuclear energy, with a single light bulb containing enough energy to boil all the world's oceans. 
Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. A popular proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point" https://en.wikipedia.org/wiki/Teacher%20Institute%20for%20Evolutionary%20Science,"The Teacher Institute for Evolutionary Science (TIES) is a project of the Richard Dawkins Foundation for Reason and Science and a program of the Center for Inquiry which provides free workshops and materials to elementary, middle school, and, more recently, high school science teachers to enable them to effectively teach evolution based on the Next Generation Science Standards. History In 2013, Bertha Vazquez, TIES director and middle school science teacher in Miami, met Richard Dawkins at the University of Miami and discussed evolution education with him and a number of science professors. The discussion surrounded the issue of teachers feeling unprepared to teach evolution. This encounter and the understanding that teachers learn the most from each other inspired her to conduct workshops on evolution for her fellow teachers. After hearing about Vazquez's work, Dawkins followed up with a visit to Vazquez's school in 2014 to speak to teachers from the Miami-Dade County school district. Dawkins eventually asked Vazquez if she would be willing to take her workshop project nationwide. With the encouragement of Dawkins and funding from his foundation, and also with encouragement from Robyn Blumner of the Center for Inquiry, the Teacher Institute for Evolutionary Science began offering workshops in 2015. Activity The first TIES workshop was in April 2015 in collaboration with the Miami Science Museum. A total of ten workshops took place in 2015. Since then, the program has expanded, as of 2020, to over 200 workshops in all 50 states. While Bertha Vazquez presented many of the workshops earlier on, over 80 presenters are now active in the nationwide program. Presenters are usually high school or college biology educators in the states in which their workshops take place, and workshops take into account the given state's evolution education standards. Workshops vary in length, and in cases of longer workshops or webinars, scientists and other relevant guests are also" https://en.wikipedia.org/wiki/Irreducibility%20%28mathematics%29,"In mathematics, the concept of irreducibility is used in several ways. A polynomial over a field may be an irreducible polynomial if it cannot be factored over that field. In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial. In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations. Similarly, an irreducible module is another name for a simple module. Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety; where it means just the same as irreducible over an algebraic closure. In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space. 
A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if such a directed graph is strongly connected.) Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state. In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball. Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime if it cannot be written as a connected sum of two n-manifolds (neither of " https://en.wikipedia.org/wiki/Maximum%20theorem,"The maximum theorem provides conditions for the continuity of an optimized function and the set of its maximizers with respect to its parameters. The statement was first proven by Claude Berge in 1959. The theorem is primarily used in mathematical economics and optimal control. Statement of theorem Maximum Theorem. Let X and Θ be topological spaces, f : X × Θ → ℝ be a continuous function on the product X × Θ, and C : Θ ⇉ X be a compact-valued correspondence such that C(θ) ≠ ∅ for all θ ∈ Θ. Define the marginal function (or value function) f* : Θ → ℝ by f*(θ) = sup{f(x, θ) : x ∈ C(θ)} and the set of maximizers C* : Θ ⇉ X by C*(θ) = arg max{f(x, θ) : x ∈ C(θ)}. If C is continuous (i.e. both upper and lower hemicontinuous) at θ, then f* is continuous and C* is upper hemicontinuous with nonempty and compact values. As a consequence, the sup may be replaced by max. The maximum theorem can be used for minimization by considering −f instead. Interpretation The theorem is typically interpreted as providing conditions for a parametric optimization problem to have continuous solutions with regard to the parameter. In this case, Θ is the parameter space, f(x, θ) is the function to be maximized, and C(θ) gives the constraint set over which f is maximized. Then, f*(θ) is the maximized value of the function and C*(θ) is the set of points that maximize f. The result is that if the elements of an optimization problem are sufficiently continuous, then some, but not all, of that continuity is preserved in the solutions. Proof Throughout this proof we will use the term neighborhood to refer to an open set containing a particular point. We preface with a preliminary lemma, which is a general fact in the calculus of correspondences. Recall that a correspondence is closed if its graph is closed. Lemma. If A, B : Θ ⇉ X are correspondences, A is upper hemicontinuous and compact-valued, and B is closed, then A ∩ B, defined by (A ∩ B)(θ) = A(θ) ∩ B(θ), is upper hemicontinuous. Let θ ∈ Θ, and suppose G is an open set containing (A ∩ B)(θ). If A(θ) ⊆ G, then the result follows immediately. Otherwise, observe that for each x ∈ A(θ) \ G we have x ∉ B(θ), and since B is closed there is a neighborhood Uₓ × Vₓ of (θ, x) in which x′ ∉ B(θ′) whenever (θ′, x′) ∈ Uₓ × Vₓ. The collecti" https://en.wikipedia.org/wiki/Underwood%20Dudley,"Underwood Dudley (born January 6, 1937) is an American mathematician and writer. His popular works include several books describing crank mathematics by pseudomathematicians who incorrectly believe they have squared the circle or done other impossible things. Career Dudley was born in New York City. He received bachelor's and master's degrees from the Carnegie Institute of Technology and a PhD from the University of Michigan.
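Returning to the matrix notion of irreducibility above (a square matrix is irreducible exactly when the directed graph of its non-zero pattern is strongly connected), the sketch below performs that check with two breadth-first reachability passes. It is a plain illustration, not tied to any particular library.

```python
from collections import deque

def is_irreducible(matrix):
    """A square matrix is irreducible iff the digraph with an edge i -> j
    whenever matrix[i][j] != 0 is strongly connected."""
    n = len(matrix)

    def reachable(adjacency):
        seen, queue = {0}, deque([0])
        while queue:
            i = queue.popleft()
            for j in range(n):
                if adjacency(i, j) and j not in seen:
                    seen.add(j)
                    queue.append(j)
        return len(seen) == n

    forward = reachable(lambda i, j: matrix[i][j] != 0)    # vertex 0 reaches every vertex
    backward = reachable(lambda i, j: matrix[j][i] != 0)   # every vertex reaches vertex 0
    return forward and backward

print(is_irreducible([[0, 1], [1, 0]]))  # True: 0 <-> 1
print(is_irreducible([[1, 1], [0, 1]]))  # False: block upper triangular, no edge 1 -> 0
```

The same reachability test, applied to a transition matrix, decides whether a Markov chain is irreducible in the sense described in the passage.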
His academic career consisted of two years at Ohio State University followed by 37 at DePauw University, from which he retired in 2004. He edited the College Mathematics Journal and the Pi Mu Epsilon Journal, and was a Pólya Lecturer for the Mathematical Association of America (MAA) for two years. He is the discoverer of the Dudley triangle. Publications Dudley's popular books include Mathematical Cranks (MAA 1992, ), The Trisectors (MAA 1996, ), and Numerology: Or, What Pythagoras Wrought (MAA 1997, ). Dudley won the Trevor Evans Award for expository writing from the MAA in 1996. Dudley has also written and edited straightforward mathematical works such as Readings for Calculus (MAA 1993, ) and Elementary Number Theory (W.H. Freeman 1978, ). In 2009, he authored ""A Guide to Elementary Number Theory"" (MAA, 2009, ), published under Mathematical Association of America's Dolciani Mathematical Expositions. Lawsuit In 1995, Dudley was one of several people sued by William Dilworth for defamation because Mathematical Cranks included an analysis of Dilworth's ""A correction in set theory"", an attempted refutation of Cantor's diagonal method. The suit was dismissed in 1996 due to failure to state a claim. The dismissal was upheld on appeal in a decision written by jurist Richard Posner. From the decision: ""A crank is a person inexplicably obsessed by an obviously unsound idea—a person with a bee in his bonnet. To call a person a crank is to say that because of some quirk of temperament he is wasting his time pursuing a line of thought that is " https://en.wikipedia.org/wiki/Vivaldi%20coordinates,"Vivaldi Coordinate System is a decentralized Network Coordinate System, that allows for distributed systems such as peer-to-peer networks to estimate round-trip time (RTT) between arbitrary nodes in a network. Through this scheme, network topology awareness can be used to tune the network behavior to more efficiently distribute data. For example, in a peer-to-peer network, more responsive identification and delivery of content can be achieved. In the Azureus application, Vivaldi is used to improve the performance of the distributed hash table that facilitates query matches. Design The algorithm behind Vivaldi is an optimization algorithm that figures out the most stable configuration of points in a euclidean space such that distances between the points are as close as possible to real-world measured distances. In effect, the algorithm attempts to embed the multi-dimensional space that is latency measurements between computers into a low-dimensional euclidean space. A good analogy might be a spring-and-mass system in 3D space where each node is a mass and each connection between nodes are springs. The default lengths of the springs are the measured RTTs between nodes, and when the system is simulated, the coordinates of nodes correspond to the resulting 3D positions of the masses in the lowest energy state of the system. This design is taken from previous work in the field, the contribution that Vivaldi makes is to make this algorithm run in parallel across all the nodes in the network. Advantages Vivaldi can theoretically can scale indefinitely. The Vivaldi algorithm is relatively simple implement. Drawbacks Vivaldi's coordinates are points in a euclidean space, which requires the predicted distances to obey the triangle inequality as well as euclidean symmetry. 
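The spring-and-mass description of Vivaldi in the Design section above corresponds to a simple relaxation rule: after each RTT measurement, a node moves its coordinate a small step toward or away from its neighbour, in proportion to the prediction error. The sketch below shows that update in two dimensions; the step size and the sample measurements are invented, and the full algorithm additionally adapts the step using per-node confidence estimates.

```python
import math

def vivaldi_update(xi, xj, rtt, step=0.05):
    """Move node i's coordinate xi toward/away from xj so that the predicted
    distance |xi - xj| gets closer to the measured RTT (a spring relaxation step)."""
    dx, dy = xi[0] - xj[0], xi[1] - xj[1]
    dist = math.hypot(dx, dy) or 1e-9          # avoid dividing by zero
    error = rtt - dist                          # spring's displacement from rest length
    unit = (dx / dist, dy / dist)               # direction pushing i away from j
    return (xi[0] + step * error * unit[0],
            xi[1] + step * error * unit[1])

# Invented example: node i repeatedly measures a 50 ms RTT to a neighbour at the origin.
xi, xj = (1.0, 1.0), (0.0, 0.0)
for _ in range(200):
    xi = vivaldi_update(xi, xj, rtt=50.0)
print(round(math.hypot(*xi), 1))  # ~50.0: the predicted distance converges to the RTT
```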
However, there are many triangle inequality violations (TIVs) and symmetry violations on the Internet, mostly because of inefficient routing or distance distortion because connections on the inter" https://en.wikipedia.org/wiki/Anal%20pore,"The anal pore or cytoproct is a structure in various single-celled eukaryotes where waste is ejected after the nutrients from food have been absorbed into the cytoplasm. In ciliates, the anal pore (cytopyge) and cytostome are the only regions of the pellicle that are not covered by ridges, cilia or rigid covering. They serve as analogues of, respectively, the anus and mouth of multicellular organisms. The cytopyge's thin membrane allows vacuoles to be merged into the cell wall and emptied. Location The anal pore is an exterior opening of microscopic organisms through which undigested food waste, water, or gas are expelled from the body. It is also referred to as a cytoproct. This structure is found in different unicellular eukaryotes such as Paramecium. The anal pore is located on the ventral surface, usually in the posterior half of the cell. The anal pore itself is actually a structure made up of two components: piles of fibres, and microtubules. Function Digested nutrients from the vacuole pass into the cytoplasm, making the vacuole shrink and move to the anal pore, where it ruptures to release the waste content to the environment on the outside of the cell. The cytoproct is used for the excretion of indigestible debris contained in the food vacuoles. In paramecium, the anal pore is a region of pellicle that is not covered by ridges and cilia, and the area has thin pellicles that allow the vacuoles to be merged into the cell surface to be emptied. Most micro-organisms possess an anal pore for excretion, usually an opening on the pellicle through which indigestible debris is ejected. The opening and closing of the cytoproct resemble a reversible ring of tissue fusion occurring between the inner and outer layers located at the aboral end. An anal pore is not a permanently visible structure as it appears at defecation and disappears afterward. In ciliates, the anal cytostomes and cytopyge pore regions are not covered by either ridges or cilia or hard co" https://en.wikipedia.org/wiki/Fault%20Tolerant%20Ethernet,"Fault Tolerant Ethernet (FTE) is a proprietary protocol created by Honeywell, designed to provide rapid network redundancy on top of the spanning tree protocol. Each node is connected twice to a single LAN through dual network interface controllers. The driver and the FTE-enabled components allow network communication to occur over an alternate path when the primary path fails. The default time before a failure is detected is the Diagnostic Interval (1000 ms) multiplied by the Disjoin Multiplier (3), for a 3000 ms recovery time. This is similar to Switch Fault Tolerance (SFT) in Windows and to mode=1 (active-backup) in Linux. Supported hardware and software Windows 7/2003 or newer Honeywell Control Firewall (CF9) Honeywell C300 Controller Honeywell Series 8 I/O Technical overview Uses multicast (234.5.6.7) for the FTE community. Recommended maximum of 300 FTE nodes and 200 single connected Ethernet nodes (a machine with two network cards is considered two separate single connected Ethernet nodes). Recommended to have separate broadcast/multicast domains for different FTE communities. Recommended maximum of 3 tiers of switches.
Default UDP Source Port: 47837 Default UDP Destination Port : 51966" https://en.wikipedia.org/wiki/Index%20of%20information%20theory%20articles,"This is a list of information theory topics. A Mathematical Theory of Communication algorithmic information theory arithmetic coding channel capacity Communication Theory of Secrecy Systems conditional entropy conditional quantum entropy confusion and diffusion cross-entropy data compression entropic uncertainty (Hirchman uncertainty) entropy encoding entropy (information theory) Fisher information Hick's law Huffman coding information bottleneck method information theoretic security information theory joint entropy Kullback–Leibler divergence lossless compression negentropy noisy-channel coding theorem (Shannon's theorem) principle of maximum entropy quantum information science range encoding redundancy (information theory) Rényi entropy self-information Shannon–Hartley theorem Information theory Information theory topics" https://en.wikipedia.org/wiki/Tunneling%20protocol,"In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. It involves allowing private network communications to be sent across a public network (such as the Internet) through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel. The tunneling protocol works by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol. Uses A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4. Another important use is to provide services that are impractical or unsafe to be offered using only the underlying network services, such as providing a corporate network address to a remote user whose physical network address is not part of the corporate network. Circumventing firewall policy Users can also use tunneling to ""sneak through"" a firewall, using a protocol that the firewall would normally block, but ""wrapped"" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of ""wrapping"", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies). Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, an" https://en.wikipedia.org/wiki/Aerobic%20organism,"An aerobic organism or aerobe is an organism that can survive and grow in an oxygenated environment. The ability to exhibit aerobic respiration may yield benefits to the aerobic organism, as aerobic respiration yields more energy than anaerobic respiration. Energy production of the cell involves the synthesis of ATP by an enzyme called ATP synthase. 
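To make the HTTP CONNECT mechanism described above concrete, the sketch below opens a raw TCP connection to a proxy, asks it to establish a tunnel to a target host, and checks for the proxy's 200 reply. The addresses proxy.example.com:3128 and example.org:443 are placeholders, the reply parsing is deliberately simplistic, and a real client would go on to run TLS (or any other protocol) over the returned socket.

```python
import socket

def open_connect_tunnel(proxy_host, proxy_port, target_host, target_port, timeout=10):
    """Ask an HTTP proxy to open a TCP tunnel to target_host:target_port
    using the CONNECT method, and return the connected socket."""
    sock = socket.create_connection((proxy_host, proxy_port), timeout=timeout)
    request = (
        "CONNECT {host}:{port} HTTP/1.1\r\n"
        "Host: {host}:{port}\r\n"
        "\r\n"
    ).format(host=target_host, port=target_port)
    sock.sendall(request.encode("ascii"))

    reply = sock.recv(4096).decode("ascii", errors="replace")
    status_line = reply.split("\r\n", 1)[0]
    if " 200 " not in status_line + " ":
        sock.close()
        raise OSError("proxy refused the tunnel: " + status_line)
    return sock  # from here on, bytes pass through the proxy untouched

# Placeholder addresses; uncomment to try against a real proxy.
# tunnel = open_connect_tunnel("proxy.example.com", 3128, "example.org", 443)
```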
In aerobic respiration, ATP synthase is coupled with an electron transport chain in which oxygen acts as a terminal electron acceptor. In July 2020, marine biologists reported that aerobic microorganisms (mainly), in ""quasi-suspended animation"", were found in organically poor sediments, up to 101.5 million years old, 250 feet below the seafloor in the South Pacific Gyre (SPG) (""the deadest spot in the ocean""), and could be the longest-living life forms ever found. Types Obligate aerobes need oxygen to grow. In a process known as cellular respiration, these organisms use oxygen to oxidize substrates (for example sugars and fats) and generate energy. Facultative anaerobes use oxygen if it is available, but also have anaerobic methods of energy production. Microaerophiles require oxygen for energy production, but are harmed by atmospheric concentrations of oxygen (21% O2). Aerotolerant anaerobes do not use oxygen but are not harmed by it. When an organism is able to survive in both oxygen and anaerobic environments, the use of the Pasteur effect can distinguish between facultative anaerobes and aerotolerant organisms. If the organism is using fermentation in an anaerobic environment, the addition of oxygen will cause facultative anaerobes to suspend fermentation and begin using oxygen for respiration. Aerotolerant organisms must continue fermentation in the presence of oxygen. Facultative organisms grow in both oxygen rich media and oxygen free media. Aerobic Respiration Aerobic organisms use a process called aerobic respiration to create ATP from ADP and a phosphate. Glucose (a monosaccharide) is oxidized to power the " https://en.wikipedia.org/wiki/Biomineralization,"Biomineralization, also written biomineralisation, is the process by which living organisms produce minerals, often resulting in hardened or stiffened mineralized tissues. It is an extremely widespread phenomenon: all six taxonomic kingdoms contain members that are able to form minerals, and over 60 different minerals have been identified in organisms. Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. These minerals often form structural features such as sea shells and the bone in mammals and birds. Organisms have been producing mineralized skeletons for the past 550 million years. Calcium carbonates and calcium phosphates are usually crystalline, but silica organisms (sponges, diatoms...) are always non-crystalline minerals. Other examples include copper, iron, and gold deposits involving bacteria. Biologically formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity-sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin). In terms of taxonomic distribution, the most common biominerals are the phosphate and carbonate salts of calcium that are used in conjunction with organic polymers such as collagen and chitin to give structural support to bones and shells. The structures of these biocomposite materials are highly controlled from the nanometer to the macroscopic level, resulting in complex architectures that provide multifunctional properties. Because this range of control over mineral growth is desirable for materials engineering applications, there is interest in understanding and elucidating the mechanisms of biologically-controlled biomineralization. 
Types Mineralization can be subdivided into different categories depending on the following: the organisms or processes that create chemical conditions necessary for mineral formation, the origin of the substrate at the site of m" https://en.wikipedia.org/wiki/Routing%20bridge,"A routing bridge or RBridge, also known as a TRILL switch, is a network device that implements the TRILL protocol, as specified by the IETF and should not be confused with BRouters (Bridging Routers). RBridges are compatible with previous IEEE 802.1 customer bridges as well as IPv4 and IPv6 routers and end nodes. They are invisible to current IP routers and, like routers, RBridges terminate the bridge spanning tree protocol. The RBridges in a campus share connectivity information amongst themselves using the IS-IS link-state protocol. A link-state protocol is one in which connectivity is broadcast to all the RBridges, so that each RBridge knows about all the other RBridges, and the connectivity between them. This gives RBridges enough information to compute pair-wise optimal paths for unicast, and calculate distribution trees for delivery of frames either to destinations whose location is unknown or to multicast or broadcast groups. IS-IS was chosen as for this purpose because: it runs directly over Layer 2, so it can be run without configuration (no IP addresses need to be assigned) it is easy to extend by defining new TLV (type-length-value) data elements and sub-elements for carrying TRILL information. To mitigate temporary loop issues, RBridges forward based on a header with a hop count. RBridges also specify the next hop RBridge as the frame destination when forwarding unicast frames across a shared-media link, which avoids spawning additional copies of frames during a temporary loop. A Reverse Path Forwarding Check and other checks are performed on multi-destination frames to further control potentially looping traffic." https://en.wikipedia.org/wiki/Operational%20design%20domain,"Operational design domain (ODD) is a term for a set of operating conditions for an automated system, often used in the field of autonomous vehicles. These operating conditions include environmental, geographical and time of day constraints, traffic and roadway characteristics. The ODD is used by manufacturers to indicate where their product will operate safely. The concept of ODD indicates that autonomated systems have limitations and that they should operate within predefined restrictions to ensure safety and performance. Defining an ODD is important for developers and regulators to establish clear expectations and communicate the intended operating conditions of automated systems. Beyond self-driving cars, ODD is also used for autonomous ships, autonomous trains, agricultural robots, and other robots. ODD definition by standards Structure of ODD A report by US Department of Transportation subdivides an ODD description into six top-level categories and further immediate subcategories. The top-level categories are the physical infrastructure, operational constraints, objects, connectivity, environemental conditions and zones. The physical infrastructure includes subcategories for roadway types, surfaces, edges and geometry. The operational constraints include subcategories for speed limits and traffic conditions. Environmental conditions include weather, illumination, and similar sub-categories. Zones include subcategories like regions, states, school areas, construction sites and similar. 
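The RBridge passage above notes that the IS-IS link-state protocol gives every RBridge the full connectivity graph, from which it computes pair-wise optimal paths. The sketch below runs Dijkstra's algorithm over such a link-state database, represented as an invented adjacency map of link costs; it only illustrates the path computation, not the IS-IS encoding or TRILL's distribution trees.

```python
import heapq

def shortest_paths(links, source):
    """Dijkstra over a link-state database: links[a][b] is the cost of the a->b link.
    Returns the minimum cost from source to every reachable RBridge."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(heap, (nd, neighbour))
    return dist

# Invented four-RBridge campus with symmetric link costs.
links = {
    "RB1": {"RB2": 1, "RB3": 4},
    "RB2": {"RB1": 1, "RB3": 1, "RB4": 5},
    "RB3": {"RB1": 4, "RB2": 1, "RB4": 1},
    "RB4": {"RB2": 5, "RB3": 1},
}
print(shortest_paths(links, "RB1"))  # {'RB1': 0, 'RB2': 1, 'RB3': 2, 'RB4': 3}
```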
Examples In 2022, Mercedes Benz announced a product with a new ODD, which is Level 3 autonomous driving at 130 km/h. See also Scenario (vehicular automation)" https://en.wikipedia.org/wiki/Beamforming,"Beamforming or spatial filtering is a signal processing technique used in sensor arrays for directional signal transmission or reception. This is achieved by combining elements in an antenna array in such a way that signals at particular angles experience constructive interference while others experience destructive interference. Beamforming can be used at both the transmitting and receiving ends in order to achieve spatial selectivity. The improvement compared with omnidirectional reception/transmission is known as the directivity of the array. Beamforming can be used for radio or sound waves. It has found numerous applications in radar, sonar, seismology, wireless communications, radio astronomy, acoustics and biomedicine. Adaptive beamforming is used to detect and estimate the signal of interest at the output of a sensor array by means of optimal (e.g. least-squares) spatial filtering and interference rejection. Techniques To change the directionality of the array when transmitting, a beamformer controls the phase and relative amplitude of the signal at each transmitter, in order to create a pattern of constructive and destructive interference in the wavefront. When receiving, information from different sensors is combined in a way where the expected pattern of radiation is preferentially observed. For example, in sonar, to send a sharp pulse of underwater sound towards a ship in the distance, simply simultaneously transmitting that sharp pulse from every sonar projector in an array fails because the ship will first hear the pulse from the speaker that happens to be nearest the ship, then later pulses from speakers that happen to be further from the ship. The beamforming technique involves sending the pulse from each projector at slightly different times (the projector closest to the ship last), so that every pulse hits the ship at exactly the same time, producing the effect of a single strong pulse from a single powerful projector. The same technique can be c" https://en.wikipedia.org/wiki/Priority%20inversion,"In computer science, priority inversion is a scenario in scheduling in which a high-priority task is indirectly superseded by a lower-priority task effectively inverting the assigned priorities of the tasks. This violates the priority model that high-priority tasks can only be prevented from running by higher-priority tasks. Inversion occurs when there is a resource contention with a low-priority task that is then preempted by a medium-priority task. Formulation Consider two tasks H and L, of high and low priority respectively, either of which can acquire exclusive use of a shared resource R. If H attempts to acquire R after L has acquired it, then H becomes blocked until L relinquishes the resource. Sharing an exclusive-use resource (R in this case) in a well-designed system typically involves L relinquishing R promptly so that H (a higher-priority task) does not stay blocked for excessive periods of time. Despite good design, however, it is possible that a third task M of medium priority becomes runnable during L's use of R. 
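In the sonar beamforming example above, each projector fires at a slightly different time so that all pulses arrive at the target simultaneously. For a uniform linear array steered toward an angle θ off broadside, the per-element lag is d·sin(θ)/c. The sketch below computes those transmit delays for an invented array geometry and steering angle; the sound speed of 1500 m/s is an assumption for sea water.

```python
import math

def steering_delays(num_elements, spacing_m, angle_deg, speed=1500.0):
    """Transmit delays (seconds) for a uniform linear array so that all pulses
    add coherently in the direction angle_deg (0 = broadside)."""
    lag = spacing_m * math.sin(math.radians(angle_deg)) / speed  # per-element lag
    raw = [n * lag for n in range(num_elements)]
    first = min(raw)                  # shift so the earliest element fires at t = 0
    return [t - first for t in raw]

# Invented example: 8 projectors, 0.5 m apart, steering 30 degrees off broadside.
for n, t in enumerate(steering_delays(8, 0.5, 30.0)):
    print("element %d fires %.3f ms after the first" % (n, 1000 * t))
```

On receive, the same delays are applied to the sensor signals before they are summed (delay-and-sum), which is the combining step the passage describes.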
At this point, M being higher in priority than L, preempts L (since M does not depend on R), causing L to not be able to relinquish R promptly, in turn causing H—the highest-priority process—to be unable to run (that is, H suffers unexpected blockage indirectly caused by lower-priority tasks like M). Consequences In some cases, priority inversion can occur without causing immediate harm—the delayed execution of the high-priority task goes unnoticed, and eventually, the low-priority task releases the shared resource. However, there are also many situations in which priority inversion can cause serious problems. If the high-priority task is left starved of the resources, it might lead to a system malfunction or the triggering of pre-defined corrective measures, such as a watchdog timer resetting the entire system. The trouble experienced by the Mars Pathfinder lander in 1997 is a classic example of problems caused by priority inversion in rea" https://en.wikipedia.org/wiki/In%20situ%20adaptive%20tabulation,"In situ adaptive tabulation (ISAT) is an algorithm for the approximation of nonlinear relationships. ISAT is based on multiple linear regressions that are dynamically added as additional information is discovered. The technique is adaptive as it adds new linear regressions dynamically to a store of possible retrieval points. ISAT maintains error control by defining finer granularity in regions of increased nonlinearity. A binary tree search transverses cutting hyper-planes to locate a local linear approximation. ISAT is an alternative to artificial neural networks that is receiving increased attention for desirable characteristics, namely: scales quadratically with increased dimension approximates functions with discontinuities maintains explicit bounds on approximation error controls local derivatives of the approximating function delivers new data training without re-optimization ISAT was first proposed by Stephen B. Pope for computational reduction of turbulent combustion simulation and later extended to model predictive control. It has been generalized to an ISAT framework that operates based on any input and output data regardless of the application. An improved version of the algorithm was proposed just over a decade later of the original publication, including new features that allow you to improve the efficiency of the search for tabulated data, as well as error control. See also Predictive analytics Radial basis function network Recurrent neural networks Support vector machine Tensor product network" https://en.wikipedia.org/wiki/Open%20Compute%20Project,"The Open Compute Project (OCP) is an organization that shares designs of data center products and best practices among companies, including Arm, Meta, IBM, Wiwynn, Intel, Nokia, Google, Microsoft, Seagate Technology, Dell, Rackspace, Hewlett Packard Enterprise, NVIDIA, Cisco, Goldman Sachs, Fidelity, Lenovo and Alibaba Group. Project structure The Open Compute Project Foundation is a 501(c)(6) non-profit incorporated in the state of Delaware. Rocky Bullock serves as the Foundation's CEO and has a seat on the board of directors. As of July 2020, there are 7 members who serve on the board of directors which is made up of one individual member and six organizational members. Mark Roenigk (Facebook) is the Foundation's president and chairman. Andy Bechtolsheim is the individual member. 
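The priority-inversion formulation given earlier (tasks H, L and M sharing a resource R) can be played out in a tiny time-stepped model: without any countermeasure, the medium-priority task delays H by preempting L inside its critical section, while priority inheritance bounds the delay. The sketch below is an invented toy scheduler, not a real RTOS; the tick counts and priority values are arbitrary.

```python
def run(priority_inheritance):
    """Toy model of the H/L/M scenario: L holds resource R for 4 ticks,
    H arrives at t=2 and needs R for 2 ticks, M arrives at t=3 with 5 ticks
    of pure CPU work. Returns the tick at which H finishes."""
    remaining = {"L_critical": 4, "H": 2, "M": 5}
    arrival = {"L_critical": 0, "H": 2, "M": 3}
    base_priority = {"H": 3, "M": 2, "L_critical": 1}
    finish = {}
    holder = "L_critical"            # L already holds R when the scenario starts

    for t in range(100):
        ready = [task for task in remaining
                 if remaining[task] > 0 and arrival[task] <= t]
        if not ready:
            break

        def effective(task):
            # Inheritance: while H is blocked on R, the holder runs at H's priority.
            if (priority_inheritance and task == holder
                    and remaining["H"] > 0 and arrival["H"] <= t):
                return base_priority["H"]
            return base_priority[task]

        # H cannot run while the resource is still held by L.
        runnable = [task for task in ready
                    if not (task == "H" and remaining["L_critical"] > 0)]
        task = max(runnable, key=effective)
        remaining[task] -= 1
        if remaining[task] == 0:
            finish[task] = t + 1
    return finish["H"]

print("H finishes at t =", run(priority_inheritance=False))  # t = 11: delayed by M
print("H finishes at t =", run(priority_inheritance=True))   # t = 6: M no longer interferes
```

Priority inheritance, temporarily running the lock holder at the blocked task's priority, is one of the standard remedies; the Mars Pathfinder incident mentioned in the passage is widely reported to have been resolved by enabling exactly that option on the offending mutex.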
In addition to Mark Roenigk, who represents Facebook, other organizations on the Open Compute board of directors include Intel (Rebecca Weekly), Microsoft (Kushagra Vaid), Google (Partha Ranganathan), and Rackspace (Jim Hawkins). A current list of members can be found on the opencompute.org website. History The Open Compute Project began at Facebook as an internal project in 2009 called ""Project Freedom"". The hardware designs and engineering team were led by Amir Michael (Manager, Hardware Design) and sponsored by Jonathan Heiliger (VP, Technical Operations) and Frank Frankovsky (Director, Hardware Design and Infrastructure). The three would later open source the designs of Project Freedom and co-found the Open Compute Project. The project was announced at a press event at Facebook's headquarters in Palo Alto on April 7, 2011. OCP projects The Open Compute Project Foundation maintains a number of OCP projects, such as: Server designs Two years after the Open Compute Project had started, with regard to a more modular server design, it was admitted that ""the new design is still a long way from live data centers"". However, some aspects published were used in Facebook's Prineville dat" https://en.wikipedia.org/wiki/Research%20software%20engineering,"Research software engineering is the use of software engineering practices in research applications. The term was proposed in a research paper in 2010 in response to an empirical survey on tools used for software development in research projects. It started to be used in the United Kingdom in 2012, when it was needed to define the type of software development needed in research. This focuses on reproducibility, reusability, and accuracy of data analysis and applications created for research. Support Various types of associations and organisations have been created around this role to support the creation of posts in universities and research institutes. In 2014 a Research Software Engineer Association was created in the UK, which attracted 160 members in the first three months. Other countries like the Netherlands, Germany, and the USA followed, creating similar communities, and there are similar efforts being pursued in Asia, Australia, Canada, New Zealand, the Nordic countries, and Belgium. In January 2021 the International Council of RSE Associations was introduced. The UK counts almost 30 universities and institutes with groups that provide access to software expertise to different areas of research. Additionally, the Engineering and Physical Sciences Research Council created a Research Software Engineer fellowship to promote this role and help the creation of RSE groups across the UK, with calls in 2015, 2017, and 2020. The world's first RSE conference took place in the UK in September 2016; it was repeated in 2017, 2018, and 2019, and was planned again for 2020. In 2019 the first national RSE conferences in Germany and the Netherlands were held; the next editions were planned for 2020 and then cancelled. The SORSE (A Series of Online Research Software Events) community was established in late-2020 in response to the COVID-19 pandemic and ran its first online event on 2 September 2020. 
See also Open Energy Modelling Initiative — relevant here because the bulk of the development occur" https://en.wikipedia.org/wiki/Data%20center,"A data center (American English) or data centre (Commonwealth English) is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems. Since IT operations are crucial for business continuity, it generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a small town. Estimated global data center electricity consumption in 2022 was 240-340 TWh, or roughly 1-1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers. History Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised. During the boom of the microcomputer industry, and especia" https://en.wikipedia.org/wiki/List%20of%20longest-living%20organisms,"This is a list of the longest-living biological organisms: the individual(s) (or in some instances, clones) of a species with the longest natural maximum life spans. For a given species, such a designation may include: The oldest known individual(s) that are currently alive, with verified ages. Verified individual record holders, such as the longest-lived human, Jeanne Calment, or the longest-lived domestic cat, Creme Puff. The definition of ""longest-living"" used in this article considers only the observed or estimated length of an individual organism's natural lifespan – that is, the duration of time between its birth or conception, or the earliest emergence of its identity as an individual organism, and its death – and does not consider other conceivable interpretations of ""longest-living"", such as the length of time between the earliest appearance of a species in the fossil record and the present (the historical ""age"" of the species as a whole), the time between a species' first speciation and its extinction (the phylogenetic ""lifespan"" of the species), or the range of possible lifespans of a species' individuals. This list includes long-lived organisms that are currently still alive as well as those that are dead. 
Determining the length of an organism's natural lifespan is complicated by many problems of definition and interpretation, as well as by practical difficulties in reliably measuring age, particularly for extremely old organisms and for those that reproduce by asexual cloning. In many cases the ages listed below are estimates based on observed present-day growth rates, which may differ significantly from the growth rates experienced thousands of years ago. Identifying the longest-living organisms also depends on defining what constitutes an ""individual"" organism, which can be problematic, since many asexual organisms and clonal colonies defy one or both of the traditional colloquial definitions of individuality (having a distinct genotype and havin" https://en.wikipedia.org/wiki/Planetary%20protection,"Planetary protection is a guiding principle in the design of an interplanetary mission, aiming to prevent biological contamination of both the target celestial body and the Earth in the case of sample-return missions. Planetary protection reflects both the unknown nature of the space environment and the desire of the scientific community to preserve the pristine nature of celestial bodies until they can be studied in detail. There are two types of interplanetary contamination. Forward contamination is the transfer of viable organisms from Earth to another celestial body. Back contamination is the transfer of extraterrestrial organisms, if they exist, back to the Earth's biosphere. History The potential problem of lunar and planetary contamination was first raised at the International Astronautical Federation VIIth Congress in Rome in 1956. In 1958 the U.S. National Academy of Sciences (NAS) passed a resolution stating, “The National Academy of Sciences of the United States of America urges that scientists plan lunar and planetary studies with great care and deep concern so that initial operations do not compromise and make impossible forever after critical scientific experiments.” This led to creation of the ad hoc Committee on Contamination by Extraterrestrial Exploration (CETEX), which met for a year and recommended that interplanetary spacecraft be sterilized, and stated, “The need for sterilization is only temporary. Mars and possibly Venus need to remain uncontaminated only until study by manned ships becomes possible”. In 1959, planetary protection was transferred to the newly formed Committee on Space Research (COSPAR). COSPAR in 1964 issued Resolution 26 affirming that: In 1967, the US, USSR, and UK ratified the United Nations Outer Space Treaty. The legal basis for planetary protection lies in Article IX of this treaty: This treaty has since been signed and ratified by 104 nation-states. Another 24 have signed but not ratified. All the current spa" https://en.wikipedia.org/wiki/Contracted%20Bianchi%20identities,"In general relativity and tensor calculus, the contracted Bianchi identities are: where is the Ricci tensor, the scalar curvature, and indicates covariant differentiation. These identities are named after Luigi Bianchi, although they had been already derived by Aurel Voss in 1880. In the Einstein field equations, the contracted Bianchi identity ensures consistency with the vanishing divergence of the matter stress–energy tensor. 
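The displayed formulas in this entry, and in the proof that follows, appear to have been stripped during text extraction. As a reconstruction in standard index notation (a sketch using one common convention, not the article's own typesetting), the second Bianchi identity and the contracted identity it implies are:

```latex
% Second (differential) Bianchi identity:
\[
  \nabla_{\ell} R_{abmn} + \nabla_{n} R_{ab\ell m} + \nabla_{m} R_{abn\ell} = 0 .
\]
% Contracting twice with the metric, as the proof below does, gives the
% contracted Bianchi identity:
\[
  g^{am} g^{bn}\left( \nabla_{\ell} R_{abmn} + \nabla_{n} R_{ab\ell m} + \nabla_{m} R_{abn\ell} \right) = 0
  \quad\Longrightarrow\quad
  \nabla_{\mu} R^{\mu}{}_{\nu} = \tfrac{1}{2}\,\nabla_{\nu} R ,
\]
% which is equivalent to the Einstein tensor being divergence-free:
\[
  \nabla_{\mu} G^{\mu\nu} = 0 , \qquad G^{\mu\nu} = R^{\mu\nu} - \tfrac{1}{2}\, g^{\mu\nu} R .
\]
```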
Proof Start with the Bianchi identity Contract both sides of the above equation with a pair of metric tensors: The first term on the left contracts to yield a Ricci scalar, while the third term contracts to yield a mixed Ricci tensor, The last two terms are the same (changing dummy index n to m) and can be combined into a single term which shall be moved to the right, which is the same as Swapping the index labels l and m on the left side yields See also Bianchi identities Einstein tensor Einstein field equations General theory of relativity Ricci calculus Tensor calculus Riemann curvature tensor Notes" https://en.wikipedia.org/wiki/NetFPGA,"The NetFPGA project is an effort to develop open-source hardware and software for rapid prototyping of computer network devices. The project targeted academic researchers, industry users, and students. It was not the first platform of its kind in the networking community. NetFPGA used an FPGA-based approach to prototyping networking devices. This allows users to develop designs that are able to process packets at line-rate, a capability generally unafforded by software based approaches. NetFPGA focused on supporting developers that can share and build on each other's projects and IP building blocks. History The project began in 2007 as a research project at Stanford University called the NetFPGA-1G. The 1G was originally designed as a tool to teach students about networking hardware architecture and design. The 1G platform consisted of a PCI board with a Xilinx Virtex-II pro FPGA and 4 x 1GigE interfaces feeding into it, along with a downloadable code repository containing an IP library and a few example designs. The project grew and by the end of 2010 more than 1,800 1G boards sold to over 150 educational institutions spanning 15 countries. During that growth the 1G not only gained popularity as a tool for education, but increasingly as a tool for research. By 2011 over 46 academic papers had been published regarding research that used the NetFPGA-1G platform. Additionally, over 40 projects were contributed to the 1G code repository by the end of 2010. In 2009 work began in secrecy on the NetFPGA-10G with 4 x 10 GigE interfaces. The 10G board was also designed with a much larger FPGA, more memory, and a number of other upgrades. The first release of the platform, codenamed “Howth”, was planned for December 24, 2010, and includes a repository similar to that of the 1G, containing a small IP library and two reference designs. From a platform design perspective, the 10G is diverging in a few significant ways from the 1G platform. For instance, the interface standa" https://en.wikipedia.org/wiki/Logic%20synthesis,"In computer engineering, logic synthesis is a process by which an abstract specification of desired circuit behavior, typically at register transfer level (RTL), is turned into a design implementation in terms of logic gates, typically by a computer program called a synthesis tool. Common examples of this process include synthesis of designs specified in hardware description languages, including VHDL and Verilog. Some synthesis tools generate bitstreams for programmable logic devices such as PALs or FPGAs, while others target the creation of ASICs. Logic synthesis is one step in circuit design in the electronic design automation, the others are place and route and verification and validation. History The roots of logic synthesis can be traced to the treatment of logic by George Boole (1815 to 1864), in what is now termed Boolean algebra. 
In 1938, Claude Shannon showed that the two-valued Boolean algebra can describe the operation of switching circuits. In the early days, logic design involved manipulating the truth table representations as Karnaugh maps. The Karnaugh map-based minimization of logic is guided by a set of rules on how entries in the maps can be combined. A human designer can typically only work with Karnaugh maps containing up to four to six variables. The first step toward automation of logic minimization was the introduction of the Quine–McCluskey algorithm that could be implemented on a computer. This exact minimization technique presented the notion of prime implicants and minimum cost covers that would become the cornerstone of two-level minimization. Nowadays, the much more efficient Espresso heuristic logic minimizer has become the standard tool for this operation. Another area of early research was in state minimization and encoding of finite-state machines (FSMs), a task that was the bane of designers. The applications for logic synthesis lay primarily in digital computer design. Hence, IBM and Bell Labs played a pivotal role in the early" https://en.wikipedia.org/wiki/Methyl%20green,"Methyl green (CI 42585) is a cationic or positive charged stain related to Ethyl Green that has been used for staining DNA since the 19th century. It has been used for staining cell nuclei either as a part of the classical Unna-Pappenheim stain or as a nuclear counterstain ever since. In recent years, its fluorescent properties, when bound to DNA, have positioned it as useful for far-red imaging of live cell nuclei. Fluorescent DNA staining is routinely used in cancer prognosis. Methyl green also emerges as an alternative stain for DNA in agarose gels, fluorometric assays, and flow cytometry. It has also been shown that it can be used as an exclusion viability stain for cells. Its interaction with DNA has been shown to be non-intercalating, in other words, not inserting itself into the DNA, but instead electrostatic with the DNA major groove. It is used in combination with pyronin in the methyl green–pyronin stain, which stains and differentiates DNA and RNA. When excited at 244 or 388 nm in a neutral aqueous solution, methyl green produces a fluorescent emission at 488 or 633 nm, respectively. The presence or absence of DNA does not affect these fluorescence behaviors. When binding DNA under neutral aqueous conditions, methyl green also becomes fluorescent in the far red with an excitation maximum of 633 nm and an emission maximum of 677 nm. Commercial Methyl green preparations are often contaminated with Crystal violet. Crystal violet can be removed by chloroform extraction." https://en.wikipedia.org/wiki/Programmer%20%28hardware%29,"A programmer, device programmer, chip programmer, device burner, or PROM writer is a piece of electronic equipment that arranges written software or firmware to configure programmable non-volatile integrated circuits, called programmable devices. The target devices include PROM, EPROM, EEPROM, Flash memory, eMMC, MRAM, FeRAM, NVRAM, PLDs, PLAs, PALs, GALs, CPLDs, FPGAs, and microcontrollers. Function Programmer hardware has two variants. One is configuring the target device itself with a socket on the programmer. Another is configuring the device on a printed circuit board. In the former case, the target device is inserted into a socket (usually ZIF) on top of the programmer. 
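Stepping back to the two-level minimization described in the logic synthesis entry above: the merging step that a designer performs by eye on a Karnaugh map, and that the Quine–McCluskey algorithm mechanizes, can be sketched in a few lines of Python. The three-variable function with minterms 1, 3, 5 and 7 is an illustrative assumption, not an example from the article, and only the prime-implicant phase is shown, not the minimum-cover selection that follows it.

```python
# Minimal sketch (not a production tool) of the first phase of the
# Quine-McCluskey procedure: repeatedly merge implicants that differ in
# exactly one bit until only prime implicants remain.
from itertools import combinations

def merge(a, b):
    """Return the merged cube if a and b differ in exactly one position."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    cubes = {format(m, f"0{nbits}b") for m in minterms}
    primes = set()
    while cubes:
        merged, used = set(), set()
        for a, b in combinations(sorted(cubes), 2):
            m = merge(a, b)
            if m is not None:
                merged.add(m)
                used.update((a, b))
        primes |= cubes - used          # anything that never merged is prime
        cubes = merged
    return primes

# f(a, b, c) = 1 on minterms 1, 3, 5, 7  ->  the single prime implicant "--1",
# i.e. f = c, which is what a Karnaugh map would also show at a glance.
print(prime_implicants({1, 3, 5, 7}, 3))
```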
If the device is not in a standard DIP package, a plug-in adapter board, which converts the footprint to another socket, is used. In the latter case, the device programmer is directly connected to the printed circuit board by a connector, usually with a cable. This approach is called on-board programming, in-circuit programming, or in-system programming. Afterwards, the data is transferred from the programmer into the device by applying signals through the connecting pins. Some devices have a serial interface for receiving the programming data (including JTAG interface). Other devices require the data on parallel pins, followed by a programming pulse with a higher voltage for programming the data into the device. Usually device programmers are connected to a personal computer through a parallel port, USB port, or LAN interface. A software program on the computer then transfers the data to the programmer, selects the device and interface type, and starts the programming process to read/write/erase/blank the data inside the device. Types There are four general types of device programmers: Automated programmers (multi-programming sites, having a set of sockets) for mass production. These systems utilize robotic pick and place handlers with on-board sites. This allows for high volume and compl" https://en.wikipedia.org/wiki/Hydrodynamic%20delivery,"Hydrodynamic Delivery (HD) is a method of DNA insertion in rodent models. Genes are delivered via injection into the bloodstream of the animal, and are expressed in the liver. This protocol is helpful to determine gene function, regulate gene expression, and develop pharmaceuticals in vivo. Methods Hydrodynamic Delivery was developed as a way to insert genes without viral infection (transfection). The procedure requires a high-volume DNA solution to be inserted into the veins of the rodent using a high-pressure needle. The volume of the DNA solution is typically equal to 8-10% of the animal's body weight, and is injected within 5-7 seconds. The pressure of the insertion leads to cardiac congestion (increased pressure in the heart), allowing the DNA solution to flow through the bloodstream and accumulate in the liver. The pressure expands the pores in the cell membrane, forcing the DNA molecules into the parenchyma, or the functional cells of the organ. In the liver, these cells are the hepatocytes. In less than two minutes after the injection, the pressure returns to natural levels, and the pores shrink back, trapping the DNA inside of the cell. After injection, the majority of genes are expressed in the liver of the animal over a long period of time. Originally developed to insert DNA, further developments in HD have enabled the insertion of RNA, proteins, and short oligonucleotides into cells. Applications The development of Hydrodynamic Delivery methods provides an alternative way to perform in vivo experiments. This method has been shown to be effective in small mammals, without the potential risks and complications of viral transfection. Applications of these studies include: testing regulatory elements, generating antibodies, analyzing gene therapy techniques, and developing models for diseases. Typically, genes are expressed in the liver, but the procedure can be altered to express genes in kidneys, lungs, muscles, heart, and pancreas. Gene therapy Hydrodynamic De" https://en.wikipedia.org/wiki/Kobon%20triangle%20problem,"The Kobon triangle problem is an unsolved problem in combinatorial geometry first stated by Kobon Fujimura (1903-1983). 
The problem asks for the largest number N(k) of nonoverlapping triangles whose sides lie on an arrangement of k lines. Variations of the problem consider the projective plane rather than the Euclidean plane, and require that the triangles not be crossed by any other lines of the arrangement. Known upper bounds Saburo Tamura proved that the number of nonoverlapping triangles realizable by k lines is at most ⌊k(k - 2)/3⌋. G. Clément and J. Bader proved more strongly that this bound cannot be achieved when k is congruent to 0 or 2 (mod 6). The maximum number of triangles is therefore at most one less in these cases. The same bounds can be equivalently stated, without use of the floor function, as: k(k - 2)/3 when k is congruent to 3 or 5 (mod 6); (k(k - 2) - 2)/3 when k is congruent to 1 or 4 (mod 6); and (k(k - 2) - 3)/3 when k is congruent to 0 or 2 (mod 6). Solutions yielding this number of triangles are known when k is 3, 4, 5, 6, 7, 8, 9, 13, 15 or 17. For k = 10, 11 and 12, the best solutions known reach a number of triangles one less than the upper bound. Known constructions Given an optimal solution with k0 > 3 lines, other Kobon triangle solution numbers can be found for all ki-values where ki+1 = 2ki - 1, by using the procedure by D. Forge and J. L. Ramirez Alfonsin. For example, the solution for k0 = 5 leads to the maximal number of nonoverlapping triangles for k = 5, 9, 17, 33, 65, .... Examples See also Roberts's triangle theorem, on the minimum number of triangles that lines can form" https://en.wikipedia.org/wiki/Making%20Mathematics%20with%20Needlework,"Making Mathematics with Needlework: Ten Papers and Ten Projects is an edited volume on mathematics and fiber arts. It was edited by Sarah-Marie Belcastro and Carolyn Yackel, and published in 2008 by A K Peters, based on a meeting held in 2005 in Atlanta by the American Mathematical Society. Topics The book includes ten different mathematical fiber arts projects, by eight contributors. An introduction provides a history of the connections between mathematics, mathematics education, and the fiber arts. Each of its ten project chapters is illustrated by many color photographs and diagrams, and is organized into four sections: an overview of the project, a section on the mathematics connected to it, a section of ideas for using the project as a teaching activity, and directions for constructing the project. Although there are some connections between topics, they can be read independently of each other, in any order. The thesis of the book is that directed exercises in fiber arts construction can help teach both mathematical visualization and concepts from three-dimensional geometry. The book uses knitting, crochet, sewing, and cross-stitch, but deliberately avoids weaving as a topic already well-covered in mathematical fiber arts publications. Projects in the book include a quilt in the form of a Möbius strip, a ""bidirectional hat"" connected to the theory of Diophantine equations, a shawl with a fractal design, a knitted torus connecting to discrete approximations of curvature, a sampler demonstrating different forms of symmetry in wallpaper group, ""algebraic socks"" with connections to modular arithmetic and the Klein four-group, a one-sided purse sewn together following a description by Lewis Carroll, a demonstration of braid groups on a cable-knit pillow, an embroidered graph drawing of an Eulerian graph, and topological pants. Beyond belcastro and Yackel, the contributors to the book include Susan Goldstine, Joshua Holden, Lana Holden, Mary D. Shepherd, Amy F. 
Sz" https://en.wikipedia.org/wiki/HCMOS,"HCMOS (""high-speed CMOS"") is the set of specifications for electrical ratings and characteristics, forming the 74HC00 family, a part of the 7400 series of integrated circuits. The 74HC00 family followed, and improved upon, the 74C00 series (which provided an alternative CMOS logic family to the 4000 series but retained the part number scheme and pinouts of the standard 7400 series (especially the 74LS00 series)) . Some specifications include: DC supply voltage DC input voltage range DC output voltage range input rise and fall times output rise and fall times HCMOS also stands for high-density CMOS. The term was used to describe microprocessors, and other complex integrated circuits, which use a smaller manufacturing processes, producing more transistors per area. The Freescale 68HC11 is an example of a popular HCMOS microcontroller. Variations HCT stands for high-speed CMOS with transistor–transistor logic voltages. These devices are similar to the HCMOS types except they will operate at standard TTL power supply voltages and logic input levels. This allows for direct pin-to-pin compatible CMOS replacements to reduce power consumption without loss of speed. HCU stands for high-speed CMOS un-buffered. This type of CMOS contains no buffer and is ideal for crystals and other ceramic oscillators needing linearity. VHCMOS, or AHC, stands for very high-speed CMOS or advanced high-speed CMOS. Typical propagation delay time is between 3 ns and 4 ns. The speed is similar to Bipolar Schottky transistor TTL. AHCT stands for advanced high-speed CMOS with TTL inputs. Typical propagation delay time is between 5 ns and 6 ns." https://en.wikipedia.org/wiki/Universal%20dielectric%20response,"In physics and electrical engineering, the universal dielectric response, or UDR, refers to the observed emergent behaviour of the dielectric properties exhibited by diverse solid state systems. In particular this widely observed response involves power law scaling of dielectric properties with frequency under conditions of alternating current, AC. First defined in a landmark article by A. K. Jonscher in Nature published in 1977, the origins of the UDR were attributed to the dominance of many-body interactions in systems, and their analogous RC network equivalence. The universal dielectric response manifests in the variation of AC Conductivity with frequency and is most often observed in complex systems consisting of multiple phases of similar or dissimilar materials. Such systems, which can be called heterogenous or composite materials, can be described from a dielectric perspective as a large network consisting of resistor and capacitor elements, known also as an RC network. At low and high frequencies, the dielectric response of heterogeneous materials is governed by percolation pathways. If a heterogeneous material is represented by a network in which more than 50% of the elements are capacitors, percolation through capacitor elements will occur. This percolation results in conductivity at high and low frequencies that is directly proportional to frequency. Conversely, if the fraction of capacitor elements in the representative RC network (Pc) is lower than 0.5, dielectric behavior at low and high frequency regimes is independent of frequency. At intermediate frequencies, a very broad range of heterogeneous materials show a well-defined emergent region, in which power law correlation of admittance to frequency is observed. 
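The RC-network picture described in the preceding passage can be sketched numerically. Everything below (grid size, component values, capacitor fraction p) is an illustrative assumption; the sketch simply computes the two-terminal admittance of a small random resistor-capacitor grid by nodal analysis, and per the percolation argument above, the log-log slope of |Y| in the intermediate band is expected to sit near the capacitor fraction for large enough networks.

```python
# Numerical sketch (not from the article) of a random RC network between two
# terminals; its two-terminal admittance Y(w) is computed by nodal analysis.
import numpy as np

rng = np.random.default_rng(0)
rows, cols, p = 6, 8, 0.5          # grid size and capacitor fraction (assumptions)
R, C = 1e3, 1e-9                   # 1 kOhm resistors, 1 nF capacitors (assumptions)

def node(r, c):
    """Column 0 is terminal A (index 0), the last column is terminal B (1),
    everything else is an internal node."""
    if c == 0:
        return 0
    if c == cols - 1:
        return 1
    return 2 + r * (cols - 2) + (c - 1)

n_nodes = 2 + rows * (cols - 2)

# Each grid edge is a capacitor with probability p, otherwise a resistor.
edges = []
for r in range(rows):
    for c in range(cols):
        for dr, dc in ((0, 1), (1, 0)):
            r2, c2 = r + dr, c + dc
            if r2 < rows and c2 < cols and node(r, c) != node(r2, c2):
                edges.append((node(r, c), node(r2, c2), rng.random() < p))

def admittance(w):
    """Two-terminal admittance seen between A and B at angular frequency w."""
    Y = np.zeros((n_nodes, n_nodes), dtype=complex)
    for i, j, is_cap in edges:
        y = 1j * w * C if is_cap else 1.0 / R
        Y[i, i] += y; Y[j, j] += y; Y[i, j] -= y; Y[j, i] -= y
    internal = np.arange(2, n_nodes)
    # Impose V_A = 1, V_B = 0 and solve the internal node equations.
    v_int = np.linalg.solve(Y[np.ix_(internal, internal)], -Y[internal, 0])
    # Current injected at A equals the network admittance, since V_A - V_B = 1.
    return Y[0, 0] + Y[0, internal] @ v_int

w = np.logspace(3, 9, 25)                        # angular-frequency sweep, rad/s
mag = np.abs([admittance(x) for x in w])
mid = slice(8, 18)                               # intermediate ("emergent") band
slope = np.polyfit(np.log10(w[mid]), np.log10(mag[mid]), 1)[0]
print(f"log-log slope in the emergent band: {slope:.2f} (capacitor fraction p = {p})")
```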
The power law emergent region is the key feature of the UDR. In materials or systems exhibiting UDR, the overall dielectric response from high to low frequencies is symmetrical, being centered at the middle point of the emergent region, whic" https://en.wikipedia.org/wiki/Kill%20pill,"In computing, a kill pill is a mechanism or a technology designed to render systems useless either by user command, or under a predefined set of circumstances. Kill pill technology is most commonly used to disable lost or stolen devices for security purposes, but can also be used for the enforcement of rules and contractual obligations. Applications Lost and stolen devices Kill pill technology is used prominently in smartphones, especially in the disablement of lost or stolen devices. A notable example is Find My iPhone, a service that allows the user to password protect or wipe their iDevice(s) remotely, aiding in the protection of private data. Similar applications exist for other smartphone operating systems, including Android, BlackBerry, and Windows Phone. Anti-piracy measure Kill pill technology has been notably used as an anti-piracy measure. Windows Vista was released with the ability to severely limit its own functionality if it was determined that the copy was obtained through piracy. The feature was later dropped after complaints that false positives caused genuine copies of Vista to act as though they were pirated. Removal of malicious software The concept of a kill pill is also applied to the remote removal by a server of malicious files or applications from a client's system. Such technology is a standard component of most handheld computing devices, mainly due to their generally more limited operating systems and means of obtaining applications. Such functionality is also reportedly available to applications downloaded from the Windows Store on Windows 8 operating systems. Vehicles Kill pill technology is used frequently in vehicles for a variety of reasons. Remote vehicle disablement can be used to prevent a vehicle from starting, to prevent it from moving, and to prevent the vehicle's continued operation. Non-remotely, vehicles can require driver recognition before starting or moving, such as asking for a password or some form of biometrics fro" https://en.wikipedia.org/wiki/Ethnomathematics,"In mathematics education, ethnomathematics is the study of the relationship between mathematics and culture. Often associated with ""cultures without written expression"", it may also be defined as ""the mathematics which is practised among identifiable cultural groups"". It refers to a broad cluster of ideas ranging from distinct numerical and mathematical systems to multicultural mathematics education. The goal of ethnomathematics is to contribute both to the understanding of culture and the understanding of mathematics, and mainly to lead to an appreciation of the connections between the two. Development and meaning The term ""ethnomathematics"" was introduced by the Brazilian educator and mathematician Ubiratan D'Ambrosio in 1977 during a presentation for the American Association for the Advancement of Science. Since D'Ambrosio put forth the term, people - D'Ambrosio included - have struggled with its meaning (""An etymological abuse leads me to use the words, respectively, ethno and mathema for their categories of analysis and tics from (from techne)"".). 
The following is a sampling of some of the definitions of ethnomathematics proposed between 1985 and 2006: ""The mathematics which is practiced among identifiable cultural groups such as national-tribe societies, labour groups, children of certain age brackets and professional classes"". ""The mathematics implicit in each practice"". ""The study of mathematical ideas of a non-literate culture"". ""The codification which allows a cultural group to describe, manage and understand reality"". ""Mathematics…is conceived as a cultural product which has developed as a result of various activities"". ""The study and presentation of mathematical ideas of traditional peoples"". ""Any form of cultural knowledge or social activity characteristic of a social group and/or cultural group that can be recognized by other groups such as Western anthropologists, but not necessarily by the group of origin, as mathematical knowledge or mathematica" https://en.wikipedia.org/wiki/Flexible%20organic%20light-emitting%20diode,"A flexible organic light-emitting diode (FOLED) is a type of organic light-emitting diode (OLED) incorporating a flexible plastic substrate on which the electroluminescent organic semiconductor is deposited. This enables the device to be bent or rolled while still operating. Currently the focus of research in industrial and academic groups, flexible OLEDs form one method of fabricating a rollable display. Technical details and applications An OLED emits light due to the electroluminescence of thin films of organic semiconductors approximately 100 nm thick. Regular OLEDs are usually fabricated on a glass substrate, but by replacing glass with a flexible plastic such as polyethylene terephthalate (PET) among others, OLEDs can be made both bendable and lightweight. Such materials may not be suitable for comparable devices based on inorganic semiconductors due to the need for lattice matching and the high temperature fabrication procedure involved. In contrast, flexible OLED devices can be fabricated by deposition of the organic layer onto the substrate using a method derived from inkjet printing, allowing the inexpensive and roll-to-roll fabrication of printed electronics. Flexible OLEDs may be used in the production of rollable displays, electronic paper, or bendable displays which can be integrated into clothing, wallpaper or other curved surfaces. Prototype displays have been exhibited by companies such as Sony, which are capable of being rolled around the width of a pencil. Disadvantages Both flexible substrate itself as well as the process of bending the device introduce stress into the materials. There may be residual stress from the deposition of layers onto a flexible substrate, thermal stresses due to the different coefficient of thermal expansion of materials in the device, in addition to the external stress from the bending of the device. Stress introduced into the organic layers may lower the efficiency or brightness of the device as it is deformed" https://en.wikipedia.org/wiki/Embedded%20Java,"Embedded Java refers to versions of the Java program language that are designed for embedded systems. Since 2010 embedded Java implementations have come closer to standard Java, and are now virtually identical to the Java Standard Edition. Since Java 9 customization of the Java Runtime through modularization removes the need for specialized Java profiles targeting embedded devices. 
History Although in the past some differences existed between embedded Java and traditional PC based Java, the only difference now is that embedded Java code in these embedded systems is mainly contained in constrained memory, such as flash memory. A complete convergence has taken place since 2010, and now Java software components running on large systems can run directly with no recompilation at all on design-to-cost mass-production devices (such as consumers, industrial, white goods, healthcare, metering, smart markets in general) CORE embedded Java API for a unified Embedded Java ecosystem In order for a software component to run on any Java system, it must target the core minimal API provided by the different providers of the embedded Java ecosystem. Companies share the same eight packages of pre-written programs. The packages (java.lang, java.io, java.util, ... ) form the CORE Embedded Java API, which means that embedded programmers using the Java language can use them in order to make any worthwhile use of the Java language. Old distinctions between SE embedded API and ME embedded API from ORACLE Java SE embedded is based on desktop Java Platform, Standard Edition. It is designed to be used on systems with at least 32 MB of RAM, and can work on Linux ARM, x86, or Power ISA, and Windows XP and Windows XP Embedded architectures. Java ME embedded used to be based on the Connected Device Configuration subset of Java Platform, Micro Edition. It is designed to be used on systems with at least 8 MB of RAM, and can work on Linux ARM, PowerPC, or MIPS architecture. See also Excelsio" https://en.wikipedia.org/wiki/Tutte%20homotopy%20theorem,"In mathematics, the Tutte homotopy theorem, introduced by , generalises the concept of ""path"" from graphs to matroids, and states roughly that closed paths can be written as compositions of elementary closed paths, so that in some sense they are homotopic to the trivial closed path. Statement A matroid on a set Q is specified by a class of non-empty subsets M of Q, called circuits, such that no element of M contains another, and if X and Y are in M, a is in X and Y, b is in X but not in Y, then there is some Z in M containing b but not a and contained in X∪Y. The subsets of Q that are unions of circuits are called flats (this is the language used in Tutte's original paper, however in modern usage the flats of a matroid mean something different). The elements of M are called 0-flats, the minimal non-empty flats that are not 0-flats are called 1-flats, the minimal nonempty flats that are not 0-flats or 1-flats are called 2-flats, and so on. A path is a finite sequence of 0-flats such that any two consecutive elements of the path lie in some 1-flat. An elementary path is one of the form (X,Y,X), or (X,Y,Z,X) with X,Y,Z all lying in some 2-flat. Two paths P and Q such that the last 0-flat of P is the same as the first 0-flat of Q can be composed in the obvious way to give a path PQ. Two paths are called homotopic if one can be obtained from the other by the operations of adding or removing elementary paths inside a path, in other words changing a path PR to PQR or vice versa, where Q is elementary. A weak form of Tutte's homotopy theorem states that any closed path is homotopic to the trivial path. A stronger form states a similar result for paths not meeting certain ""convex"" subsets." 
https://en.wikipedia.org/wiki/Comparison%20of%20vector%20algebra%20and%20geometric%20algebra,"Geometric algebra is an extension of vector algebra, providing additional algebraic structures on vector spaces, with geometric interpretations. Vector algebra uses all dimensions and signatures, as does geometric algebra, notably 3+1 spacetime as well as 2 dimensions. Basic concepts and operations Geometric algebra (GA) is an extension or completion of vector algebra (VA). The reader is herein assumed to be familiar with the basic concepts and operations of VA and this article will mainly concern itself with operations in the GA of 3D space (nor is this article intended to be mathematically rigorous). In GA, vectors are not normally written boldface as the meaning is usually clear from the context. The fundamental difference is that GA provides a new product of vectors called the ""geometric product"". Elements of GA are graded multivectors: scalars are grade 0, usual vectors are grade 1, bivectors are grade 2 and the highest grade (3 in the 3D case) is traditionally called the pseudoscalar and designated . The ungeneralized 3D vector form of the geometric product is: that is the sum of the usual dot (inner) product and the outer (exterior) product (this last is closely related to the cross product and will be explained below). In VA, entities such as pseudovectors and pseudoscalars need to be bolted on, whereas in GA the equivalent bivector and pseudovector respectively exist naturally as subspaces of the algebra. For example, applying vector calculus in 2 dimensions, such as to compute torque or curl, requires adding an artificial 3rd dimension and extending the vector field to be constant in that dimension, or alternately considering these to be scalars. The torque or curl is then a normal vector field in this 3rd dimension. By contrast, geometric algebra in 2 dimensions defines these as a pseudoscalar field (a bivector), without requiring a 3rd dimension. Similarly, the scalar triple product is ad hoc, and can instead be expressed uniformly using the ex" https://en.wikipedia.org/wiki/Kepler%E2%80%93Bouwkamp%20constant,"In plane geometry, the Kepler–Bouwkamp constant (or polygon inscribing constant) is obtained as a limit of the following sequence. Take a circle of radius 1. Inscribe a regular triangle in this circle. Inscribe a circle in this triangle. Inscribe a square in it. Inscribe a circle, regular pentagon, circle, regular hexagon and so forth. The radius of the limiting circle is called the Kepler–Bouwkamp constant. It is named after Johannes Kepler and , and is the inverse of the polygon circumscribing constant. Numerical value The decimal expansion of the Kepler–Bouwkamp constant is The natural logarithm of the Kepler-Bouwkamp constant is given by where is the Riemann zeta function. If the product is taken over the odd primes, the constant is obtained ." https://en.wikipedia.org/wiki/Gajski%E2%80%93Kuhn%20chart,"The Gajski–Kuhn chart (or Y diagram) depicts the different perspectives in VLSI hardware design. Mostly, it is used for the development of integrated circuits. Daniel Gajski and Robert Kuhn developed it in 1983. In 1985, Robert Walker and Donald Thomas refined it. According to this model, the development of hardware is perceived within three domains that are depicted as three axis and produce a Y. Along these axis, the abstraction levels that describe the degree of abstraction. The outer shells are generalisations, the inner ones refinements of the same subject. 
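Referring back to the Kepler–Bouwkamp constant entry above, whose decimal expansion and series were lost in extraction: the construction it describes shrinks the radius by a factor of cos(pi/n) at each inscribed regular n-gon, so the limiting radius is the infinite product of cos(pi/n) over n = 3, 4, 5, ... A short numeric sketch (the cutoff N and the crude tail estimate are assumptions of the sketch) recovers the constant's leading digits:

```python
# Partial product for the polygon-inscribing (Kepler-Bouwkamp) limit described
# above; with the tail estimate the result comes out near 0.1149420448...
from math import cos, pi, exp

N = 200_000
partial = 1.0
for n in range(3, N + 1):
    partial *= cos(pi / n)

# log cos(pi/n) ~ -pi^2 / (2 n^2) for large n, so the factors omitted beyond N
# contribute roughly exp(-pi^2 / (2 N)) in total.
tail = exp(-pi * pi / (2 * N))
print(f"partial product up to N = {N}: {partial:.10f}")
print(f"with tail estimate:           {partial * tail:.10f}")
```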
The issue in hardware development is most often a top-down design problem. This is perceived by the three domains of behaviour, structure, and the layout that goes top-down to more detailed abstraction levels. The designer can select one of the perspectives and then switch from one view to another. Generally, the design process is not following a specific sequence in this diagram. On the system level, basic properties of an electronic system are determined. For the behavioural description, block diagrams are used by making abstractions of signals and their time response. Blocks used in the structure domain are CPUs, memory chip, etc. The algorithmic level is defined by the definition of concurrent algorithms (signals, loops, variables, assignments). In the structural domain, blocks like ALUs are in use. The register-transfer level (RTL) is a more detailed abstraction level on which the behaviour between communicating registers and logic units is described. Here, data structures and data flows are defined. In the geometric view, the design step of the floorplan is located. The logical level is described in the behaviour perspective by boolean equations. In the structural view, this is displayed with gates and flip-flops. In the geometric domain, the logical level is described by standard cells. The behaviour of the circuit level is described by mathematics using differential equations or logical equa" https://en.wikipedia.org/wiki/Taphotaxon,"A taphotaxon (from the Greek ταφος, taphos meaning burial and ταξις, taxis meaning ordering) is an invalid taxon based on fossils remains that have been altered in a characteristic way during burial and diagenesis. The fossils so altered have distinctive characteristics that make them appear to be a new taxon, but these characteristics are spurious and do not reflect any significant taxonomic distinction from an existing fossil taxon. The term was first proposed by Spencer G. Lucas in 2001, who particularly applied it to spurious ichnotaxons, but it has since been applied to body fossils such as Nuia (interpreted as cylindrical oncolites formed around filamentous cyanobacteria) or Ivanovia (thought to be a taphotaxon of Anchicondium or Eugonophyllum); conulariids, and crustaceans. In his original definition of the term, Lucas emphasized that he was not seeking to create a new field of taphotaxonomy. The term is intended simply as a useful description of a particular type of invalid taxon. It should not be used indiscriminately, particularly with ichnotaxons, where the fact that an ichnotaxon derives part of its morphology from taphonomic processes may not always render it an invalid ichnotaxon." https://en.wikipedia.org/wiki/Legendre%20transformation,"In mathematics, the Legendre transformation (or Legendre transform), first introduced by Adrien-Marie Legendre in 1787 when studying the minimal surface problem, is an involutive transformation on real-valued functions that are convex on a real variable. Specifically, if a real-valued multivariable function is convex on one of its independent real variables, then the Legendre transform with respect to this variable is applicable to the function. In physical problems, it is used to convert functions of one quantity (such as position, pressure, or temperature) into functions of the conjugate quantity (momentum, volume, and entropy, respectively). 
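The defining formulas of the Legendre transformation entry above appear further on with their symbols stripped during extraction; a reconstruction in standard notation (a sketch, not the article's own typesetting) of the definition and of the inverse-derivative condition it mentions is:

```latex
% Legendre transform of a convex function f defined on an interval I:
\[
  f^{*}(x^{*}) \;=\; \sup_{x \in I} \bigl( x^{*} x - f(x) \bigr) .
\]
% For sufficiently smooth convex f, the transform is fixed up to an additive
% constant by the first derivatives being inverse functions of each other:
\[
  D f^{*} = \bigl( D f \bigr)^{-1} ,
  \qquad\text{equivalently}\qquad
  f'\!\bigl( (f^{*})'(x^{*}) \bigr) = x^{*} .
\]
```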
In this way, it is commonly used in classical mechanics to derive the Hamiltonian formalism out of the Lagrangian formalism (or vice versa) and in thermodynamics to derive the thermodynamic potentials, as well as in the solution of differential equations of several variables. For sufficiently smooth functions on the real line, the Legendre transform of a function can be specified, up to an additive constant, by the condition that the functions' first derivatives are inverse functions of each other. This can be expressed in Euler's derivative notation as where is an operator of differentiation, represents an argument or input to the associated function, is an inverse function such that , or equivalently, as and in Lagrange's notation. The generalization of the Legendre transformation to affine spaces and non-convex functions is known as the convex conjugate (also called the Legendre–Fenchel transformation), which can be used to construct a function's convex hull. Definition Let be an interval, and a convex function; then the Legendre transform of is the function defined by where denotes the supremum over , e.g., in is chosen such that is maximized at each , or is such that as a bounded value throughout exists (e.g., when is a linear function). The transform is always well-defined when is convex. Th" https://en.wikipedia.org/wiki/Lemniscate%20constant,"In mathematics, the lemniscate constant is a transcendental mathematical constant that is the ratio of the perimeter of Bernoulli's lemniscate to its diameter, analogous to the definition of for the circle. Equivalently, the perimeter of the lemniscate is . The lemniscate constant is closely related to the lemniscate elliptic functions and approximately equal to 2.62205755. The symbol is a cursive variant of ; see Pi § Variant pi. Gauss's constant, denoted by G, is equal to . John Todd named two more lemniscate constants, the first lemniscate constant and the second lemniscate constant . Sometimes the quantities or are referred to as the lemniscate constant. History Gauss's constant is named after Carl Friedrich Gauss, who calculated it via the arithmetic–geometric mean as . By 1799, Gauss had two proofs of the theorem that where is the lemniscate constant. The lemniscate constant and first lemniscate constant were proven transcendental by Theodor Schneider in 1937 and the second lemniscate constant and Gauss's constant were proven transcendental by Theodor Schneider in 1941. In 1975, Gregory Chudnovsky proved that the set is algebraically independent over , which implies that and are algebraically independent as well. But the set (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over . In fact, Forms Usually, is defined by the first equality below. where is the complete elliptic integral of the first kind with modulus , is the beta function, is the gamma function and is the Riemann zeta function. The lemniscate constant can also be computed by the arithmetic–geometric mean , Moreover, which is analogous to where is the Dirichlet beta function and is the Riemann zeta function. Gauss's constant is typically defined as the reciprocal of the arithmetic–geometric mean of 1 and the square root of 2, after his calculation of published in 1800: Gauss's constant is equal t" https://en.wikipedia.org/wiki/Classification%20of%20discontinuities,"Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. 
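Returning to the lemniscate constant entry above: the numerical relations it quotes can be checked directly from the arithmetic-geometric mean. The identity varpi = pi * G used below is the standard relation between the lemniscate constant and Gauss's constant; the code is only an illustrative check.

```python
# Numeric check (an illustrative sketch, not from the article) of the relations
# quoted above: Gauss's constant G = 1/agm(1, sqrt(2)) and the lemniscate
# constant varpi = pi * G ~ 2.62205755...
from math import pi, sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

G = 1 / agm(1.0, sqrt(2.0))   # Gauss's constant, ~0.8346268...
varpi = pi * G                # lemniscate constant, ~2.6220575...
print(f"G     = {G:.10f}")
print(f"varpi = {varpi:.10f}")
```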
If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set, a dense set, or even the entire domain of the function. The oscillation of a function at a point quantifies these discontinuities as follows: in a removable discontinuity, the distance that the value of the function is off by is the oscillation; in a jump discontinuity, the size of the jump is the oscillation (assuming that the value at the point lies between these limits of the two sides); in an essential discontinuity, oscillation measures the failure of a limit to exist; the limit is constant. A special case is if the function diverges to infinity or minus infinity, in which case the oscillation is not defined (in the extended real numbers, this is a removable discontinuity). Classification For each of the following, consider a real valued function of a real variable defined in a neighborhood of the point at which is discontinuous. Removable discontinuity Consider the piecewise function The point is a removable discontinuity. For this kind of discontinuity: The one-sided limit from the negative direction: and the one-sided limit from the positive direction: at both exist, are finite, and are equal to In other words, since the two one-sided limits exist and are equal, the limit of as approaches exists and is equal to this same value. If the actual value of is not equal to then is called a . This discontinuity can be removed to make continuous at or more precisely, the function is continuous at The term removable discontinuity is sometimes broadened to include a removable singularity, in which the limits in both directions exist and are equal, while the function is undefined at the point This use is an abuse of terminology b" https://en.wikipedia.org/wiki/Bioclimatology,"Bioclimatology is the interdisciplinary field of science that studies the interactions between the biosphere and the Earth's atmosphere on time scales of the order of seasons or longer (in contrast to biometeorology). Examples of relevant processes Climate processes largely control the distribution, size, shape and properties of living organisms on Earth. For instance, the general circulation of the atmosphere on a planetary scale broadly determines the location of large deserts or the regions subject to frequent precipitation, which, in turn, greatly determine which organisms can naturally survive in these environments. Furthermore, changes in climates, whether due to natural processes or to human interferences, may progressively modify these habitats and cause overpopulation or extinction of indigenous species. The biosphere, for its part, and in particular continental vegetation, which constitutes over 99% of the total biomass, has played a critical role in establishing and maintaining the chemical composition of the Earth's atmosphere, especially during the early evolution of the planet (See History of Earth for more details on this topic). Currently, the terrestrial vegetation exchanges some 60 billion tons of carbon with the atmosphere on an annual basis (through processes of carbon fixation and carbon respiration), thereby playing a critical role in the carbon cycle. On a global and annual basis, small imbalances between these two major fluxes, as do occur through changes in land cover and land use, contribute to the current increase in atmospheric carbon dioxide." 
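Referring back to the classification of discontinuities above, whose piecewise examples lost their formulas during extraction, standard illustrative instances at the point x0 = 0 (assumptions, not the article's own examples) are:

```latex
\[
  f(x) = \frac{\sin x}{x}\ (x \neq 0),\quad f(0) = 0 :
  \ \text{removable, since } \lim_{x \to 0^{-}} f = \lim_{x \to 0^{+}} f = 1 \neq f(0) ;
\]
\[
  g(x) = \operatorname{sgn}(x) :
  \ \text{jump, since } \lim_{x \to 0^{-}} g = -1 \neq 1 = \lim_{x \to 0^{+}} g ;
\]
\[
  h(x) = \sin\!\left(\tfrac{1}{x}\right)\ (x \neq 0) :
  \ \text{essential, since neither one-sided limit at } 0 \text{ exists.}
\]
```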
https://en.wikipedia.org/wiki/Analog%20device,"Analog devices are a combination of both analog machine and analog media that can together measure, record, reproduce, receive or broadcast continuous information, for example, the almost infinite number of grades of transparency, voltage, resistance, rotation, or pressure. In theory, the continuous information in an analog signal has an infinite number of possible values with the only limitation on resolution being the accuracy of the analog device. Analog media are materials with analog properties, such as photographic film, which are used in analog devices, such as cameras. Example devices Non-electrical There are notable non-electrical analog devices, such as some clocks (sundials, water clocks), the astrolabe, slide rules, the governor of a steam engine, the planimeter (a simple device that measures the surface area of a closed shape), Kelvin's mechanical tide predictor, acoustic rangefinders, servomechanisms (e.g. the thermostat), a simple mercury thermometer, a weighing scale, and the speedometer. Electrical The telautograph is an analogue precursor to the modern fax machine. It transmits electrical impulses recorded by potentiometers to stepping motors attached to a pen, thus being able to reproduce a drawing or signature made by the sender at the receiver's station. It was the first such device to transmit drawings to a stationary sheet of paper; previous inventions in Europe used rotating drums to make such transmissions. An analog synthesizer is a synthesizer that uses analog circuits and analog computer techniques to generate sound electronically. The analog television encodes television and transports the picture and sound information as an analog signal, that is, by varying the amplitude and/or frequencies of the broadcast signal. All systems preceding digital television, such as NTSC, PAL, and SECAM are analog television systems. An analog computer is a form of computer that uses electrical, mechanical, or hydraulic phenomena to model the probl" https://en.wikipedia.org/wiki/Bootstrapping%20%28electronics%29,"In the field of electronics, a technique where part of the output of a system is used at startup can be described as bootstrapping. A bootstrap circuit is one where part of the output of an amplifier stage is applied to the input, so as to alter the input impedance of the amplifier. When applied deliberately, the intention is usually to increase rather than decrease the impedance. In the domain of MOSFET circuits, bootstrapping is commonly used to mean pulling up the operating point of a transistor above the power supply rail. The same term has been used somewhat more generally for dynamically altering the operating point of an operational amplifier (by shifting both its positive and negative supply rail) in order to increase its output voltage swing (relative to the ground). In the sense used in this paragraph, bootstrapping an operational amplifier means ""using a signal to drive the reference point of the op-amp's power supplies"". A more sophisticated use of this rail bootstrapping technique is to alter the non-linear C/V characteristic of the inputs of a JFET op-amp in order to decrease its distortion. Input impedance In analog circuit designs, a bootstrap circuit is an arrangement of components deliberately intended to alter the input impedance of a circuit. Usually it is intended to increase the impedance, by using a small amount of positive feedback, usually over two stages. 
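The impedance effect just described can be put in numbers with a minimal sketch; the resistor value and gains below are illustrative assumptions, and apparent_impedance is a hypothetical helper name rather than an established formula or API.

```python
# Minimal sketch of the bootstrap effect on input impedance. A component of
# impedance R connected from the input to a node that follows the input with
# gain A carries only (1 - A) of the input voltage, so the source sees R/(1 - A).
def apparent_impedance(R, A):
    """Impedance seen looking into R when its far end is driven at A times Vin."""
    return R / (1.0 - A)

R = 10e3                     # 10 kOhm bias resistor (assumption)
for A in (0.0, 0.9, 0.99):   # no bootstrap, then increasingly tight bootstrapping
    print(f"A = {A:5.2f} -> apparent impedance = {apparent_impedance(R, A) / 1e3:8.1f} kOhm")

# The same expression with a large negative A (an inverting, amplified feedback
# path) gives a much smaller apparent impedance: the Miller effect discussed in
# the following passage.
print(f"A = -100 -> apparent impedance = {apparent_impedance(R, -100) / 1e3:8.3f} kOhm")
```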
This was often necessary in the early days of bipolar transistors, which inherently have quite a low input impedance. Because the feedback is positive, such circuits can suffer from poor stability and noise performance compared to ones that don't bootstrap. Negative feedback may alternatively be used to bootstrap an input impedance, causing the apparent impedance to be reduced. This is seldom done deliberately, however, and is normally an unwanted result of a particular circuit design. A well-known example of this is the Miller effect, in which an unavoidable feedback capacitance" https://en.wikipedia.org/wiki/Atwater%20system,"The Atwater system, named after Wilbur Olin Atwater, or derivatives of this system are used for the calculation of the available energy of foods. The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the 19th century and the early years of the 20th at Wesleyan University in Middletown, Connecticut. Its use has frequently been the cause of dispute, but few alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation. Derivation Available energy (as used by Atwater) is equivalent to the modern usage of the term metabolisable energy (ME). In most studies on humans, losses in secretions and gases are ignored. The gross energy (GE) of a food, as measured by bomb calorimetry is equal to the sum of the heats of combustion of the components – protein (GEp), fat (GEf) and carbohydrate (GEcho) (by difference) in the proximate system. Atwater considered the energy value of feces in the same way. By measuring coefficients of availability or in modern terminology apparent digestibility, Atwater derived a system for calculating faecal energy losses. where Dp, Df, and Dcho are respectively the digestibility coefficients of protein, fat and carbohydrate calculated as for the constituent in question. Urinary losses were calculated from the energy to nitrogen ratio in urine. Experimentally this was 7.9 kcal/g (33 kJ/g) urinary nitrogen and thus his equation for metabolisable energy became Gross energy values Atwater collected values from the literature and also measured the heat of combustion of proteins, fats and carbohydrates. These vary slightly depending on sources and Atwater derived weighted values for the gross heat of combustion of the protein, fat and carbohydrate in the typical mixed diet of his time. It has been argued that these weighted values are invalid for individual foods and for diets who" https://en.wikipedia.org/wiki/Actel%20SmartFusion,"SmartFusion is a family of microcontrollers with an integrated FPGA of Actel. The device includes an ARM Cortex-M3 hard processor core (with up to 512kB of flash and 64kB of RAM) and analog peripherals such as a multi-channel ADC and DACs in addition to their flash-based FPGA fabric. Models Development Hardware Actel also sells two development boards that include an SmartFusion chip. One is the SmartFusion Evaluation Kit which is a low cost board with an SmartFusion A2F200 and sold for $99. Another is the SmartFusion Development Kit which is a fully featured board with an SmartFusion A2F500 and is sold for $999 . Development tools Documentation The amount of documentation for all ARM chips is daunting, especially for newcomers. 
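Looking back at the Atwater derivation above, whose equations were stripped during extraction, the calculation can be sketched numerically. The gross heats of combustion, digestibility coefficients, and the 1.25 kcal per gram of protein urinary correction used below are the commonly quoted textbook figures, not values taken from this text, so treat them as assumptions of the sketch.

```python
# Sketch of an Atwater-style metabolisable-energy calculation. All factor
# values are commonly quoted textbook figures and are assumptions of this
# sketch, not numbers taken from the article.
GROSS_KCAL_PER_G = {"protein": 5.65, "fat": 9.40, "carbohydrate": 4.15}
DIGESTIBILITY    = {"protein": 0.92, "fat": 0.95, "carbohydrate": 0.97}
URINARY_LOSS_KCAL_PER_G_PROTEIN = 1.25   # derived from the 7.9 kcal/g urinary nitrogen figure

def metabolisable_energy(grams):
    """Metabolisable energy (kcal) of a food given grams of each macronutrient."""
    me = sum(g * GROSS_KCAL_PER_G[n] * DIGESTIBILITY[n] for n, g in grams.items())
    return me - grams.get("protein", 0.0) * URINARY_LOSS_KCAL_PER_G_PROTEIN

# Per gram this reproduces the familiar rounded Atwater factors of roughly
# 4, 9 and 4 kcal/g for protein, fat and carbohydrate:
for nutrient in GROSS_KCAL_PER_G:
    print(nutrient, round(metabolisable_energy({nutrient: 1.0}), 2), "kcal/g")
```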
The documentation for microcontrollers from past decades would easily be inclusive in a single document, but as chips have evolved so has the documentation grown. The total documentation is especially hard to grasp for all ARM chips since it consists of documents from the IC manufacturer (Actel) and documents from CPU core vendor (ARM Holdings). A typical top-down documentation tree is: manufacturer website, manufacturer marketing slides, manufacturer datasheet for the exact physical chip, manufacturer detailed reference manual that describes common peripherals and aspects of a physical chip family, ARM core generic user guide, ARM core technical reference manual, ARM architecture reference manual that describes the instruction set(s). SmartFusion documentation tree (top to bottom) SmartFusion website. SmartFusion marketing slides. SmartFusion datasheets. SmartFusion reference manuals. ARM core website. ARM core generic user guide. ARM core technical reference manual. ARM architecture reference manual. Actel has additional documents, such as: evaluation board user manuals, application notes, getting started guides, software library documents, errata, and more. See External Links section for links to official STM32 and ARM d" https://en.wikipedia.org/wiki/Traffic%20flow%20%28computer%20networking%29,"In packet switching networks, traffic flow, packet flow or network flow is a sequence of packets from a source computer to a destination, which may be another host, a multicast group, or a broadcast domain. RFC 2722 defines traffic flow as ""an artificial logical equivalent to a call or connection."" RFC 3697 defines traffic flow as ""a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that the source desires to label as a flow. A flow could consist of all packets in a specific transport connection or a media stream. However, a flow is not necessarily 1:1 mapped to a transport connection."" Flow is also defined in RFC 3917 as ""a set of IP packets passing an observation point in the network during a certain time interval."" Packet flow temporal efficiency can be affected by one-way delay (OWD) that is described as a combination of the following components: Processing delay (the time taken to process a packet in a network node) Queuing delay (the time a packet waits in a queue until it can be transmitted) Transmission delay (the amount of time necessary to push all the packet into the wire) Propagation delay (amount of time it takes the signal’s header to travel from the sender to the receiver) Utility for network administration Packets from one flow need to be handled differently from others, by means of separate queues in switches, routers and network adapters, to achieve traffic shaping, policing, fair queueing or quality of service. It is also a concept used in Queueing Network Analyzers (QNAs) or in packet tracing. Applied to Internet routers, a flow may be a host-to-host communication path, or a socket-to-socket communication identified by a unique combination of source and destination addresses and port numbers, together with transport protocol (for example, UDP or TCP). In the TCP case, a flow may be a virtual circuit, also known as a virtual connection or a byte stream. In packet switches, the fl" https://en.wikipedia.org/wiki/Avionics,"Avionics (a blend of aviation and electronics) are the electronic systems used on aircraft. 
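A small sketch of the one-way delay components listed in the traffic-flow passage above (processing, queuing, transmission and propagation delay), evaluated for a single hop. The link parameters and per-node times below are illustrative assumptions, not values from the article.

    def one_way_delay(packet_bits, link_bps, distance_m,
                      propagation_speed=2e8,   # roughly 2/3 of c in fibre or copper (assumed)
                      processing_s=50e-6,      # per-node processing time (assumed)
                      queuing_s=0.0):          # load dependent; 0 models an idle link
        transmission = packet_bits / link_bps          # time to push the whole packet onto the wire
        propagation = distance_m / propagation_speed   # time for the signal to travel the link
        return processing_s + queuing_s + transmission + propagation

    if __name__ == "__main__":
        # 1500-byte packet over a 100 Mbit/s link spanning 1000 km:
        d = one_way_delay(1500 * 8, 100e6, 1_000_000)
        print(f"{d * 1000:.3f} ms")   # ~0.05 ms processing + 0.12 ms transmission + 5 ms propagation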
Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform. History The term ""avionics"" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of ""aviation electronics"". Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy, so they required two-seat aircraft with a second crewman to tap on a telegraph key to spell out messages by Morse code. During World War I, AM voice two way radio sets were made possible in 1917 by the development of the triode vacuum tube, which were simple enough that the pilot in a single seat aircraft could use it while flying. Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its U.S. ally, particularly the magnet" https://en.wikipedia.org/wiki/Abacus,"The abacus (: abaci or abacuses), also called a counting frame, is a hand-operated calculating tool of unknown origin used since ancient times in the ancient Near East, Europe, China, and Russia, millennia before the adoption of the Hindu-Arabic numeral system. The abacus consists of a two-dimensional array of slidable beads (or similar objects). In their earliest designs, the beads could be loose on a flat surface or sliding in grooves. Later the beads were made to slide on rods and built into a frame, allowing faster manipulation. Each rod typically represents one digit of a multi-digit number laid out using a positional numeral system such as base ten (though some cultures used different numerical bases). Roman and East Asian abacuses use a system resembling bi-quinary coded decimal, with a top deck (containing one or two beads) representing fives and a bottom deck (containing four or five beads) representing ones. Natural numbers are normally used, but some allow simple fractional components (e.g. , , and in Roman abacus), and a decimal point can be imagined for fixed-point arithmetic. Any particular abacus design supports multiple methods to perform calculations, including addition, subtraction, multiplication, division, and square and cube roots. The beads are first arranged to represent a number, then are manipulated to perform a mathematical operation with another number, and their final position can be read as the result (or can be used as the starting number for subsequent operations). In the ancient world, abacuses were a practical calculating tool. 
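A short sketch of the bi-quinary bead layout described in the abacus passage above: each decimal digit on a rod is represented as (beads set in the top "fives" deck, beads set in the bottom "ones" deck), as on a Roman or East Asian abacus. The helper names are invented for the example.

    def digit_to_beads(digit: int) -> tuple[int, int]:
        if not 0 <= digit <= 9:
            raise ValueError("a single rod holds one decimal digit")
        return digit // 5, digit % 5   # (five-beads set, one-beads set)

    def number_to_rods(n: int) -> list[tuple[int, int]]:
        return [digit_to_beads(int(d)) for d in str(n)]

    if __name__ == "__main__":
        print(number_to_rods(1960))   # [(0, 1), (1, 4), (1, 1), (0, 0)]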
Although calculators and computers are commonly used today instead of abacuses, abacuses remain in everyday use in some countries. The abacus has an advantage of not requiring a writing implement and paper (needed for algorism) or an electric power source. Merchants, traders, and clerks in some parts of Eastern Europe, Russia, China, and Africa use abacuses. The abacus remains in common use as a scoring s" https://en.wikipedia.org/wiki/Univariate,"In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials. In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the ""variable"": a univariate time series is the series of values over time of a single quantity. Correspondingly, a ""multivariate time series"" characterizes the changing values over time of several quantities. In some cases, the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions. In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important measures (also key figures or parameters): Location & Variation. Measures of Location Scales (e.g. mode, median, arithmetic mean) describe in which area the data is arranged centrally. Measures of Variation (e.g. span, interquartile distance, standard deviation) describe how similar or different the data are scattered. See also Arity Bivariate (disambiguation) Multivariate (disambiguation) Univariate analysis Univariate binary model Univariate distribution" https://en.wikipedia.org/wiki/Instinet,"Instinet Incorporated is an institutional, agency-model broker that also serves as the independent equity trading arm of its parent, Nomura Group. It executes trades for asset management firms, hedge funds, insurance companies, mutual funds and pension funds. Headquartered in New York City, the company provides sales trading services and trading technologies such as the Newport EMS, algorithms, trade cost analytics, commission management, independent research and dark pools. However, Instinet is best known for being the first off-exchange trading alternatives, with its ""green screen"" terminals prevalent in the 1980s and 1990s, and as the founder of electronic communication networks, Chi-X Europe and Chi-X Global. According to industry research group Markit, in 2015 Instinet was the 3rd-largest cash equities broker in Europe. History Early history Instinet was founded by Jerome M. Pustilnik and Herbert R. Behrens and was incorporated in 1969 as Institutional Networks Corp. The founders aimed to compete with the New York Stock Exchange by means of computer links between major institutions, such as banks, mutual funds, and insurance companies, with no delays or intervening specialists. 
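A brief sketch of the two kinds of univariate key figures named in the univariate passage above, measures of location (mode, median, arithmetic mean) and measures of variation (span, interquartile distance, standard deviation), using only the standard library. The sample data are made up for illustration.

    import statistics

    data = [2, 3, 3, 5, 7, 8, 9, 12]

    location = {
        "mode": statistics.mode(data),
        "median": statistics.median(data),
        "mean": statistics.mean(data),
    }
    q1, _, q3 = statistics.quantiles(data, n=4)
    variation = {
        "span": max(data) - min(data),
        "interquartile distance": q3 - q1,
        "standard deviation": statistics.stdev(data),
    }

    print(location)    # where the data are centred
    print(variation)   # how widely the data scatter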
Through the Instinet system, which went live in December 1969, the company provided computer services and a communications network for the automated buying and selling of equity securities on an anonymous, confidential basis. Uptake of the platform was slow through the 1970s, and in 1983 Instinet turned to William A. ""Bill"" Lupien, a former Pacific Stock Exchange specialist, to run the company. Bill Lupien decided to market the system more aggressively to the broker community, rather than focus exclusively on the buyside as his predecessors had. To expand its market, Lupien brought on board Fredric W. Rittereiser, formerly of Troster Singer and the Sherwood Group, as President and Chief Operating Officer and David N. Rosensaft as Vice President (later SVP) of New Products Developme" https://en.wikipedia.org/wiki/CMS-2,"CMS-2 is an embedded systems programming language used by the United States Navy. It was an early attempt to develop a standardized high-level computer programming language intended to improve code portability and reusability. CMS-2 was developed primarily for the US Navy’s tactical data systems (NTDS). CMS-2 was developed by RAND Corporation in the early 1970s and stands for ""Compiler Monitor System"". The name ""CMS-2"" is followed in literature by a letter designating the type of target system. For example, CMS-2M targets Navy 16-bit processors, such as the AN/AYK-14. History CMS-2 was developed for FCPCPAC (Fleet Computer Programming Center - Pacific) in San Diego, CA. It was implemented by Computer Sciences Corporation in 1968 with design assistance from Intermetrics. The language continued to be developed, eventually supporting a number of computers including the AN/UYK-7 and AN/UYK-43 and UYK-20 and UYK-44 computers. Language features CMS-2 was designed to encourage program modularization, permitting independent compilation of portions of a total system. The language is statement oriented. The source is free-form and may be arranged for programming convenience. Data types include fixed-point, floating-point, boolean, character and status. Direct reference to, and manipulation of character and bit strings is permitted. Symbolic machine code may be included, known as direct code. Program structure A CMS-2 program is composed of statements. Statements are made up of symbols separated by delimiters. The categories of symbols include operators, identifiers, and constants. The operators are language primitives assigned by the compiler for specific operations or definitions in a program. Identifiers are unique names assigned by the programmer to data units, program elements and statement labels. Constants are known values that may be numeric, Hollerith strings, status values or Boolean. CMS-2 statements are free form and terminated by a dollar sign. A statemen" https://en.wikipedia.org/wiki/Liana,"A liana is a long-stemmed, woody vine that is rooted in the soil at ground level and uses trees, as well as other means of vertical support, to climb up to the canopy in search of direct sunlight. The word liana does not refer to a taxonomic grouping, but rather a habit of plant growth – much like tree or shrub. It comes from standard French liane, itself from an Antilles French dialect word meaning to sheave. Ecology Lianas are characteristic of tropical moist broadleaf forests (especially seasonal forests), but may be found in temperate rainforests and temperate deciduous forests. 
There are also temperate lianas, for example the members of the Clematis or Vitis (wild grape) genera. Lianas can form bridges amidst the forest canopy, providing arboreal animals with paths across the forest. These bridges can protect weaker trees from strong winds. Lianas compete with forest trees for sunlight, water and nutrients from the soil. Forests without lianas grow 150% more fruit; trees with lianas have twice the probability of dying. Some lianas attain to great length, such as Bauhinia sp. in Surinam which has grown as long as 600 meters. Hawkins has accepted a length of 1.5 km for an Entada phaseoloides. The longest monocot liana is Calamus manan (or Calamus ornatus) at exactly 240 meters. Lianas may be found in many different plant families. One way of distinguishing lianas from trees and shrubs is based on the stiffness, specifically, the Young's modulus of various parts of the stem. Trees and shrubs have young twigs and smaller branches which are quite flexible and older growth such as trunks and large branches which are stiffer. A liana often has stiff young growths and older, more flexible growth at the base of the stem. Habitat Lianas compete intensely with trees, greatly reducing tree growth and tree reproduction, greatly increasing tree mortality, preventing tree seedlings from establishing, altering the course of regeneration in forests, and ultimately affecti" https://en.wikipedia.org/wiki/UPC%20and%20NPC,"Usage Parameter Control (UPC) and Network Parameter Control (NPC) are functions that may be performed in a computer network. UPC may be performed at the input to a network ""to protect network resources from malicious as well as unintentional misbehaviour"". NPC is the same and done for the same reasons as UPC, but at the interface between two networks. UPC and NPC may involve traffic shaping, where traffic is delayed until it conforms to the expected levels and timing, or traffic policing, where non-conforming traffic is either discarded immediately, or reduced in priority so that it may be discarded downstream in the network if it would cause or add to congestion. Uses In ATM The actions for UPC and NPC in the ATM protocol are defined in ITU-T Recommendation I.371 Traffic control and congestion control in B ISDN and the ATM Forum's User-Network Interface (UNI) Specification. These provide a conformance definition, using a form of the leaky bucket algorithm called the Generic Cell Rate Algorithm (GCRA), which specifies how cells are checked for conformance with a cell rate, or its reciprocal emission interval, and jitter tolerance: either a Cell Delay Variation tolerance (CDVt) for testing conformance to the Peak Cell Rate (PCR) or a Burst Tolerance or Maximum Burst Size (MBS) for testing conformance to the Sustainable Cell Rate (SCR). UPC and NPC define a Maximum Burst Size (MBS) parameter on the average or Sustained Cell Rate (SCR), and a Cell Delay Variation tolerance (CDVt) on the Peak Cell Rate (PCR) at which the bursts are transmitted. This MBS can be derived from or used to derive the maximum variation between the arrival time of traffic in the bursts from the time it would arrive at the SCR, i.e. a jitter about that SCR. UPC and NPC are normally performed on a per Virtual Channel (VC) or per Virtual Path (VP) basis, i.e. the intervals are measured between cells bearing the same virtual channel identifier (VCI) and or virtual path identifier (VPI). 
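A hedged sketch of the conformance test referred to in the UPC/NPC passage above, using the virtual-scheduling formulation of the Generic Cell Rate Algorithm GCRA(I, L): I is the emission interval (the reciprocal of the cell rate) and L is the tolerance (CDVt when checking the PCR, burst tolerance when checking the SCR). The normative definitions are those in ITU-T I.371 and the ATM Forum UNI specification; the parameter values and arrival times below are illustrative assumptions.

    class GCRA:
        def __init__(self, increment: float, limit: float):
            self.I = increment    # expected inter-cell interval (1/rate)
            self.L = limit        # allowed jitter / burst tolerance
            self.tat = 0.0        # theoretical arrival time of the next cell

        def conforms(self, arrival_time: float) -> bool:
            if arrival_time < self.tat - self.L:
                return False      # cell arrived too early: non-conforming, TAT unchanged
            self.tat = max(arrival_time, self.tat) + self.I
            return True

    if __name__ == "__main__":
        policer = GCRA(increment=1.0, limit=0.5)   # 1 cell per time unit, tolerance 0.5 (assumed)
        for t in [0.0, 1.0, 1.6, 2.2, 2.4]:
            print(t, policer.conforms(t))          # the last two arrivals exceed the tolerance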
I" https://en.wikipedia.org/wiki/Nutraceutical,"A nutraceutical is a pharmaceutical alternative which claims physiological benefits. In the US, nutraceuticals are largely unregulated, as they exist in the same category as dietary supplements and food additives by the FDA, under the authority of the Federal Food, Drug, and Cosmetic Act. The word ""nutraceutical"" is a portmanteau term, blending the words ""nutrition"" and ""pharmaceutical"". Regulation Nutraceuticals are treated differently in different jurisdictions. Canada Under Canadian law, a nutraceutical can either be marketed as a food or as a drug; the terms ""nutraceutical"" and ""functional food"" have no legal distinction, referring to ""a product isolated or purified from foods that is generally sold in medicinal forms not usually associated with food [and] is demonstrated to have a physiological benefit or provide protection against chronic disease."" United States The term ""nutraceutical"" is not defined by US law. Depending on its ingredients and the claims with which it is marketed, a product is regulated as a drug, dietary supplement, food ingredient, or food. Other sources In the global market, there are significant product quality issues. Nutraceuticals from the international market may claim to use organic or exotic ingredients, yet the lack of regulation may compromise the safety and effectiveness of products. Companies looking to create a wide profit margin may create unregulated products overseas with low-quality or ineffective ingredients. Classification of nutraceuticals Nutraceuticals are products derived from food sources that are purported to provide extra health benefits, in addition to the basic nutritional value found in foods. Depending on the jurisdiction, products may claim to prevent chronic diseases, improve health, delay the aging process, increase life expectancy, or support the structure or function of the body. Dietary supplements In the United States, the Dietary Supplement Health and Education Act (DSHEA) of 1994 defined the t" https://en.wikipedia.org/wiki/Inter%20University%20Center%20for%20Bioscience,"Inter University Centre for Bioscience (IUCB) was established at the School of Life Sciences, Kannur University, Kerala, India, by the Higher Education Department, Government of Kerala, to be a global center of excellence for research in biological sciences. Former Vice-President of India Mohammad Hamid Ansari inaugurated the centre on July 10, 2010. IUCB also have a herbal garden in its premises named after E.K. Janaki Ammal, renowned ethnobotanist from Thalassery who was the former Director-General of the Botanical Survey of India. The School of Life Sciences together with Inter University Center for Bioscience have active research collaborations with different research Institutes and industries across the country. Research Highlights" https://en.wikipedia.org/wiki/List%20of%20NP-complete%20problems,"This is a list of some of the more commonly known problems that are NP-complete when expressed as decision problems. As there are hundreds of such problems known, this list is in no way comprehensive. Many problems of this type can be found in . Graphs and hypergraphs Graphs occur frequently in everyday applications. Examples include biological or social networks, which contain hundreds, thousands and even billions of nodes in some cases (e.g. Facebook or LinkedIn). 
1-planarity 3-dimensional matching Bandwidth problem Bipartite dimension Capacitated minimum spanning tree Route inspection problem (also called Chinese postman problem) for mixed graphs (having both directed and undirected edges). The program is solvable in polynomial time if the graph has all undirected or all directed edges. Variants include the rural postman problem. Clique cover problem Clique problem Complete coloring, a.k.a. achromatic number Cycle rank Degree-constrained spanning tree Domatic number Dominating set, a.k.a. domination number NP-complete special cases include the edge dominating set problem, i.e., the dominating set problem in line graphs. NP-complete variants include the connected dominating set problem and the maximum leaf spanning tree problem. Feedback vertex set Feedback arc set Graph coloring Graph homomorphism problem Graph partition into subgraphs of specific types (triangles, isomorphic subgraphs, Hamiltonian subgraphs, forests, perfect matchings) are known NP-complete. Partition into cliques is the same problem as coloring the complement of the given graph. A related problem is to find a partition that is optimal terms of the number of edges between parts. Grundy number of a directed graph. Hamiltonian completion Hamiltonian path problem, directed and undirected. Graph intersection number Longest path problem Maximum bipartite subgraph or (especially with weighted edges) maximum cut. Maximum common subgraph isomorphism problem Maximum independent set Maximum Induced pat" https://en.wikipedia.org/wiki/Protocol%20engineering,"Protocol engineering is the application of systematic methods to the development of communication protocols. It uses many of the principles of software engineering, but it is specific to the development of distributed systems. History When the first experimental and commercial computer networks were developed in the 1970s, the concept of protocols was not yet well developed. These were the first distributed systems. In the context of the newly adopted layered protocol architecture (see OSI model), the definition of the protocol of a specific layer should be such that any entity implementing that specification in one computer would be compatible with any other computer containing an entity implementing the same specification, and their interactions should be such that the desired communication service would be obtained. On the other hand, the protocol specification should be abstract enough to allow different choices for the implementation on different computers. It was recognized that a precise specification of the expected service provided by the given layer was important. It is important for the verification of the protocol, which should demonstrate that the communication service is provided if both protocol entities implement the protocol specification correctly. This principle was later followed during the standardization of the OSI protocol stack, in particular for the transport layer. It was also recognized that some kind of formalized protocol specification would be useful for the verification of the protocol and for developing implementations, as well as test cases for checking the conformance of an implementation against the specification. While initially mainly finite-state machine were used as (simplified) models of a protocol entity, in the 1980s three formal specification languages were standardized, two by ISO and one by ITU. The latter, called SDL, was later used in industry and has been merged with UML state machines. 
Principles The followi" https://en.wikipedia.org/wiki/Relative%20locality,"Relative locality is a proposed physical phenomenon in which different observers would disagree on whether two space-time events are coincident. This is in contrast to special relativity and general relativity in which different observers may disagree on whether two distant events occur at the same time but if an observer infers that two events are at the same spacetime position then all observers will agree. When a light signal exchange procedure is used to infer spacetime coordinates of distant events from the travel time of photons, information about the photon's energy is discarded with the assumption that the frequency of light doesn't matter. It is also usually assumed that distant observers construct the same spacetime. This assumption of absolute locality implies that momentum space is flat. However research into quantum gravity has indicated that momentum space might be curved which would imply relative locality. To regain an absolute arena for invariance one would combine spacetime and momentum space into a phase space." https://en.wikipedia.org/wiki/Power%20cycling,"Power cycling is the act of turning a piece of equipment, usually a computer, off and then on again. Reasons for power cycling include having an electronic device reinitialize its set of configuration parameters or recover from an unresponsive state of its mission critical functionality, such as in a crash or hang situation. Power cycling can also be used to reset network activity inside a modem. It can also be among the first steps for troubleshooting an issue. Overview Power cycling can be done manually, usually using a switch on the device to be cycled; automatically, through some type of device, system, or network management monitoring and control; or by remote control; through a communication channel. In the data center environment, remote control power cycling can usually be done through a power distribution unit, over TCP/IP. In the home environment, this can be done through home automation powerline communications or IP protocols. Most Internet Service Providers publish a ""how-to"" on their website showing their customers the correct procedure to power cycle their devices. Power cycling is a standard diagnostic procedure usually performed first when the computer freezes. However, frequently power cycling a computer can cause thermal stress. Reset has an equal effect on the software but may be less problematic for the hardware as power is not interrupted. Historical uses On all Apollo missions to the moon, the landing radar was required to acquire the surface before a landing could be attempted. But on Apollo 14, the landing radar was unable to lock on. Mission control told the astronauts to cycle the power. They did, the radar locked on just in time, and the landing was completed. During the Rosetta mission to comet 67P/Churyumov–Gerasimenko, the Philae lander did not return the expected telemetry on awakening after arrival at the comet. The problem was diagnosed as ""somehow a glitch in the electronics"", engineers cycled the power, and the lander aw" https://en.wikipedia.org/wiki/Sonic%20artifact,"In sound and music production, sonic artifact, or simply artifact, refers to sonic material that is accidental or unwanted, resulting from the editing or manipulation of a sound. 
Types Because there are always technical restrictions in the way a sound can be recorded (in the case of acoustic sounds) or designed (in the case of synthesised or processed sounds), sonic errors often occur. These errors are termed artifacts (or sound/sonic artifacts), and may be pleasing or displeasing. A sonic artifact is sometimes a type of digital artifact, and in some cases is the result of data compression (not to be confused with dynamic range compression, which also may create sonic artifacts). Often an artifact is deliberately produced for creative reasons. For example to introduce a change in timbre of the original sound or to create a sense of cultural or stylistic context. A well-known example is the overdriving of an electric guitar or electric bass signal to produce a clipped, distorted guitar tone or fuzz bass. Editing processes that deliberately produce artifacts often involve technical experimentation. A good example of the deliberate creation of sonic artifacts is the addition of grainy pops and clicks to a recent recording in order to make it sound like a vintage vinyl record. Flanging and distortion were originally regarded as sonic artifacts; as time passed they became a valued part of pop music production methods. Flanging is added to electric guitar and keyboard parts. Other magnetic tape artifacts include wow, flutter, saturation, hiss, noise, and print-through. It is valid to consider the genuine surface noise such as pops and clicks that are audible when a vintage vinyl recording is played back or recorded onto another medium as sonic artifacts, although not all sonic artifacts must contain in their meaning or production a sense of ""past"", more so a sense of ""by-product"". Other vinyl record artifacts include turntable rumble, ticks, crackles and groove ec" https://en.wikipedia.org/wiki/Very%20High%20Speed%20Integrated%20Circuit%20Program,"The Very High Speed Integrated Circuit (VHSIC) Program was a United States Department of Defense (DOD) research program that ran from 1980 to 1990. Its mission was to research and develop very high-speed integrated circuits for the United States Armed Forces. VHSIC was launched in 1980 as a joint tri-service (Army/Navy/Air Force) program. The program led to advances in integrated circuit materials, lithography, packaging, testing, and algorithms, and created numerous computer-aided design (CAD) tools. A well-known part of the program's contribution is VHDL (VHSIC Hardware Description Language), a hardware description language (HDL). The program also redirected the military's interest in GaAs ICs back toward the commercial mainstream of CMOS circuits. More than $1 billion in total was spent for the VHSIC program for silicon integrated circuit technology development. A DARPA project which ran concurrently, the VLSI Project, having begun two years earlier in 1978, contributed BSD Unix, the RISC processor, the MOSIS research design fab, and greatly furthered the Mead and Conway revolution in VLSI design automation. By contrast, the VHSIC program was comparatively less cost-effective for the funds invested over a contemporaneous time frame, though the projects had different final objectives and are not entirely comparable for that reason. The program didn't succeed at producing high-speed ICs as commercial processors by that time were well ahead of what the DOD expected to produce." https://en.wikipedia.org/wiki/Biological%20system,"A biological system is a complex network which connects several biologically relevant entities. 
Biological organization spans several scales and are determined based different structures depending on what the system is. Examples of biological systems at the macro scale are populations of organisms. On the organ and tissue scale in mammals and other animals, examples include the circulatory system, the respiratory system, and the nervous system. On the micro to the nanoscopic scale, examples of biological systems are cells, organelles, macromolecular complexes and regulatory pathways. A biological system is not to be confused with a living system, such as a living organism. Organ and tissue systems These specific systems are widely studied in human anatomy and are also present in many other animals. Respiratory system: the organs used for breathing, the pharynx, larynx, bronchi, lungs and diaphragm. Digestive system: digestion and processing food with salivary glands, oesophagus, stomach, liver, gallbladder, pancreas, intestines, rectum and anus. Cardiovascular system (heart and circulatory system): pumping and channeling blood to and from the body and lungs with heart, blood and blood vessels. Urinary system: kidneys, ureters, bladder and urethra involved in fluid balance, electrolyte balance and excretion of urine. Integumentary system: skin, hair, fat, and nails. Skeletal system: structural support and protection with bones, cartilage, ligaments and tendons. Endocrine system: communication within the body using hormones made by endocrine glands such as the hypothalamus, pituitary gland, pineal body or pineal gland, thyroid, parathyroid and adrenals, i.e., adrenal glands. Lymphatic system: structures involved in the transfer of lymph between tissues and the blood stream; includes the lymph and the nodes and vessels. The lymphatic system includes functions including immune responses and development of antibodies. Immune system: protects the organism from" https://en.wikipedia.org/wiki/Test%20compression,"Test compression is a technique used to reduce the time and cost of testing integrated circuits. The first ICs were tested with test vectors created by hand. It proved very difficult to get good coverage of potential faults, so Design for testability (DFT) based on scan and automatic test pattern generation (ATPG) were developed to explicitly test each gate and path in a design. These techniques were very successful at creating high-quality vectors for manufacturing test, with excellent test coverage. However, as chips got bigger and more complex the ratio of logic to be tested per pin increased dramatically, and the volume of scan test data started causing a significant increase in test time, and required tester memory. This raised the cost of testing. Test compression was developed to help address this problem. When an ATPG tool generates a test for a fault, or a set of faults, only a small percentage of scan cells need to take specific values. The rest of the scan chain is don't care, and are usually filled with random values. Loading and unloading these vectors is not a very efficient use of tester time. Test compression takes advantage of the small number of significant values to reduce test data and test time. In general, the idea is to modify the design to increase the number of internal scan chains, each of shorter length. These chains are then driven by an on-chip decompressor, usually designed to allow continuous flow decompression where the internal scan chains are loaded as the data is delivered to the decompressor. Many different decompression methods can be used. 
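A rough, illustrative estimate related to the test-compression passage above: splitting the same scan cells across many short internal chains driven by an on-chip decompressor, instead of a few long external chains, cuts the number of shift cycles per pattern, which is where the test-time saving comes from. All counts and the shift frequency are assumptions for the sketch, not figures from the article.

    def scan_test_cost(total_cells, patterns, chains, shift_mhz=10.0):
        cycles_per_pattern = -(-total_cells // chains)    # longest chain length (ceiling division)
        cycles = cycles_per_pattern * patterns
        return cycles, cycles / (shift_mhz * 1e6)          # total shift cycles, seconds of shifting

    if __name__ == "__main__":
        cells, patterns = 2_000_000, 10_000
        no_comp = scan_test_cost(cells, patterns, chains=8)     # 8 external scan chains
        comp = scan_test_cost(cells, patterns, chains=800)      # 800 short internal chains via a decompressor
        print(no_comp, comp, no_comp[0] / comp[0])              # roughly 100x fewer shift cycles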
One common choice is a linear finite state machine, where the compressed stimuli are computed by solving linear equations corresponding to internal scan cells with specified positions in partially specified test patterns. Experimental results show that for industrial circuits with test vectors and responses with very low fill rates, ranging from 3% to 0.2%, the test compression" https://en.wikipedia.org/wiki/Fecundity%20selection,"Fecundity selection, also known as fertility selection, is the fitness advantage resulting from selection on traits that increases the number of offspring (i.e. fecundity). Charles Darwin formulated the theory of fecundity selection between 1871 and 1874 to explain the widespread evolution of female-biased sexual size dimorphism (SSD), where females were larger than males. Along with the theories of natural selection and sexual selection, fecundity selection is a fundamental component of the modern theory of Darwinian selection. Fecundity selection is distinct in that large female size relates to the ability to accommodate more offspring, and a higher capacity for energy storage to be invested in reproduction. Darwin's theory of fecundity selection predicts the following: Fecundity depends on variation in female size, which is associated with fitness. Strong fecundity selection favors large female size, which creates asymmetrical female-biased sexual size dimorphism. Although sexual selection and fecundity selection are distinct, it still may be difficult to interpret whether sexual dimorphism in nature is due to fecundity selection, or to sexual selection. Examples of fecundity selection in nature include self-incompatibility flowering plants, where pollen of some potential mates are not effective in forming seed, as well as bird, lizard, fly, and butterfly and moth species that are spread across an ecological gradient. Moreau-Lack's rule Moreau (1944) suggested that in more seasonal environments or higher latitudes, fecundity depends on high mortality. Lack (1954) suggested differential food availability and management across latitudes play a role in offspring and parental fitness. Lack also highlighted that more opportunities for parents to collect food due to an increase in day-length towards the poles is an advantage. This means that moderately higher altitudes provide more successful conditions to produce more offspring. However, extreme day-lengths (" https://en.wikipedia.org/wiki/Algorithmic%20state%20machine,"The algorithmic state machine (ASM) is a method for designing finite state machines (FSMs) originally developed by Thomas E. Osborne at the University of California, Berkeley (UCB) since 1960, introduced to and implemented at Hewlett-Packard in 1968, formalized and expanded since 1967 and written about by Christopher R. Clare since 1970. It is used to represent diagrams of digital integrated circuits. The ASM diagram is like a state diagram but more structured and, thus, easier to understand. An ASM chart is a method of describing the sequential operations of a digital system. ASM method The ASM method is composed of the following steps: 1. Create an algorithm, using pseudocode, to describe the desired operation of the device. 2. Convert the pseudocode into an ASM chart. 3. Design the datapath based on the ASM chart. 4. Create a detailed ASM chart based on the datapath. 5. Design the control logic based on the detailed ASM chart. 
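A minimal sketch of how the ASM elements described in the algorithmic-state-machine passage map to code: each state box carries its Moore outputs, a decision box tests an input to choose the exit path, and a conditional output box adds a Mealy output on one branch. The tiny two-state "loader" controller is invented purely for illustration.

    STATES = {
        # state: (moore_outputs, decision)
        # decision takes the tested condition and returns (next_state, conditional_outputs)
        "IDLE": ({"ready": 1},
                 lambda start: ("LOAD", {"clear_count": 1}) if start else ("IDLE", {})),
        "LOAD": ({"load_en": 1},
                 lambda done: ("IDLE", {}) if done else ("LOAD", {})),
    }

    def step(state, condition):
        moore, decide = STATES[state]
        next_state, mealy = decide(condition)
        return next_state, {**moore, **mealy}

    if __name__ == "__main__":
        s = "IDLE"
        for cond in [0, 1, 0, 0, 1]:       # assert "start", then "done" a few cycles later
            s, outputs = step(s, cond)
            print(s, outputs)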
ASM chart An ASM chart consists of an interconnection of four types of basic elements: state name, state box, decision box, and conditional outputs box. An ASM state, represented as a rectangle, corresponds to one state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box. State Name: The name of the state is indicated inside the circle and the circle is placed in the top left corner or the name is placed without the circle. State Box: The output of the state is indicated inside the rectangle box Decision Box: A diamond indicates that the stated condition/expression is to be tested and the exit path is to be chosen accordingly. The condition expression contains one or more inputs to the FSM (Finite State Machine). An ASM condition check, indicated by a diamond with one input and two outputs (for true and false), is used to conditionally transfer between two State Boxes, to another Decision Box, or to a Conditional Output Box. The decision box contains the stated condition expressio" https://en.wikipedia.org/wiki/Thermal%20simulations%20for%20integrated%20circuits,"Miniaturizing components has always been a primary goal in the semiconductor industry because it cuts production cost and lets companies build smaller computers and other devices. Miniaturization, however, has increased dissipated power per unit area and made it a key limiting factor in integrated circuit performance. Temperature increase becomes relevant for relatively small-cross-sections wires, where it may affect normal semiconductor behavior. Besides, since the generation of heat is proportional to the frequency of operation for switching circuits, fast computers have larger heat generation than slow ones, an undesired effect for chips manufacturers. This article summaries physical concepts that describe the generation and conduction of heat in an integrated circuit, and presents numerical methods that model heat transfer from a macroscopic point of view. Generation and transfer of heat Fourier's law At macroscopic level, Fourier's law states a relation between the transmitted heat per unit time per unit area and the gradient of temperature: Where is the thermal conductivity, [W·m−1 K−1]. Joule heating Electronic systems work based on current and voltage signals. Current is the flow of charged particles through the material and these particles (electrons or holes), interact with the lattice of the crystal losing its energy which is released in form of heat. Joule Heating is a predominant mechanism for heat generation in integrated circuits and is an undesired effect in most of the cases. For an ohmic material, it has the form: Where is the current density in [A·m−2], is the specific electric resistivity in [·m] and is the generated heat per unit volume in [W·m−3]. Heat-transfer equation The governing equation of the physics of the heat transfer problem relates the flux of heat in space, its variation in time and the generation of power by the following expression: Where is the thermal conductivity, is the density of the medium, is the s" https://en.wikipedia.org/wiki/Index%20of%20optics%20articles,"Optics is the branch of physics which involves the behavior and properties of light, including its interactions with matter and the construction of instruments that use or detect it. Optics usually describes the behavior of visible, ultraviolet, and infrared light. 
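A hedged numerical sketch of the macroscopic model outlined in the thermal-simulation passage above: one-dimensional heat conduction with a uniform Joule-heating source, rho*c_p*dT/dt = k*d2T/dx2 + q, with q = J^2 * resistivity, solved by an explicit finite-difference (FTCS) scheme. The copper-like material values, geometry and current density are illustrative assumptions, not taken from the article.

    def simulate(n=101, length=1e-3, k=400.0, rho=8960.0, cp=385.0,   # copper-like values (assumed)
                 current_density=1e9, resistivity=1.7e-8, t_end=1e-4):
        dx = length / (n - 1)
        alpha = k / (rho * cp)
        dt = 0.4 * dx * dx / alpha           # below the FTCS stability limit dx^2 / (2*alpha)
        q = current_density ** 2 * resistivity   # Joule heating per unit volume, W/m^3
        T = [300.0] * n                      # start at ambient; both ends held at 300 K
        t = 0.0
        while t < t_end:
            Tn = T[:]
            for i in range(1, n - 1):
                Tn[i] = T[i] + dt * (alpha * (T[i - 1] - 2 * T[i] + T[i + 1]) / dx ** 2
                                     + q / (rho * cp))
            T, t = Tn, t + dt
        return max(T)

    if __name__ == "__main__":
        print(f"peak temperature ~ {simulate():.2f} K")   # a small rise above the 300 K boundaries for these values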
Because light is an electromagnetic wave, other forms of electromagnetic radiation such as X-rays, microwaves, and radio waves exhibit similar properties. A B C D E F G H I J K L M N O P Q R S T U W Z See also :Category:Optical components :Category:Optical materials" https://en.wikipedia.org/wiki/Trace%20fossil%20classification,"Trace fossils are classified in various ways for different purposes. Traces can be classified taxonomically (by morphology), ethologically (by behavior), and toponomically, that is, according to their relationship to the surrounding sedimentary layers. Except in the rare cases where the original maker of a trace fossil can be identified with confidence, phylogenetic classification of trace fossils is an unreasonable proposition. Taxonomic classification The taxonomic classification of trace fossils parallels the taxonomic classification of organisms under the International Code of Zoological Nomenclature. In trace fossil nomenclature a Latin binomial name is used, just as in animal and plant taxonomy, with a genus and specific epithet. However, the binomial names are not linked to an organism, but rather just a trace fossil. This is due to the rarity of association between a trace fossil and a specific organism or group of organisms. Trace fossils are therefore included in an ichnotaxon separate from Linnaean taxonomy. When referring to trace fossils, the terms ichnogenus and ichnospecies parallel genus and species respectively. The most promising cases of phylogenetic classification are those in which similar trace fossils show details complex enough to deduce the makers, such as bryozoan borings, large trilobite trace fossils such as Cruziana, and vertebrate footprints. However, most trace fossils lack sufficiently complex details to allow such classification. Ethologic classification The Seilacherian System Adolf Seilacher was the first to propose a broadly accepted ethological basis for trace fossil classification. He recognized that most trace fossils are created by animals in one of five main behavioural activities, and named them accordingly: Cubichnia are the traces of organisms left on the surface of a soft sediment. This behaviour may simply be resting as in the case of a starfish, but might also evidence the hiding place of prey, or even the ambus" https://en.wikipedia.org/wiki/Akira%20Yoshizawa,"was a Japanese origamist, considered to be the grandmaster of origami. He is credited with raising origami from a craft to a living art. According to his own estimation made in 1989, he created more than 50,000 models, of which only a few hundred designs were presented as diagrams in his 18 books. Yoshizawa acted as an international cultural ambassador for Japan throughout his career. In 1983, Emperor Hirohito awarded him the Order of the Rising Sun, 5th class, one of the highest honors bestowed in Japan. Life Yoshizawa was born on 14 March 1911, in Kaminokawa, Japan, to the family of a dairy farmer. When he was a child, he took pleasure in teaching himself origami. He moved into a factory job in Tokyo when he was 13 years old. His passion for origami was rekindled in his early 20s, when he was promoted from factory worker to technical draftsman. His new job was to teach junior employees geometry. Yoshizawa used the traditional art of origami to understand and communicate geometrical problems. In 1937, he left factory work to pursue origami full-time. 
During the next 20 years, he lived in total poverty, earning his living by door-to-door selling of (a Japanese preserved condiment that is usually made of seaweed). During World War II, Yoshizawa served in the army medical corps in Hong Kong. He made origami models to cheer up the sick patients, but eventually fell ill himself and was sent back to Japan. His origami work was creative enough to be included in the 1944 book Origami Shuko, by . However, it was his work for the January 1952 issue of the magazine Asahi Graph that launched his career, which included the 12 zodiac signs commissioned by a magazine. In 1954, his first monograph, Atarashii Origami Geijutsu (New Origami Art) was published. In this work, he established the Yoshizawa–Randlett system of notation for origami folds (a system of symbols, arrows and diagrams), which has become the standard for most paperfolders. The publishing of this book helped " https://en.wikipedia.org/wiki/Kleptoprotein,"A kleptoprotein is a protein which is not encoded in the genome of the organism which uses it, but instead is obtained through diet from a prey organism. Importantly, a kleptoprotein must maintain its function and be mostly or entirely undigested, drawing a distinction from proteins that are digested for nutrition, which become destroyed and non-functional in the process. This phenomenon was first reported in the bioluminescent fish Parapriacanthus, which has specialized light organs adapted towards counter-illumination, but obtains the luciferase enzyme within these organs from bioluminescent ostracods, including Cypridina noctiluca or Vargula hilgendorfii. See also Kleptoplasty" https://en.wikipedia.org/wiki/Thermodynamic%20limit,"In statistical mechanics, the thermodynamic limit or macroscopic limit, of a system is the limit for a large number of particles (e.g., atoms or molecules) where the volume is taken to grow in proportion with the number of particles. The thermodynamic limit is defined as the limit of a system with a large volume, with the particle density held fixed. In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas. Note that not all types of thermal fluctuations disappear in the thermodynamic limit—only the fluctuations in system variables cease to be important. There will still be detectable fluctuations (typically at microscopic scales) in some physically observable quantities, such as microscopic spatial density fluctuations in a gas scatter light (Rayleigh scattering) motion of visible particles (Brownian motion) electromagnetic field fluctuations, (blackbody radiation in free space, Johnson–Nyquist noise in wires) Mathematically an asymptotic analysis is performed when considering the thermodynamic limit. Origin The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, and so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/N1/2. 
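A quick numerical check of the 1/N^(1/2) scaling stated in the thermodynamic-limit passage above: the relative fluctuation of a sum of N independent per-particle contributions shrinks as N grows. The exponential distribution for the individual contributions is an arbitrary choice for the sketch.

    import random
    import statistics

    def relative_fluctuation(n_particles, samples=2000):
        totals = [sum(random.expovariate(1.0) for _ in range(n_particles))
                  for _ in range(samples)]
        return statistics.stdev(totals) / statistics.mean(totals)

    if __name__ == "__main__":
        for n in (10, 100, 1000):
            print(n, round(relative_fluctuation(n), 4))   # roughly 1/sqrt(n): about 0.32, 0.10, 0.03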
Thus for a macroscopic volume with perhaps the Avogadro number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volu" https://en.wikipedia.org/wiki/End-to-end%20delay,"End-to-end delay or one-way delay (OWD) refers to the time taken for a packet to be transmitted across a network from source to destination. It is a common term in IP network monitoring, and differs from round-trip time (RTT) in that only path in the one direction from source to destination is measured. Measurement The ping utility measures the RTT, that is, the time to go and come back to a host. Half the RTT is often used as an approximation of OWD but this assumes that the forward and back paths are the same in terms of congestion, number of hops, or quality of service (QoS). This is not always a good assumption. To avoid such problems, the OWD may be measured directly. Direct OWDs may be measured between two points A and B of an IP network through the use of synchronized clocks; A records a timestamp on the packet and sends it to B, which notes the receiving time and calculates the OWD as their difference. The transmitted packets need to be identified at source and destination in order to avoid packet loss or packet reordering. However, this method suffers several limitations, such as requiring intensive cooperation between both parties, and the accuracy of the measured delay is subject to the synchronization precision. The Minimum-Pairs Protocol is an example by which several cooperating entities, A, B, and C, could measure OWDs between one of them and a fourth less cooperative one (e.g., between B and X). Estimate Transmission between two network nodes may be asymmetric, and the forward and reverse delays are not equal. Half the RTT value is the average of the forward and reverse delays and so may be sometimes used as an approximation to the end-to-end delay. The accuracy of such an estimate depends on the nature of delay distribution in both directions. As delays in both directions become more symmetric, the accuracy increases. The probability mass function (PMF) of absolute error, E, between the smaller of the forward and reverse OWDs and their average " https://en.wikipedia.org/wiki/List%20of%20MOSFET%20applications,"The MOSFET (metal–oxide–semiconductor field-effect transistor) is a type of insulated-gate field-effect transistor (IGFET) that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET is the basic building block of most modern electronics, and the most frequently manufactured device in history, with an estimated total of 13sextillion (1.3 × 1022) MOSFETs manufactured between 1960 and 2018. It is the most common semiconductor device in digital and analog circuits, and the most common power device. It was the first truly compact transistor that could be miniaturized and mass-produced for a wide range of uses. MOSFET scaling and miniaturization has been driving the rapid exponential growth of electronic semiconductor technology since the 1960s, and enable high-density integrated circuits (ICs) such as memory chips and microprocessors. MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits. 
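A short sketch of the two estimates discussed in the end-to-end delay passage above: the direct one-way delay from synchronised send and receive timestamps, versus half the round-trip time, which is exact only when the forward and reverse paths are symmetric. The timestamps and path delays below are invented for the example.

    def owd_from_timestamps(sent_at: float, received_at: float) -> float:
        # requires the clocks at source and destination to be synchronised
        return received_at - sent_at

    def owd_from_rtt(rtt: float) -> float:
        # approximation: assumes forward and reverse delays are equal
        return rtt / 2.0

    if __name__ == "__main__":
        forward, reverse = 0.030, 0.050                       # asymmetric path, in seconds (assumed)
        print(round(owd_from_timestamps(10.000, 10.030), 3))  # 0.03, the true forward OWD
        print(round(owd_from_rtt(forward + reverse), 3))      # 0.04, biased by the asymmetry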
Discrete MOSFET devices are widely used in applications such as switch mode power supplies, variable-frequency drives, and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems. Integrated circuits The MOSFET is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. Planar process, develop" https://en.wikipedia.org/wiki/Download,"In computer networks, download means to receive data from a remote system, typically a server such as a web server, an FTP server, an email server, or other similar systems. This contrasts with uploading, where data is sent to a remote server. A download is a file offered for downloading or that has been downloaded, or the process of receiving such a file. Definition Downloading generally transfers entire files for local storage and later use, as contrasted with streaming, where the data is used nearly immediately, while the transmission is still in progress, and which may not be stored long-term. Websites that offer streaming media or media displayed in-browser, such as YouTube, increasingly place restrictions on the ability of users to save these materials to their computers after they have been received. Downloading in computer networks involves retrieving data from a remote system, like a web server, FTP server, or email server, unlike uploading where data is sent to a remote server. A download can refer to a file made available for retrieval or one that has been received, encompassing the entire process of obtaining such a file. Downloading is not the same as data transfer; moving or copying data between two storage devices would be data transfer, but receiving data from the Internet or BBS is downloading. Copyright Downloading media files involves the use of linking and framing Internet material, and relates to copyright law. Streaming and downloading can involve making copies of works that infringe on copyrights or other rights, and organizations running such websites may become vicariously liable for copyright infringement by causing others to do so. Open hosting servers allows people to upload files to a central server, which incurs bandwidth and hard disk space costs due to files generated with each download. Anonymous and open hosting servers make it difficult to hold hosts accountable. Taking legal action against the technologies behind unauthoriz" https://en.wikipedia.org/wiki/Kaczmarz%20method,"The Kaczmarz method or Kaczmarz's algorithm is an iterative algorithm for solving linear equation systems . It was first discovered by the Polish mathematician Stefan Kaczmarz, and was rediscovered in the field of image reconstruction from projections by Richard Gordon, Robert Bender, and Gabor Herman in 1970, where it is called the Algebraic Reconstruction Technique (ART). ART includes the positivity constraint, making it nonlinear. The Kaczmarz method is applicable to any linear system of equations, but its computational advantage relative to other methods depends on the system being sparse. 
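A minimal sketch of the Kaczmarz iteration for a linear system A x = b, as introduced in the passage above: sweep over the rows a_i and project the current estimate onto the hyperplane a_i . x = b_i, i.e. x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i. The real-valued toy system is invented for illustration; starting from the zero vector yields the minimum-norm solution of a consistent system.

    import numpy as np

    def kaczmarz(A, b, sweeps=200):
        A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(sweeps):
            for i in range(A.shape[0]):
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]   # project onto the i-th hyperplane
        return x

    if __name__ == "__main__":
        A = [[3.0, 1.0], [1.0, 2.0]]
        b = [9.0, 8.0]
        print(kaczmarz(A, b))   # converges to [2., 3.]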
It has been demonstrated to be superior, in some biomedical imaging applications, to other methods such as the filtered backprojection method. It has many applications ranging from computed tomography (CT) to signal processing. It can be obtained also by applying to the hyperplanes, described by the linear system, the method of successive projections onto convex sets (POCS). Algorithm 1: Kaczmarz algorithm Let be a system of linear equations, let be the number of rows of A, be the th row of complex-valued matrix , and let be arbitrary complex-valued initial approximation to the solution of . For compute: where and denotes complex conjugation of . If the system is consistent, converges to the minimum-norm solution, provided that the iterations start with the zero vector. A more general algorithm can be defined using a relaxation parameter There are versions of the method that converge to a regularized weighted least squares solution when applied to a system of inconsistent equations and, at least as far as initial behavior is concerned, at a lesser cost than other iterative methods, such as the conjugate gradient method. Algorithm 2: Randomized Kaczmarz algorithm In 2009, a randomized version of the Kaczmarz method for overdetermined linear systems was introduced by Thomas Strohmer and Roman Vershynin in which the i-th equation is selected randomly with prob" https://en.wikipedia.org/wiki/List%20of%20dynamical%20systems%20and%20differential%20equations%20topics,"This is a list of dynamical system and differential equation topics, by Wikipedia page. See also list of partial differential equation topics, list of equations. Dynamical systems, in general Deterministic system (mathematics) Linear system Partial differential equation Dynamical systems and chaos theory Chaos theory Chaos argument Butterfly effect 0-1 test for chaos Bifurcation diagram Feigenbaum constant Sharkovskii's theorem Attractor Strange nonchaotic attractor Stability theory Mechanical equilibrium Astable Monostable Bistability Metastability Feedback Negative feedback Positive feedback Homeostasis Damping ratio Dissipative system Spontaneous symmetry breaking Turbulence Perturbation theory Control theory Non-linear control Adaptive control Hierarchical control Intelligent control Optimal control Dynamic programming Robust control Stochastic control System dynamics, system analysis Takens' theorem Exponential dichotomy Liénard's theorem Krylov–Bogolyubov theorem Krylov-Bogoliubov averaging method Abstract dynamical systems Measure-preserving dynamical system Ergodic theory Mixing (mathematics) Almost periodic function Symbolic dynamics Time scale calculus Arithmetic dynamics Sequential dynamical system Graph dynamical system Topological dynamical system Dynamical systems, examples List of chaotic maps Logistic map Lorenz attractor Lorenz-96 Iterated function system Tetration Ackermann function Horseshoe map Hénon map Arnold's cat map Population dynamics Complex dynamics Fatou set Julia set Mandelbrot set Difference equations Recurrence relation Matrix difference equation Rational difference equation Ordinary differential equations: general Examples of differential equations Autonomous system (mathematics) Picard–Lindelöf theorem Peano existence theorem Carathéodory existence theorem Numerical ordinary differential equations Bendixson–Dulac theorem Gradient conjecture Recurrence plot Limit cycle Initial value problem Clairaut's equation Singular sol" https://en.wikipedia.org/wiki/Systems%20development%20life%20cycle,"In 
systems engineering, information systems and software engineering, the systems development life cycle (SDLC), also referred to as the application development life cycle, is a process for planning, creating, testing, and deploying an information system. The SDLC concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both. There are usually six stages in this cycle: requirement analysis, design, development and testing, implementation, documentation, and evaluation. Overview A systems development life cycle is composed of distinct work phases that are used by systems engineers and systems developers to deliver information systems. Like anything that is manufactured on an assembly line, an SDLC aims to produce high-quality systems that meet or exceed expectations, based on requirements, by delivering systems within scheduled time frames and cost estimates. Computer systems are complex and often link components with varying origins. Various SDLC methodologies have been created, such as waterfall, spiral, agile, rapid prototyping, incremental, and synchronize and stabilize. SDLC methodologies fit within a flexibility spectrum ranging from agile to iterative to sequential. Agile methodologies, such as XP and Scrum, focus on lightweight processes that allow for rapid changes. Iterative methodologies, such as Rational Unified Process and dynamic systems development method, focus on stabilizing project scope and iteratively expanding or improving products. Sequential or big-design-up-front (BDUF) models, such as waterfall, focus on complete and correct planning to guide larger projects and limit risks to successful and predictable results. Anamorphic development is guided by project scope and adaptive iterations. In project management a project can include both a project life cycle (PLC) and an SDLC, during which somewhat different activities occur. According to Taylor (2004" https://en.wikipedia.org/wiki/Radiation%20hardening,"Radiation hardening is the process of making electronic components and circuits resistant to damage or malfunction caused by high levels of ionizing radiation (particle radiation and high-energy electromagnetic radiation), especially for environments in outer space (especially beyond the low Earth orbit), around nuclear reactors and particle accelerators, or during nuclear accidents or nuclear warfare. Most semiconductor electronic components are susceptible to radiation damage, and radiation-hardened (rad-hard) components are based on their non-hardened equivalents, with some design and manufacturing variations that reduce the susceptibility to radiation damage. Due to the extensive development and testing required to produce a radiation-tolerant design of a microelectronic chip, the technology of radiation-hardened chips tends to lag behind the most recent developments. Radiation-hardened products are typically tested to one or more resultant-effects tests, including total ionizing dose (TID), enhanced low dose rate effects (ELDRS), neutron and proton displacement damage, and single event effects (SEEs). Problems caused by radiation Environments with high levels of ionizing radiation create special design challenges. A single charged particle can knock thousands of electrons loose, causing electronic noise and signal spikes. In the case of digital circuits, this can cause results which are inaccurate or unintelligible. 
This is a particularly serious problem in the design of satellites, spacecraft, future quantum computers, military aircraft, nuclear power stations, and nuclear weapons. In order to ensure the proper operation of such systems, manufacturers of integrated circuits and sensors intended for the military or aerospace markets employ various methods of radiation hardening. The resulting systems are said to be rad(iation)-hardened, rad-hard, or (within context) hardened. Major radiation damage sources Typical sources of exposure of electronics to ioni" https://en.wikipedia.org/wiki/Regressive%20discrete%20Fourier%20series,"In applied mathematics, the regressive discrete Fourier series (RDFS) is a generalization of the discrete Fourier transform where the Fourier series coefficients are computed in a least squares sense and the period is arbitrary, i.e., not necessarily equal to the length of the data. It was first proposed by Arruda (1992a, 1992b). It can be used to smooth data in one or more dimensions and to compute derivatives from the smoothed curve, surface, or hypersurface. Technique One-dimensional regressive discrete Fourier series The one-dimensional RDFS proposed by Arruda (1992a) can be formulated in a very straightforward way. Given a sampled data vector (signal) , one can write the algebraic expression: Typically , but this is not necessary. The above equation can be written in matrix form as The least squares solution of the above linear system of equations can be written as: where is the conjugate transpose of , and the smoothed signal is obtained from: The first derivative of the smoothed signal can be obtained from: Two-dimensional regressive discrete Fourier series (RDFS) The two-dimensional, or bidimensional RDFS proposed by Arruda (1992b) can also be formulated in a straightforward way. Here the equally spaced data case will be treated for the sake of simplicity. The general non-equally-spaced and arbitrary grid cases are given in the reference (Arruda, 1992b). Given a sampled data matrix (bi dimensional signal) one can write the algebraic expression: The above equation can be written in matrix form for a rectangular grid. For the equally spaced sampling case : we have: The least squares solution may be shown to be: and the smoothed bidimensional surface is given by: where is the conjugate, and is the transpose of . Differentiation with respect to can be easily implemented analogously to the one-dimensional case (Arruda, 1992b). Current applications Spatially dense data condensation applications: Arruda, J.R.F. [1993] applied the RDFS to co" https://en.wikipedia.org/wiki/Rensch%27s%20rule,"Rensch's rule is a biological rule on allometrics, concerning the relationship between the extent of sexual size dimorphism and which sex is larger. Across species within a lineage, size dimorphism increases with increasing body size when the male is the larger sex, and decreases with increasing average body size when the female is the larger sex. The rule was proposed by the evolutionary biologist Bernhard Rensch in 1950. After controlling for confounding factors such as evolutionary history, an increase in average body size makes the difference in body size larger if the species has larger males, and smaller if it has larger females. Some studies propose that this is due to sexual bimaturism, which causes male traits to diverge faster and develop for a longer period of time. 
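The one-dimensional RDFS expressions quoted above lost their symbols in extraction. A plausible reading, consistent with the stated idea of Fourier-series coefficients fitted in a least-squares sense with an arbitrary period, is sketched below with NumPy; the function name rdfs_smooth, its arguments, and the use of np.linalg.lstsq in place of the article's explicit pseudoinverse expression are my own choices, not the source's.

import numpy as np

def rdfs_smooth(x, t, period, K):
    # Fit x(t) by a 2K+1 term Fourier series with an arbitrary period, in the least-squares sense.
    k = np.arange(-K, K + 1)
    W = np.exp(2j * np.pi * np.outer(t, k) / period)         # harmonic design matrix
    a, *_ = np.linalg.lstsq(W, np.asarray(x, dtype=complex), rcond=None)
    smoothed = (W @ a).real                                   # smoothed signal
    derivative = (W @ (2j * np.pi * k / period * a)).real     # derivative of the smoothed curve
    return smoothed, derivative

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)
smoothed, derivative = rdfs_smooth(x, t, period=1.25, K=10)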
The correlation between sexual size dimorphism and body size is hypothesized to be a result of an increase in male-male competition in larger species, a result of limited environmental resources, fuelling aggression between males over access to breeding territories and mating partners. Phylogenetic lineages that appear to follow this rule include primates, pinnipeds, and artiodactyls. This rule has rarely been tested on parasites. A 2019 study showed that ectoparasitic philopterid and menoponid lice comply with it, while ricinid lice exhibit a reversed pattern." https://en.wikipedia.org/wiki/Hop%20%28networking%29,"In wired computer networking, including the Internet, a hop occurs when a packet is passed from one network segment to the next. Data packets pass through routers as they travel between source and destination. The hop count refers to the number of network devices through which data passes from source to destination (depending on routing protocol, this may include the source/destination, that is, the first hop is counted as hop 0 or hop 1). Since store and forward and other latencies are incurred through each hop, a large number of hops between source and destination implies lower real-time performance. Hop count In wired networks, the hop count refers to the number of networks or network devices through which data passes between source and destination (depending on routing protocol, this may include the source/destination, that is, the first hop is counted as hop 0 or hop 1). Thus, hop count is a rough measure of distance between two hosts. For a routing protocol using 1-origin hop counts (such as RIP), a hop count of n means that n networks separate the source host from the destination host. Other protocols such as DHCP use the term ""hop"" to refer to the number of times a message has been forwarded. On a layer 3 network such as Internet Protocol (IP), each router along the data path constitutes a hop. By itself, this metric is, however, not useful for determining the optimum network path, as it does not take into consideration the speed, load, reliability, or latency of any particular hop, but merely the total count. Nevertheless, some routing protocols, such as Routing Information Protocol (RIP), use hop count as their sole metric. Each time a router receives a packet, it modifies the packet, decrementing the time to live (TTL). The router discards any packets received with a zero TTL value. This prevents packets from endlessly bouncing around the network in the event of routing errors. Routers are capable of managing hop counts, but other types of network de" https://en.wikipedia.org/wiki/Molecular-weight%20size%20marker,"A molecular-weight size marker, also referred to as a protein ladder, DNA ladder, or RNA ladder, is a set of standards that are used to identify the approximate size of a molecule run on a gel during electrophoresis, using the principle that molecular weight is inversely proportional to migration rate through a gel matrix. Therefore, when used in gel electrophoresis, markers effectively provide a logarithmic scale by which to estimate the size of the other fragments (providing the fragment sizes of the marker are known). Protein, DNA, and RNA markers with pre-determined fragment sizes and concentrations are commercially available. These can be run in either agarose or polyacrylamide gels. The markers are loaded in lanes adjacent to sample lanes before the commencement of the run. 
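The size-estimation principle behind molecular-weight markers described above (migration distance varying roughly linearly with the logarithm of fragment size) can be turned into a simple calibration. The sketch below uses NumPy; the ladder sizes, migration distances, and the band being estimated are hypothetical values chosen only to illustrate the semi-log fit.

import numpy as np

# Hypothetical ladder: fragment sizes (bp) and their measured migration distances (mm).
ladder_bp = np.array([100, 200, 300, 500, 1000, 2000])
ladder_mm = np.array([58, 47, 41, 33, 22, 12])

# Fit log10(size) as a linear function of migration distance (semi-log calibration).
slope, intercept = np.polyfit(ladder_mm, np.log10(ladder_bp), 1)

def estimate_size(distance_mm):
    # Estimate a fragment size (bp) from its migration distance using the ladder fit.
    return 10 ** (slope * distance_mm + intercept)

print(round(estimate_size(37.0)))   # an unknown band landing between the 300 and 500 bp rungs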
DNA markers Development Although the concept of molecular-weight markers has been retained, techniques of development have varied throughout the years. New inventions of molecular-weight markers are distributed in kits specific to the marker's type. An early problem in the development of markers was achieving high resolution throughout the entire length of the marker. Depending on the running conditions of gel electrophoresis, fragments may have been compressed, disrupting clarity. To address this issue, a kit for Southern Blot analysis was developed in 1990, providing the first marker to combine target DNA and probe DNA. This technique took advantage of logarithmic spacing, and could be used to identify target bands ranging over a length of 20,000 nucleotides. Design There are two common methods in which to construct a DNA molecular-weight size marker. One such method employs the technique of partial ligation. DNA ligation is the process by which linear DNA pieces are connected to each other via covalent bonds; more specifically, these bonds are phosphodiester bonds. Here, a 100bp duplex DNA piece is partially ligated. The consequence of this is that dimers of 200bp, trimers of 300bp," https://en.wikipedia.org/wiki/Biogeography%20of%20Deep-Water%20Chemosynthetic%20Ecosystems,"The Biogeography of Deep-Water Chemosynthetic Ecosystems is a field project of the Census of Marine Life programme (CoML). The main aim of ChEss is to determine the biogeography of deep-water chemosynthetic ecosystems at a global scale and to understand the processes driving these ecosystems. ChEss addresses the main questions of CoML on diversity, abundance and distribution of marine species, focusing on deep-water reducing environments such as hydrothermal vents, cold seeps, whale falls, sunken wood and areas of low oxygen that intersect with continental margins and seamounts. Background Deep-sea hydrothermal vents and their associated fauna were first discovered along the Galapagos Rift in the eastern Pacific in 1977. Vents are now known to occur along all active mid ocean ridges and back-arc spreading centres, from fast to ultra-slow spreading ridges. The interest in chemosynthetic environments was strengthened by the discovery of chemosynthetic-based fauna at cold seeps along the base of the Florida Escarpment in 1983. Cold seeps occur along active and passive continental margins. More recently, the study of chemosynthetic fauna has extended to the communities that develop in other reducing habitats such as whale falls, sunken wood and areas of oxygen minima when they intersect with the margin or seamounts. Since the first discovery of hydrothermal vents, more than 600 species have been described from vents and seeps. This is equivalent of 1 new description every 2 weeks(!). As biologists, geochemists, and physicists combine research efforts in these systems, new species will certainly be discovered. Moreover, because of the extreme conditions of the vent and seep habitat, certain species may have specific physiological adaptations with interesting results for the biochemical and medical industry. These globally distributed, ephemeral and insular habitats that support endemic faunas offer natural laboratories for studies on dispersal, isolation and evolutio" https://en.wikipedia.org/wiki/In-target%20probe,"In-target probe, or ITP is a device used in computer hardware and microprocessor design, to control a target microprocessor or similar ASIC at the register level. 
It generally allows full control of the target device and allows the computer engineer access to individual processor registers, program counter, and instructions within the device. It allows the processor to be single-stepped or for breakpoints to be set. Unlike an in-circuit emulator (ICE), an In-Target Probe uses the target device to execute, rather than substituting for the target device. See also Hardware-assisted virtualization In-circuit emulator Joint Test Action Group External links ITP700 Debug Port Design Guide - Intel Embedded systems Debugging" https://en.wikipedia.org/wiki/Negative%20frequency,"In mathematics, signed frequency (negative and positive frequency) expands upon the concept of frequency, from just an absolute value representing how often some repeating event occurs, to also have a positive or negative sign representing one of two opposing orientations for occurrences of those events. The following examples help illustrate the concept: For a rotating object, the absolute value of its frequency of rotation indicates how many rotations the object completes per unit of time, while the sign could indicate whether it is rotating clockwise or counterclockwise. Mathematically speaking, the vector has a positive frequency of +1 radian per unit of time and rotates counterclockwise around the unit circle, while the vector has a negative frequency of -1 radian per unit of time, which rotates clockwise instead. For a harmonic oscillator such as a pendulum, the absolute value of its frequency indicates how many times it swings back and forth per unit of time, while the sign could indicate in which of the two opposite directions it started moving. For a periodic function represented in a Cartesian coordinate system, the absolute value of its frequency indicates how often in its domain it repeats its values, while changing the sign of its frequency could represent a reflection around its y-axis. Sinusoids Let be a nonnegative angular frequency with units of radians per unit of time and let be a phase in radians. A function has slope When used as the argument of a sinusoid, can represent a negative frequency. Because cosine is an even function, the negative frequency sinusoid is indistinguishable from the positive frequency sinusoid Similarly, because sine is an odd function, the negative frequency sinusoid is indistinguishable from the positive frequency sinusoid or Thus any sinusoid can be represented in terms of positive frequencies only. The sign of the underlying phase slope is ambiguous. Because leads by radians (or cycle) for posi" https://en.wikipedia.org/wiki/Backhouse%27s%20constant,"Backhouse's constant is a mathematical constant named after Nigel Backhouse. Its value is approximately 1.456 074 948. It is defined by using the power series such that the coefficients of successive terms are the prime numbers, and its multiplicative inverse as a formal power series, Then: . This limit was conjectured to exist by Backhouse, and later proven by Philippe Flajolet." https://en.wikipedia.org/wiki/Computer%20network%20programming,"Computer network programming involves writing computer programs that enable processes to communicate with each other across a computer network. Connection-oriented and connectionless communications Very generally, most of communications can be divided into connection-oriented, and connectionless. Whether a communication is connection-oriented or connectionless, is defined by the communication protocol, and not by . 
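The formulas in the Backhouse's constant entry above were also stripped. The usual construction takes P(x) = 1 + 2x + 3x^2 + 5x^3 + ... (coefficient 1 followed by the primes), forms its formal reciprocal Q(x) = 1/P(x) with coefficients q_n, and defines the constant as the limit of |q_{n+1}/q_n|, approximately 1.4560749. A short numerical sketch under that reading follows; the helper first_primes and the truncation order N are illustrative, and the ratio converges slowly, so only the leading digits should be trusted.

def first_primes(n):
    # First n primes by trial division (adequate for a few hundred terms).
    primes, c = [], 2
    while len(primes) < n:
        if all(c % p for p in primes):
            primes.append(c)
        c += 1
    return primes

N = 400
p = [1] + first_primes(N)        # coefficients of P(x): 1, 2, 3, 5, 7, ...

# Reciprocal power series Q = 1/P: q_0 = 1, q_n = -(p_1 q_{n-1} + ... + p_n q_0)
q = [1]
for n in range(1, N + 1):
    q.append(-sum(p[k] * q[n - k] for k in range(1, n + 1)))

print(abs(q[N] / q[N - 1]))      # tends to Backhouse's constant, roughly 1.456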
Examples of the connection-oriented protocols include and , and examples of connectionless protocols include , ""raw IP"", and . Clients and servers For connection-oriented communications, communication parties usually have different roles. One party is usually waiting for incoming connections; this party is usually referred to as ""server"". Another party is the one which initiates connection; this party is usually referred to as ""client"". For connectionless communications, one party (""server"") is usually waiting for an incoming packet, and another party (""client"") is usually understood as the one which sends an unsolicited packet to ""server"". Popular protocols and APIs Network programming traditionally covers different layers of OSI/ISO model (most of application-level programming belongs to L4 and up). The table below contains some examples of popular protocols belonging to different OSI/ISO layers, and popular APIs for them. See also Software-defined networking Infrastructure as code Site reliability engineering DevOps" https://en.wikipedia.org/wiki/Thermolabile,"Thermolabile refers to a substance which is subject to, decomposition, or change in response to heat. This term is often used describe biochemical substances. For example, many bacterial exotoxins are thermolabile and can be easily inactivated by the application of moderate heat. Enzymes are also thermolabile and lose their activity when the temperature rises. Loss of activity in such toxins and enzymes is likely due to change in the three-dimensional structure of the toxin protein during exposure to heat. In pharmaceutical compounds, heat generated during grinding may lead to degradation of thermolabile compounds. This is of particular use in testing gene function. This is done by intentionally creating mutants which are thermolabile. Growth below the permissive temperature allows normal protein function, while increasing the temperature above the permissive temperature ablates activity, likely by denaturing the protein. Thermolabile enzymes are also studied for their applications in DNA replication techniques, such as PCR, where thermostable enzymes are necessary for proper DNA replication. Enzyme function at higher temperatures may be enhanced with trehalose, which opens up the possibility of using normally thermolabile enzymes in DNA replication. See also Thermostable Thermolabile protecting groups" https://en.wikipedia.org/wiki/Chamfered%20dodecahedron,"In geometry, the chamfered dodecahedron is a convex polyhedron with 80 vertices, 120 edges, and 42 faces: 30 hexagons and 12 pentagons. It is constructed as a chamfer (edge-truncation) of a regular dodecahedron. The pentagons are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the pentakis icosidodecahedron. It is also called a truncated rhombic triacontahedron, constructed as a truncation of the rhombic triacontahedron. It can more accurately be called an order-12 truncated rhombic triacontahedron because only the order-12 vertices are truncated. Structure These 12 order-5 vertices can be truncated such that all edges are equal length. The original 30 rhombic faces become non-regular hexagons, and the truncated vertices become regular pentagons. The hexagon faces can be equilateral but not regular with D symmetry. The angles at the two vertices with vertex configuration are and at the remaining four vertices with , they are each. It is the Goldberg polyhedron , containing pentagonal and hexagonal faces. 
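To make the client/server roles described in the network-programming passage above concrete, here is a minimal connection-oriented sketch using Python's standard socket module, taking TCP as a representative connection-oriented protocol (an assumption, since the protocol names were lost from the text); the host, port, and messages are arbitrary.

import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary values for this sketch

# Server role: bind, listen, and wait for an incoming connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def handle_one_client():
    conn, _addr = srv.accept()            # blocks until a client connects
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)    # reply on the established connection

threading.Thread(target=handle_one_client, daemon=True).start()

# Client role: initiate the connection and send a request.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    print(cli.recv(1024))                 # b'echo: hello'

srv.close()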
It also represents the exterior envelope of a cell-centered orthogonal projection of the 120-cell, one of six convex regular 4-polytopes. Chemistry This is the shape of the fullerene ; sometimes this shape is denoted to describe its icosahedral symmetry and distinguish it from other less-symmetric 80-vertex fullerenes. It is one of only four fullerenes found by to have a skeleton that can be isometrically embeddable into an L space. Related polyhedra This polyhedron looks very similar to the uniform truncated icosahedron which has 12 pentagons, but only 20 hexagons. The chamfered dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip chamfered dodecahedron makes a chamfered truncated icosahedron, and Goldberg (2,2). Chamfered truncated icosahedron In geometry, the chamfered truncated icosahedron is a convex polyhedron with 240 vertices, 360 edges, and 122 faces, 110 hexagon" https://en.wikipedia.org/wiki/Eyes%20%28cheese%29,"Eyes are the round holes that are a characteristic feature of Swiss-type cheese (e.g. Emmentaler cheese) and some Dutch-type cheeses. The eyes are bubbles of carbon dioxide gas. The gas is produced by various species of bacteria in the cheese. Swiss cheese In Swiss-type cheeses, the eyes form as a result of the activity of propionic acid bacteria (propionibacteria), notably Propionibacterium freudenreichii subsp. shermanii. These bacteria transform lactic acid into propionic acid and carbon dioxide, according to the formula: 3 Lactate → 2 Propionate + Acetate + CO2 + H2O The CO2 so produced accumulates at weak points in the curd, where it forms the bubbles that become the cheese's eyes. Not all CO2 is so trapped: in an cheese, about 20 L CO2 remain in the eyes, while 60 L remain dissolved in the cheese mass and 40 L are lost from the cheese. Dutch cheese In Dutch-type cheeses, the CO2 that forms the eyes results from the metabolisation of citrate by citrate-positive (""Cit+"") strains of lactococci. Bibliography Polychroniadou, A. (2001). Eyes in cheese: a concise review. Milchwissenschaft 56, 74–77." https://en.wikipedia.org/wiki/Double%20subscript%20notation,"In engineering, double-subscript notation is notation used to indicate some variable between two points (each point being represented by one of the subscripts). In electronics, the notation is usually used to indicate the direction of current or voltage, while in mechanical engineering it is sometimes used to describe the force or stress between two points, and sometimes even a component that spans between two points (like a beam on a bridge or truss). Although there are many cases where multiple subscripts are used, they are not necessarily called double subscript notation specifically. Electronic usage IEEE standard 255-1963, ""Letter Symbols for Semiconductor Devices"", defined eleven original quantity symbols expressed as abbreviations. This is the basis for a convention to standardize the directions of double-subscript labels. The following uses transistors as an example, but shows how the direction is read generally. The convention works like this: represents the voltage from C to B. In this case, C would denote the collector end of a transistor, and B would denote the base end of the same transistor. This is the same as saying ""the voltage drop from C to B"", though this applies the standard definitions of the letters C and B. This convention is consistent with IEC 60050-121. would in turn represent the current from C to E. 
In this case, C would again denote the collector end of a transistor, and E would denote the emitter end of the transistor. This is the same as saying ""the current in the direction going from C to E"". Power supply pins on integrated circuits utilize the same letters for denoting what kind of voltage the pin would receive. For example, a power input labeled VCC would be a positive input that would presumably connect to the collector pin of a BJT transistor in the circuit, and likewise respectively with other subscripted letters. The format used is the same as for notations described above, though without the connotation of VCC meaning" https://en.wikipedia.org/wiki/Register%20renaming,"In computer architecture, register renaming is a technique that abstracts logical registers from physical registers. Every logical register has a set of physical registers associated with it. When a machine language instruction refers to a particular logical register, the processor transposes this name to one specific physical register on the fly. The physical registers are opaque and cannot be referenced directly but only via the canonical names. This technique is used to eliminate false data dependencies arising from the reuse of registers by successive instructions that do not have any real data dependencies between them. The elimination of these false data dependencies reveals more instruction-level parallelism in an instruction stream, which can be exploited by various and complementary techniques such as superscalar and out-of-order execution for better performance. Problem approach In a register machine, programs are composed of instructions which operate on values. The instructions must name these values in order to distinguish them from one another. A typical instruction might say: “add and and put the result in ”. In this instruction, , and are the names of storage locations. In order to have a compact instruction encoding, most processor instruction sets have a small set of special locations which can be referred to by special names: registers. For example, the x86 instruction set architecture has 8 integer registers, x86-64 has 16, many RISCs have 32, and IA-64 has 128. In smaller processors, the names of these locations correspond directly to elements of a register file. Different instructions may take different amounts of time; for example, a processor may be able to execute hundreds of instructions while a single load from the main memory is in progress. Shorter instructions executed while the load is outstanding will finish first, thus the instructions are finishing out of the original program order. Out-of-order execution has been used in " https://en.wikipedia.org/wiki/Scramble%20competition,"In ecology, scramble competition (or complete symmetric competition or exploitation competition) refers to a situation in which a resource is accessible to all competitors (that is, it is not monopolizable by an individual or group). However, since the particular resource is usually finite, scramble competition may lead to decreased survival rates for all competitors if the resource is used to its carrying capacity. Scramble competition is also defined as ""[a] finite resource [that] is shared equally amongst the competitors so that the quantity of food per individual declines with increasing population density"". 
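The register-renaming passage above can be illustrated with a toy rename table: every write to a logical register is assigned a fresh physical register, so two instructions that merely reuse the same logical name no longer conflict. The class below is a simplified sketch, not the mechanism of any particular processor; the names and the minimal free-list policy are invented for illustration.

class Renamer:
    # Toy register renamer: logical names are remapped to fresh physical registers.

    def __init__(self, num_physical):
        self.free = list(range(num_physical))   # free list of physical registers
        self.table = {}                         # logical name -> current physical register

    def rename(self, dst, srcs):
        # Source operands read whichever physical register currently holds the logical value.
        phys_srcs = [self.table[s] for s in srcs]
        # The destination gets a brand-new physical register, removing WAW/WAR (false) dependencies.
        phys_dst = self.free.pop(0)
        self.table[dst] = phys_dst
        return phys_dst, phys_srcs

r = Renamer(num_physical=8)
r.table.update({"r1": 0, "r2": 1, "r3": 2})
r.free = [3, 4, 5, 6, 7]

# Two instructions that both write r1 but are otherwise unrelated:
print(r.rename("r1", ["r2"]))   # (3, [1])  -> r1 now lives in p3
print(r.rename("r1", ["r3"]))   # (4, [2])  -> the second write gets p4, so it need not wait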
A further description of scramble competition is ""competition for a resource that is inadequate for the needs of all, but which is partitioned equally among contestants, so that no competitor obtains the amount it needs and all would die in extreme cases."" Types of intraspecific competition Researchers recognize two main forms of intraspecific competition, where members of a species are all using a shared resource in short supply. These are contest competition and scramble competition. Contest competition Contest competition is a form of competition where there is a winner and a loser and where resources can be attained completely or not at all. Contest competition sets up a situation where ""each successful competitor obtains all resources it requires for survival or reproduction"". Here ""contest"" refers to the fact that physical action plays an active role in securing the resource. Contest competition involves resources that are stable, i.e. food or mates. Contests can be for a ritual objective such as territory or status, and losers may return to the competition another day to try again. Scramble competition In scramble competition resources are limited, which may lead to group member starvation. Contest competition is often the result of aggressive social domains, including hierarchies or social chains. Conversely, scramble competition is what occurs by" https://en.wikipedia.org/wiki/Frans%C3%A9n%E2%80%93Robinson%20constant,"The Fransén–Robinson constant, sometimes denoted F, is the mathematical constant that represents the area between the graph of the reciprocal Gamma function, , and the positive x axis. That is, Other expressions The Fransén–Robinson constant has numerical value , and continued fraction representation [2; 1, 4, 4, 1, 18, 5, 1, 3, 4, 1, 5, 3, 6, ...] . The constant is somewhat close to Euler's number This fact can be explained by approximating the integral by a sum: and this sum is the standard series for e. The difference is or equivalently The Fransén–Robinson constant can also be expressed using the Mittag-Leffler function as the limit It is however unknown whether F can be expressed in closed form in terms of other known constants. Calculation history A fair amount of effort has been made to calculate the numerical value of the Fransén–Robinson constant with high accuracy. The value was computed to 36 decimal places by Herman P. Robinson using 11 point Newton–Cotes quadrature, to 65 digits by A. Fransén using Euler–Maclaurin summation, and to 80 digits by Fransén and S. Wrigge using Taylor series and other methods. William A. Johnson computed 300 digits, and Pascal Sebah was able to compute 1025 digits using Clenshaw–Curtis integration." https://en.wikipedia.org/wiki/Food%20packaging,"Food packaging is a packaging system specifically designed for food and represents one of the most important aspects among the processes involved in the food industry, as it provides protection from chemical, biological and physical alterations. The main goal of food packaging is to provide a practical means of protecting and delivering food goods at a reasonable cost while meeting the needs and expectations of both consumers and industries. Additionally, current trends like sustainability, environmental impact reduction, and shelf-life extension have gradually become among the most important aspects in designing a packaging system. 
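The Fransén–Robinson integral discussed above is easy to check numerically. The sketch below assumes SciPy is available (scipy.special.rgamma is the reciprocal gamma function) and also evaluates the unit-spaced sum mentioned in the text, which is just the series for e.

import numpy as np
from math import factorial, e
from scipy.integrate import quad
from scipy.special import rgamma

F, _err = quad(rgamma, 0, np.inf)                  # area under 1/Gamma(x) for x > 0
series = sum(1 / factorial(n) for n in range(60))  # 1/Gamma(1) + 1/Gamma(2) + ... = e

print(F)             # about 2.80777024...
print(series, e)     # both about 2.71828182...
print(F - e)         # the difference, roughly 0.0895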
History Packaging of food products has seen a vast transformation in technology usage and application from the stone age to the industrial revolution: 7000 BC: The adoption of pottery and glass which saw industrialization around 1500 BC. 1700s: The first manufacturing production of tinplate was introduced in England (1699) and in France (1720). Afterwards, the Dutch navy start to use such packaging to prolong the preservation of food products. 1804: Nicolas Appert, in response to inquiries into extending the shelf life of food for the French Army, employed glass bottles along with thermal food treatment. Glass has been replaced by metal cans in this application. However, there is still an ongoing debate about who first introduced the use of tinplates as food packaging. 1870: The use of paper board was launched and corrugated materials patented. 1880s: First cereal packaged in a folding box by Quaker Oats. 1890s: The crown cap for glass bottles was patented by William Painter. 1960s: Development of the two-piece drawn and wall-ironed metal cans in the US, along with the ring-pull opener and the Tetra Brik Aseptic carton package. 1970s: The barcode system was introduced in the retail and manufacturing industry. PET plastic blow-mold bottle technology, which is widely used in the beverage industry, was introduced. 1990s: The app" https://en.wikipedia.org/wiki/Food%20sampling,"Food sampling is a process used to check that a food is safe and that it does not contain harmful contaminants, or that it contains only permitted additives at acceptable levels, or that it contains the right levels of key ingredients and its label declarations are correct, or to know the levels of nutrients present. A food sample is carried out by subjecting the product to physical analysis. Analysis may be undertaken by or on behalf of a manufacturer regarding their own product, or for official food law enforcement or control purposes, or for research or public information. To undertake any analysis, unless the whole amount of food to be considered is very small so that the food can be used for testing in its entirety, it is usually necessary for a portion of it to be taken (e.g. a small quantity from a full production batch, or a portion of what is on sale in a shop) – this process is known as food sampling. In most cases with food to be analysed there are two levels of sampling – the first being selection of a portion from the whole, which is then submitted to a laboratory for testing, and the second being the laboratory's taking of the individual amounts necessary for individual tests that may be applied. It is the former that is ‘food sampling’: the latter is analytical laboratory ‘sub-sampling’, often relying upon initial homogenisation of the entire submitted sample. Where it is intended that the results of any analysis to relate to the food as a whole it is crucially important that the sample is representative of that whole – and the results of any analysis can only be meaningful if the sampling is undertaken effectively. This is true whether the ‘whole’ is a manufacturer's entire production batch, or where it is a single item but too large to all be used for the test. 
Factors relevant in considering the representativeness of a sample include the homogeneity of the food, the relative sizes of the sample to be taken and the whole, the potential" https://en.wikipedia.org/wiki/Ecosystem%20collapse,"An ecosystem, short for ecological system, is defined as a collection of interacting organisms within a biophysical environment. Ecosystems are never static, and are continually subject to stabilizing and destabilizing processes alike. Stabilizing processes allow ecosystems to adequately respond to destabilizing changes, or pertubations, in ecological conditions, or to recover from degradation induced by them: yet, if destabilizing processes become strong enough or fast enough to cross a critical threshold within that ecosystem, often described as an ecological 'tipping point', then an ecosystem collapse (sometimes also termed ecological collapse) occurs. Ecosystem collapse does not mean total disappearance of life from the area, but it does result in the loss of the original ecosystem's defining characteristics, typically including the ecosystem services it may have provided. Collapse of an ecosystem is effectively irreversible more often than not, and even if the reversal is possible, it tends to be slow and difficult. Ecosystems with low resilience may collapse even during a comparatively stable time, which then typically leads to their replacement with a more resilient system in the biosphere. However, even resilient ecosystems may disappear during the times of rapid environmental change, and study of the fossil record was able to identify how certain ecosystems went through a collapse, such as with the Carboniferous rainforest collapse or the collapse of Lake Baikal and Lake Hovsgol ecosystems during the Last Glacial Maximum. Today, the ongoing Holocene extinction is caused primarily by human impact on the environment, and the greatest biodiversity loss so far had been due to habitat degradation and fragmentation, which eventually destroys entire ecosystems if left unchecked. There have been multiple notable examples of such an ecosystem collapse in the recent past, such as the collapse of the Atlantic northwest cod fishery. More are likely to occur without " https://en.wikipedia.org/wiki/Vectored%20interrupt,"In computer science, a vectored interrupt is a processing technique in which the interrupting device directs the processor to the appropriate interrupt service routine. This is in contrast to a polled interrupt system, in which a single interrupt service routine must determine the source of the interrupt by checking all potential interrupt sources, a slow and relatively laborious process. Implementation Vectored interrupts are achieved by assigning each interrupting device a unique code, typically four to eight bits in length. When a device interrupts, it sends its unique code over the data bus to the processor, telling the processor which interrupt service routine to execute." https://en.wikipedia.org/wiki/Repeater%20insertion,"Repeater insertion is a technique used to reduce time delays associated with long wire lines in integrated circuits. This technique involves cutting the long wire into one or more shorter wires, and then inserting a repeater between each pair of newly created short wires. The time it takes for a signal to travel from one end of a wire to the other end is known as wire-line delay or just delay. In an integrated circuit, this delay is characterized by RC, the resistance of the wire (R) multiplied by the wire's capacitance (C). 
Thus, if the wire's resistance is 100 ohms and its capacitance is 0.01 microfarad (μF), the wire's delay is one microsecond (µs). To first order, the resistance of a wire on an integrated circuit is directly proportional, or linear, according to the wire's length. If a 1 mm length of the wire has 100 ohms resistance, then a 2 mm length will have 200 ohms resistance. For the purposes of our highly simplified discussion, the capacitance of a wire also increases linearly along its length. If a 1 mm length of the wire has 0.01 µF capacitance, a 2 mm length of the wire will have 0.02 µF, a 3 mm wire will have 0.03 µF, and so o Thus, the time delay through a wire increases with the square of the wire's length. This is true, to first order, for any wire whose cross-section remains constant along the length of the wire. wire resistance capacitance time delay length R C t 1 mm 100 ohm 0.01 µF 1 µs 2 mm 200 ohm 0.02 µF 4 µs 3 mm 300 ohm 0.03 µF 9 µs The interesting consequence of this behavior is that, while a single 2 mm length of wire has a delay of 4 µs. Two separate 1 mm wires only have a delay of 1 µs each and cover the same distance in half the time. By cutting the wire in half, one can double its speed. To make this science trick work properly, an active circuit must be placed between the two separate wires so as to move the signal from one to " https://en.wikipedia.org/wiki/Current%20conveyor,"A current conveyor is an abstraction for a three-terminal analogue electronic device. It is a form of electronic amplifier with unity gain. There are three versions of generations of the idealised device, CCI, CCII and CCIII. When configured with other circuit elements, real current conveyors can perform many analogue signal processing functions, in a similar manner to the way op-amps and the ideal concept of the op-amp are used. History When Sedra and Smith first introduced the current conveyor in 1968, it was not clear what the benefits of the concept would be. The idea of the op-amp had been well known since the 1940s, and integrated circuit manufacturers were better able to capitalise on this widespread knowledge within the electronics industry. Monolithic current conveyor implementations were not introduced, and the op-amp became widely implemented. Since the early 2000s, implementations of the current conveyor concept, especially within larger VLSI projects such as mobile phones, have proved worthwhile. Advantages Current conveyors can provide better gain-bandwidth products than comparable op-amps, under both small and large signal conditions. In instrumentation amplifiers, their gain does not depend on matching pairs of external components, only on the absolute value of a single circuit element. First generation (CCI) The CCI is a three-terminal device with the terminals designated X, Y, and Z. The potential at X equals whatever voltage is applied to Y. Whatever current flows into Y also flows into X, and is mirrored at Z with a high output impedance, as a variable constant current source. In sub-type CCI+, current into Y produces current into Z; in a CCI-, current into Y results in an equivalent current flowing out of Z. Second generation (CCII) In a more versatile later design, no current flows through terminal Y. The ideal CCII can be seen as an ideal transistor with perfected characteristics. 
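The quadratic wire-delay scaling and the gain from splitting the wire, discussed in the repeater-insertion passage above, can be re-derived directly from the example's numbers. In the sketch below the inserted repeater is treated as ideal (zero delay of its own), an assumption the passage leaves implicit.

R_PER_MM = 100        # ohms per mm, taken from the worked example above
C_PER_MM = 0.01e-6    # farads per mm

def wire_delay(length_mm):
    # First-order RC delay of an unbroken wire: proportional to the square of its length.
    return (R_PER_MM * length_mm) * (C_PER_MM * length_mm)

for L in (1, 2, 3):
    print(L, "mm:", round(wire_delay(L) * 1e6, 3), "us")            # 1, 4, 9 us

# Splitting the 2 mm wire into two 1 mm segments joined by an (assumed ideal) repeater:
print("2 mm unbroken:", round(wire_delay(2) * 1e6, 3), "us")        # 4 us
print("two 1 mm runs:", round(2 * wire_delay(1) * 1e6, 3), "us")    # 2 us in total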
No current flows into the gate or base which is represen" https://en.wikipedia.org/wiki/Outline%20of%20arithmetic,"Arithmetic is an elementary branch of mathematics that is widely used for tasks ranging from simple day-to-day counting to advanced science and business calculations. Essence of arithmetic Elementary arithmetic Decimal arithmetic Decimal point Numeral Place value Face value History of arithmetic Arithmetic operations and related concepts Order of Operations Addition Summation – Answer after adding a sequence of numbers Additive Inverse Subtraction – Taking away numbers Multiplication – Repeated addition Multiple – Product of Multiplication Least Common Multiple Multiplicative Inverse Division – Repeated Subtraction Modulo – The remainder of division Quotient – Result of Division Quotition and Partition – How many parts are there, and what is the size of each part Fraction – A number that isn't whole, often shown as a divsion equation Decimal Fraction – Representation of a Fraction in the form of a number Proper Fraction – Fraction with a Numerator that is less than the Denominator Improper Fraction – Fractions with a Numerator that is any number Ratio – Showing how much one number can go into another Least Common Denominator – Least Common Multiple of 2 or more fractions' denominators Factoring – Breaking a number down into its products Fundamental theorem of arithmetic Prime number – Number divisable by only 1 or itself Prime number theorem Distribution of primes Composite number – Number made of 2 smaller integers Factor – A number that can be divided from it's original number to get a whole number Greatest Common Factor – Greatest Factor that is common between 2 numbers Euclid's algorithm for finding greatest common divisors Exponentiation (power) – Repreated Multiplication Square root – Reversal of a power of 2 (exponent of 1/2) Cube root – Reversal of a power of 3 (exponent of 1/3) Properties of Operations Associative property Distributive property Commutative property Factorial – Multiplication of numbers from the current number to 0 Types of numbers Re" https://en.wikipedia.org/wiki/Critical%20area%20%28computing%29,"In integrated circuit design, a critical area is a section of a circuit design wherein a particle of a particular size can cause a failure. It measures the sensitivity of the circuit to a reduction in yield. The critical area on a single layer integrated circuit design is given by: where is the area in which a defect of radius will cause a failure, and is the density function of said defect." https://en.wikipedia.org/wiki/Extract,"An extract (essence) is a substance made by extracting a part of a raw material, often by using a solvent such as ethanol, oil or water. Extracts may be sold as tinctures, absolutes or in powder form. The aromatic principles of many spices, nuts, herbs, fruits, etc., and some flowers, are marketed as extracts, among the best known of true extracts being almond, cinnamon, cloves, ginger, lemon, nutmeg, orange, peppermint, pistachio, rose, spearmint, vanilla, violet, rum, and wintergreen. Extraction techniques Most natural essences are obtained by extracting the essential oil from the feedstock, such as blossoms, fruit, and roots, or from intact plants through multiple techniques and methods: Expression (juicing, pressing) involves physical extraction material from feedstock, used when the oil is plentiful and easily obtained from materials such as citrus peels, olives, and grapes. Absorption (steeping, decoction). 
Extraction is done by soaking material in a solvent, as used for vanilla beans or tea leaves. Maceration, as used to soften and degrade material without heat, normally using oils, such as for peppermint extract and wine making. Distillation or separation process, creating a higher concentration of the extract by heating material to a specific boiling point, then collecting this and condensing the extract, leaving the unwanted material behind, as used for lavender extract. The distinctive flavors of nearly all fruits are desirable adjuncts to many food preparations, but only a few are practical sources of sufficiently concentrated flavor extract, such as from lemons, oranges, and vanilla beans. Artificial extracts The majority of concentrated fruit flavors, such as banana, cherry, peach, pineapple, raspberry, and strawberry, are produced by combining a variety of esters with special oils. Suitable coloring is generally obtained by the use of dyes. Among the esters most generally employed are ethyl acetate and ethyl butyrate. The chief factors " https://en.wikipedia.org/wiki/Butterfly%20count,"Butterfly counts are often carried out in North America and Europe to estimate the populations of butterflies in a specific geographical area. The counts are conducted by interested, mostly non-professional, residents of the area who maintain an interest in determining the numbers and species of butterflies in their locale. A butterfly count usually occurs at a specific time during the year and is sometimes coordinated to occur with other counts which may include a park, county, entire state or country. The results of the counts are usually shared with other interested parties including professional lepidopterists and researchers. The data gathered during a count can indicate population changes and health within a species. Sponsors Professionals, universities, clubs, elementary and secondary schools, other educational providers, nature preserves, parks, and amateur organizations can organize a count. The participants often receive training to help them identify the butterfly species. The North American Butterfly Association organized over 400 counts in 2014. Types of butterfly counts There are several methods for counting butterflies currently in use, with the notable division being between restricted and open searches. Most counts are designed to count all butterflies observed in a locality. The purpose of a count is to estimate butterfly populations in a larger area from a smaller sample. Counts may be targeted at single species and, in some cases, butterflies are observed and counted as they move from one area to another. A heavily researched example of butterfly migration is the annual migration of monarch butterflies in North America. Some programs will tag butterflies to trace their migration routes, but these are migratory programs and not butterfly counts. Butterfly counts are sometimes done where there is a concentration (a roost) of a species of butterflies in an area. One example of this is the winter count of western monarch butterflies as the" https://en.wikipedia.org/wiki/List%20of%20calculus%20topics,"This is a list of calculus topics. 
Limits Limit (mathematics) Limit of a function One-sided limit Limit of a sequence Indeterminate form Orders of approximation (ε, δ)-definition of limit Continuous function Differential calculus Derivative Notation Newton's notation for differentiation Leibniz's notation for differentiation Simplest rules Derivative of a constant Sum rule in differentiation Constant factor rule in differentiation Linearity of differentiation Power rule Chain rule Local linearization Product rule Quotient rule Inverse functions and differentiation Implicit differentiation Stationary point Maxima and minima First derivative test Second derivative test Extreme value theorem Differential equation Differential operator Newton's method Taylor's theorem L'Hôpital's rule General Leibniz rule Mean value theorem Logarithmic derivative Differential (calculus) Related rates Regiomontanus' angle maximization problem Rolle's theorem Integral calculus Antiderivative/Indefinite integral Simplest rules Sum rule in integration Constant factor rule in integration Linearity of integration Arbitrary constant of integration Cavalieri's quadrature formula Fundamental theorem of calculus Integration by parts Inverse chain rule method Integration by substitution Tangent half-angle substitution Differentiation under the integral sign Trigonometric substitution Partial fractions in integration Quadratic integral Proof that 22/7 exceeds π Trapezium rule Integral of the secant function Integral of secant cubed Arclength Solid of revolution Shell integration Special functions and numbers Natural logarithm e (mathematical constant) Exponential function Hyperbolic angle Hyperbolic function Stirling's approximation Bernoulli numbers Absolute numerical See also list of numerical analysis topics Rectangle method Trapezoidal rule Simpson's rule Newton–Cotes formulas Gaussian quadrature Lists and tables" https://en.wikipedia.org/wiki/Systemness,"Systemness is the state, quality, or condition of a complex system, that is, of a set of interconnected elements that behave as, or appear to be, a whole, exhibiting behavior distinct from the behavior of the parts. The term is new and has been applied to large social phenomena and organizations (healthcare and higher education) by advocates of higher degrees of system-like, coherent behavior for delivering value to stakeholders. In sociology, Montreal-based Polish academic Szymon Chodak (1973) used ""societal systemness"" in English to describe the empirical reality that inspired Emile Durkheim. The healthcare-related usage of the term was as early as 1986 in a Dutch psychiatric research paper. It has recently been adapted to describe the sustainability efforts of healthcare institutions amidst budget cuts stemming from the 2008–2012 global recession. The higher educational use appears to have featured in professional discussions between sociologist Neil Smelser and University of California Chancellor and President Clark Kerr in the 1950s or 60s; in the foreword to Kerr's 2001 memoir, Smelser uses the term in inverted commas in recalling such discussions. The term's overt operationalization, however, was instituted by The State University of New York's (SUNY) Chancellor Nancy L. Zimpher in the State of the University Address on January 9, 2012. 
Zimpher noted systemness as ""the coordination of multiple components that, when working together, create a network of activity that is more powerful than any action of individual parts on their own."" The concept was later explored in the volume, Higher Education Systems 3.0, edited by Jason E. Lane and D. Bruce Johnston. Use in higher education The term ""systemness"" has received widespread adoption in discussions within and among the leaders of multi-campus university systems to discuss the evolution of multi-campus collaboration and coordination in a range of different programmatic areas. The term was first coined b" https://en.wikipedia.org/wiki/Hilbert%20spectroscopy,Hilbert Spectroscopy uses Hilbert transforms to analyze broad spectrum signals from gigahertz to terahertz frequency radio. One suggested use is to quickly analyze liquids inside airport passenger luggage. https://en.wikipedia.org/wiki/Tarjan%27s%20algorithm,"Tarjan's algorithm may refer to one of several algorithms attributed to Robert Tarjan, including: Tarjan's strongly connected components algorithm Tarjan's off-line lowest common ancestors algorithm Tarjan's algorithm for finding bridges in an undirected graph Tarjan's algorithm for finding simple circuits in a directed graph See also List of algorithms" https://en.wikipedia.org/wiki/WIP%20message,"WIP message is a work-in-progress message sent from a computer client to a computer server. It is used to update a server with the progress of an item during a manufacturing process. The only known use is in the automotive wiring manufacturing process, but the message structure is generic enough to be used in any manufacturing process. History The WIP Message Protocol was originally developed to overcome the need to allow computers running disparate operating system to communicate with one another. The first implementation was on the Acorn computer running the RISC OS swiftly followed by a PC implementation. Communication methodology Each computer may act as a server, a client, or both. In the server configuration a listening socket is opened on a specific port (default port is 99) and the server waits for connection attempts from its clients. The client connects by opening a socket and sending data to the server in the format [Header][Data]. The header contains information about the message such as the message length, message number which can be anything from 1 to 4,294,967,295 and the part unique identifier or serial number which is limited to 10 digits (9,999,999,999 max). The serial number consists of the year 4 digits, the day of the year (0-366) 3 digits and a 3 digit sequential number. The server will action the message (each message number has a specific meaning to the particular process) and respond with a return code. The return code is commonly used to designate whether the process is allowed to proceed or not. The server will usually be written in such way that the manufacturing process flow is mapped and the Server will therefore not allow manufacturing to progress to the next stage if the previous stage is incomplete or failed for some reason. Message format Two formats of message are used. 
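The 10-digit serial-number scheme described above for WIP messages (four-digit year, three-digit day of year, three-digit sequential number) is straightforward to construct; a small sketch follows, with the function name and example date chosen purely for illustration.

from datetime import date

def wip_serial(d: date, sequence: int) -> str:
    # 4-digit year + 3-digit day of year + 3-digit sequential number, as described above.
    assert 0 <= sequence <= 999
    return f"{d.year:04d}{d.timetuple().tm_yday:03d}{sequence:03d}"

print(wip_serial(date(2024, 3, 1), 17))   # '2024061017' (1 March 2024 is day 61)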
Loosely termed a 'short' and a 'long' message format, a short message contains specific information along with 18 bytes that can be used for custom information," https://en.wikipedia.org/wiki/NO%20CARRIER,"NO CARRIER (capitalized) is a text message transmitted from a modem to its attached device (typically a computer), indicating the modem is not (or no longer) connected to a remote system. NO CARRIER is a response message that is defined in the Hayes command set. Due to the popularity of Hayes modems during the heyday of dial-up connectivity, most other modem manufacturers supported the Hayes command set. For this reason, the NO CARRIER message was ubiquitously understood to mean that one was no longer connected to a remote system. Carrier tone A carrier tone is an audio carrier signal used by two modems to suppress echo cancellation and establish a baseline frequency for communication. When the answering modem detects a ringtone on the phone line, it picks up that line and starts transmitting a carrier tone. If it does not receive data from the calling modem within a set amount of time, it disconnects the line. The calling modem waits for the tone after it dials the phone line before it initiates data transmission. If it does not receive a carrier tone within a set amount of time, it will disconnect the phone line and issues the NO CARRIER message. The actual data is transmitted from the answering modem to the calling modem via modulation of the carrier. Practical meaning The NO CARRIER message is issued by a modem for any of the following reasons: A dial (ATD) or answer (ATA) command did not result in a successful connection to another modem, and the reason wasn't that the line was BUSY (a separately defined message). A dial or answer command was aborted while in progress. The abort can be triggered by the computer receiving a keypress to abort or the computer dropping the Data Terminal Ready (DTR) signal to hang up. A previously established data connection has ended (either at the attached computer's command, or as a result of being disconnected from the remote end), and the modem has now gone from the data mode to the command mode. Current use As modems " https://en.wikipedia.org/wiki/Voigt%20notation,"In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application. For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and the other being off-diagonal. Thus it can be expressed as the vector . As another example: The stress tensor (in matrix notation) is given as In Voigt notation it is simplified to a 6-dimensional vector: The strain tensor, similar in nature to the stress tensor—both are symmetric second-order tensors --, is given in matrix form as Its representation in Voigt notation is where , , and are engineering shear strains. The benefit of using different representations for stress and strain is that the scalar invariance is preserved. Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix. 
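The reduction described above can be made concrete with a short NumPy sketch that maps a symmetric stress tensor to its Voigt 6-vector and a strain tensor to the engineering-strain form. The component ordering 11, 22, 33, 23, 13, 12 is the common convention and is assumed here; the final check illustrates the scalar-invariance property the article mentions.

import numpy as np

# Voigt index pairs in the usual order: 11, 22, 33, 23, 13, 12
PAIRS = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def stress_to_voigt(sigma):
    # Symmetric 3x3 stress tensor -> 6-vector (off-diagonal entries kept as-is).
    return np.array([sigma[i, j] for i, j in PAIRS])

def strain_to_voigt(eps):
    # Symmetric 3x3 strain tensor -> 6-vector with engineering shear strains (off-diagonals doubled).
    return np.array([eps[i, j] * (1 if i == j else 2) for i, j in PAIRS])

sigma = np.array([[10., 4., 3.],
                  [ 4., 7., 2.],
                  [ 3., 2., 5.]])
eps = 1e-3 * np.array([[1.0, 0.2, 0.1],
                       [0.2, 0.8, 0.3],
                       [0.1, 0.3, 0.5]])

# With these conventions the double contraction sigma : eps is preserved as a plain dot product.
print(np.allclose(np.tensordot(sigma, eps), stress_to_voigt(sigma) @ strain_to_voigt(eps)))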
Mnemonic rule A simple mnemonic rule for memorizing Voigt notation is as follows: write down the second-order tensor in matrix form (in the example, the stress tensor); strike out the diagonal; continue on the third column; go back to the first element along the first row. Voigt indexes are numbered consecutively from the starting point to the end (in the example, the numbers in blue). Mandel notation For a symmetric tensor of second rank only six components are distinct, the three on the diagonal and the others being off-diagonal. Thus it can be expressed, in Mandel notation, as the vector The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors, for example: A symmetric tensor o" https://en.wikipedia.org/wiki/Dependent%20component%20analysis,"Dependent component analysis (DCA) is a blind signal separation (BSS) method and an extension of independent component analysis (ICA). ICA separates mixed signals into individual signals without knowing anything about the source signals. DCA is used to separate mixed signals into individual sets of signals that are dependent on signals within their own set, without knowing anything about the original signals. DCA reduces to ICA when every set contains only a single signal. Mathematical representation For simplicity, assume all individual sets of signals are the same size, k, with N sets in total. Building on the basic equations of BSS (seen below), instead of independent source signals one has independent sets of signals, s(t) = ({s_1(t),...,s_k(t)},...,{s_(kN-k+1)(t),...,s_(kN)(t)})^T, which are mixed by coefficients A = [a_ij] ∈ R^(m×kN) to produce a set of mixed signals, x(t) = (x_1(t),...,x_m(t))^T. The signals can be multidimensional. The BSS equation below separates the set of mixed signals, x(t), by finding and using coefficients, B = [B_ij] ∈ R^(kN×m), to obtain the set of approximations of the original signals, y(t) = ({y_1(t),...,y_k(t)},...,{y_(kN-k+1)(t),...,y_(kN)(t)})^T. Methods Sub-Band Decomposition ICA (SDICA) is based on the fact that wideband source signals may be dependent, while some of their narrower subbands are independent. It uses an adaptive filter that chooses subbands by minimizing mutual information (MI) between them to separate the mixed signals. After the subband signals are found, ICA can be used to reconstruct the sources from them. Below is a formula to find MI based on entropy, where H is entropy." https://en.wikipedia.org/wiki/List%20of%20formulas%20in%20Riemannian%20geometry,"This is a list of formulas encountered in Riemannian geometry. Einstein notation is used throughout this article. This article uses the ""analyst's"" sign convention for Laplacians, except when noted otherwise. Christoffel symbols, covariant derivative In a smooth coordinate chart, the Christoffel symbols of the first kind are given by and the Christoffel symbols of the second kind by Here is the inverse matrix to the metric tensor . In other words, and thus is the dimension of the manifold. Christoffel symbols satisfy the symmetry relations or, respectively, , the second of which is equivalent to the torsion-freeness of the Levi-Civita connection. The contracting relations on the Christoffel symbols are given by and where |g| is the absolute value of the determinant of the metric tensor . These are useful when dealing with divergences and Laplacians (see below).
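The Christoffel-symbol equations referred to in the paragraph above were lost in extraction. For reference, the standard textbook expressions they correspond to are sketched below in LaTeX; sign and index conventions may differ from those the original list displayed.

% Christoffel symbols of the first and second kind, inverse metric, dimension:
\Gamma_{kij} \;=\; \tfrac{1}{2}\!\left(\partial_i g_{kj} + \partial_j g_{ki} - \partial_k g_{ij}\right),
\qquad
\Gamma^{k}{}_{ij} \;=\; g^{kl}\,\Gamma_{lij},
\qquad
g^{ik} g_{kj} \;=\; \delta^{i}{}_{j},
\qquad
g^{ij} g_{ij} \;=\; n .
% Symmetry and contracting relations:
\Gamma_{kij} = \Gamma_{kji}, \qquad \Gamma^{k}{}_{ij} = \Gamma^{k}{}_{ji},
\qquad
\Gamma^{i}{}_{ki} \;=\; \partial_k \log\sqrt{|g|},
\qquad
g^{ij}\,\Gamma^{k}{}_{ij} \;=\; -\frac{1}{\sqrt{|g|}}\,\partial_l\!\left(\sqrt{|g|}\,g^{kl}\right).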
The covariant derivative of a vector field with components is given by: and similarly the covariant derivative of a -tensor field with components is given by: For a -tensor field with components this becomes and likewise for tensors with more indices. The covariant derivative of a function (scalar) is just its usual differential: Because the Levi-Civita connection is metric-compatible, the covariant derivatives of metrics vanish, as well as the covariant derivatives of the metric's determinant (and volume element) The geodesic starting at the origin with initial speed has Taylor expansion in the chart: Curvature tensors Definitions (3,1) Riemann curvature tensor (3,1) Riemann curvature tensor Ricci curvature Scalar curvature Traceless Ricci tensor (4,0) Riemann curvature tensor (4,0) Weyl tensor Einstein tensor Identities Basic symmetries The Weyl tensor has the same basic symmetries as the Riemann tensor, but its 'analogue' of the Ricci tensor is zero: The Ricci tensor, the Einstein tensor, and the traceless Ricci tensor are symmetric 2-tensors: First Bianch" https://en.wikipedia.org/wiki/Daisy%20chain%20%28electrical%20engineering%29,"In electrical and electronic engineering, a daisy chain is a wiring scheme in which multiple devices are wired together in sequence or in a ring, similar to a garland of daisy flowers. Daisy chains may be used for power, analog signals, digital data, or a combination thereof. The term daisy chain may refer either to large scale devices connected in series, such as a series of power strips plugged into each other to form a single long line of strips, or to the wiring patterns embedded inside of devices. Other examples of devices which can be used to form daisy chains are those based on Universal Serial Bus (USB), FireWire, Thunderbolt and Ethernet cables. Signal transmission For analog signals, connections usually consist of a simple electrical bus and, especially in the case of a chain of many devices, may require the use of one or more repeaters or amplifiers within the chain to counteract attenuation (the natural loss of energy in such a system). Digital signals between devices may also travel on a simple electrical bus, in which case a bus terminator may be needed on the last device in the chain. However, unlike analog signals, because digital signals are discrete, they may also be electrically regenerated, but not modified, by any device in the chain. Types Computer hardware Some hardware can be attached to a computing system in a daisy chain configuration by connecting each component to another similar component, rather than directly to the computing system that uses the component. Only the last component in the chain directly connects to the computing system. For example, chaining multiple components that each have a UART port to each other. The components must also behave cooperatively. e.g., only one seizes the communications bus at a time. SCSI is an example of a digital system that is electrically a bus, in the case of external devices, is physically wired as a daisy chain. Since the network is electrically a bus, it must be terminated and this m" https://en.wikipedia.org/wiki/Sides%20of%20an%20equation,"In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric. 
More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly. Example The expression on the right side of the ""="" sign is the right side of the equation and the expression on the left of the ""="" is the left side of the equation. For example, in is the left-hand side (LHS) and is the right-hand side (RHS). Homogeneous and inhomogeneous equations In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by Lf = g, with g a fixed function, which equation is to be solved for f. Then any solution of the inhomogeneous equation may have a solution of the homogeneous equation added to it, and still remain a solution. For example in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter, or charged particles. Syntax More abstractly, when using infix notation T * U the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though. See also Equals sign" https://en.wikipedia.org/wiki/Up%20to,"Two mathematical objects and are called ""equal up to an equivalence relation "" if and are related by , that is, if holds, that is, if the equivalence classes of and with respect to are equal. This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count. For example, "" is unique up to "" means that all objects under consideration are in the same equivalence class with respect to the relation . Moreover, the equivalence relation is often designated rather implicitly by a generating condition or transformation. For example, the statement ""an integer's prime factorization is unique up to ordering"" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation that relates two lists if one can be obtained by reordering (permuting) the other. As another example, the statement ""the solution to an indefinite integral is , up to addition of a constant"" tacitly employs the equivalence relation between functions, defined by if the difference is a constant function, and means that the solution and the function are equal up to this . In the picture, ""there are 4 partitions up to rotation"" means that the set has 4 equivalence classes with respect to defined by if can be obtained from by rotation; one representative from each class is shown in the bottom left picture part. Equivalence relations are often used to disregard possible differences of objects, so ""up to "" can be understood informally as ""ignoring the same subtleties as ignores"". In the factorization example, ""up to ordering"" means ""ignoring the particular ordering"". Further examples include ""up to isomorphism"", ""up to permutations"", and ""up to rotations"", which are described in the Examples section. In informal contexts, mathematicians often use the word modulo (or simply mod) for similar purposes, as in ""modulo isomorphism"". 
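Returning to the median filter walk-through earlier in this section, here is a minimal Python sketch of the one-dimensional case with a window of three. Boundary entries are simply skipped here, since the article treats boundary handling as a separate choice.

import statistics

def median_filter_1d(x, window=3):
    """One-dimensional median filter; boundary entries are skipped,
    so the output is shorter than the input by window - 1 samples."""
    half = window // 2
    return [statistics.median(x[i - half:i + half + 1])
            for i in range(half, len(x) - half)]

x = [2, 3, 80, 6, 2, 3]
print(median_filter_1d(x))   # [3, 6, 6, 3], matching the worked example above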
Examples Tetris Consider the seven Tetris pieces" https://en.wikipedia.org/wiki/Median%20filter,"The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing. Algorithm description The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the ""window"", which slides, entry by entry, over the entire signal. For one-dimensional signals, the most obvious window is just the first few preceding and following entries, whereas for two-dimensional (or higher-dimensional) data the window must include all entries within a given radius or ellipsoidal region (i.e. the median filter is not a separable filter). Worked one-dimensional example To demonstrate, using a window size of three with one entry immediately preceding and following each entry, a median filter will be applied to the following simple one-dimensional signal: x = (2, 3, 80, 6, 2, 3). So, the median filtered output signal y will be: y1 = med(2, 3, 80) = 3, (already 2, 3, and 80 are in the increasing order so no need to arrange them) y2 = med(3, 80, 6) = med(3, 6, 80) = 6, (3, 80, and 6 are rearranged to find the median) y3 = med(80, 6, 2) = med(2, 6, 80) = 6, y4 = med(6, 2, 3) = med(2, 3, 6) = 3, i.e. y = (3, 6, 6, 3). Boundary issues When implementing a median filter, the boundaries of the signal must be handled with special care, as there are not enough entries to fill an entire window. There are several schemes that have different properties that might be preferred in particular circumstances: When calculating the median of a value near the boundary, missing values are f" https://en.wikipedia.org/wiki/Eocyte%20hypothesis,"The eocyte hypothesis in evolutionary biology proposes that the eukaryotes originated from a group of prokaryotes called eocytes (later classified as Thermoproteota, a group of archaea). After his team at the University of California, Los Angeles discovered eocytes in 1984, James A. Lake formulated the hypothesis as ""eocyte tree"" that proposed eukaryotes as part of archaea. Lake hypothesised the tree of life as having only two primary branches: Parkaryoates that include Bacteria and Archaea, and karyotes that comprise Eukaryotes and eocytes. Parts of this early hypothesis were revived in a newer two-domain system of biological classification which named the primary domains as Archaea and Bacteria. Lake's hypothesis was based on an analysis of the structural components of ribosomes. It was largely ignored, being overshadowed by the three-domain system which relied on more precise genetic analysis. In 1990, Carl Woese and his colleagues proposed that cellular life consists of three domains – Eucarya, Bacteria, and Archaea – based on the ribosomal RNA sequences. The three-domain concept was widely accepted in genetics, and became the presumptive classification system for high-level taxonomy, and was promulgated in many textbooks. 
Resurgence of archaea research after the 2000s, using advanced genetic techniques, and later discoveries of new groups of archaea revived the eocyte hypothesis; consequently, the two-domain system has found wider acceptance. Description In 1984, James A. Lake, Michael W. Clark, Eric Henderson, and Melanie Oakes of the University of California, Los Angeles described a new group of prokaryotic organisms designated as ""a group of sulfur-dependent bacteria."" Based on the structure and composition of their ribosomal subunits, they found that these organisms were different from other prokaryotes, bacteria and archaea, known at the time. They named them eocytes (for ""dawn cells"") and proposed a new biological kingdom Eocyta. According to this disc" https://en.wikipedia.org/wiki/Adjacent%20channel%20power%20ratio,"Adjacent Channel Power Ratio (ACPR) is ratio between the total power of adjacent channel (intermodulation signal) to the main channel's power (useful signal). Ratio The ratio between the total power adjacent channel (intermodulation signal) to the main channel's power (useful signal). There are two ways of measuring ACPR. The first way is by finding 10*log of the ratio of the total output power to the power in adjacent channel. The second (and much more popular method) is to find the ratio of the output power in a smaller bandwidth around the center of carrier to the power in the adjacent channel. The smaller bandwidth is equal to the bandwidth of the adjacent channel signal. Second way is more popular, because it can be measured easily. ACPR is desired to be as low as possible. A high ACPR indicates that significant spectral spreading has occurred. See also Spectral leakage Spread spectrum" https://en.wikipedia.org/wiki/Open%20architecture,"Open architecture is a type of computer architecture or software architecture intended to make adding, upgrading, and swapping components with other computers easy. For example, the IBM PC, Amiga 500 and Apple IIe have an open architecture supporting plug-in cards, whereas the Apple IIc computer has a closed architecture. Open architecture systems may use a standardized system bus such as S-100, PCI or ISA or they may incorporate a proprietary bus standard such as that used on the Apple II, with up to a dozen slots that allow multiple hardware manufacturers to produce add-ons, and for the user to freely install them. By contrast, closed architectures, if they are expandable at all, have one or two ""expansion ports"" using a proprietary connector design that may require a license fee from the manufacturer, or enhancements may only be installable by technicians with specialized tools or training. Computer platforms may include systems with both open and closed architectures. The Mac mini and Compact Macintosh are closed; the Macintosh II and Power Mac G5 are open. Most desktop PCs are open architecture. Similarly, an open software architecture is one in which additional software modules can be added to the basic framework provided by the architecture. Open APIs (Application Programming Interfaces) to major software products are the way in which the basic functionality of such products can be modified or extended. The Google APIs are examples. A second type of open software architecture consists of the messages that can flow between computer systems. These messages have a standard structure that can be modified or extended per agreements between the computer systems. An example is IBM's Distributed Data Management Architecture. 
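The adjacent channel power ratio described earlier reduces to a ratio of band powers, which is easy to sketch numerically. The Python example below is illustrative only: the sample rate, channel edges and test signal are invented, and a real measurement would integrate a calibrated power spectral density rather than a raw periodogram.

import numpy as np

def band_power(freqs, psd, f_lo, f_hi):
    """Sum PSD bins inside one channel (equal bin widths, so proportional to power)."""
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.sum(psd[mask])

fs = 1_000_000.0                                   # sample rate, Hz (assumed)
t = np.arange(200_000) / fs
x = np.sin(2 * np.pi * 100_000 * t) + 0.01 * np.random.randn(t.size)
psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * t.size)  # crude periodogram estimate
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Main channel centred on the carrier; adjacent channel of equal bandwidth.
p_main = band_power(freqs, psd, 90_000, 110_000)
p_adj = band_power(freqs, psd, 110_000, 130_000)
acpr_db = 10 * np.log10(p_adj / p_main)            # lower (more negative) is better
print(f"ACPR ~ {acpr_db:.1f} dB")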
Open architecture allows potential users to see inside all or parts of the architecture without any proprietary constraints. Typically, an open architecture publishes all or parts of its architecture that the developer or integrator wants to " https://en.wikipedia.org/wiki/Five-bar%20linkage,"In kinematics, a five-bar linkage is a mechanism with two degrees of freedom that is constructed from five links that are connected together in a closed chain. All links are connected to each other by five joints in series forming a loop. One of the links is the ground or base. This configuration is also called a pantograph, however, it is not to be confused with the parallelogram-copying linkage pantograph. The linkage can be a one-degree-of-freedom mechanism if two gears are attached to two links and are meshed together, forming a geared five-bar mechanism. Robotic configuration When controlled motors actuate the linkage, the whole system (a mechanism and its actuators) becomes a robot. This is usually done by placing two servomotors (to control the two degrees of freedom) at the joints A and B, controlling the angle of the links L2 and L5. L1 is the grounded link. In this configuration, the controlled endpoint or end-effector is the point D, where the objective is to control its x and y coordinates in the plane in which the linkage resides. The angles theta 1 and theta 2 can be calculated as a function of the x,y coordinates of point D using trigonometric functions. This robotic configuration is a parallel manipulator. It is a parallel configuration robot as it is composed of two controlled serial manipulators connected to the endpoint. Unlike a serial manipulator, this configuration has the advantage of having both motors grounded at the base link. As the motor can be quite massive, this significantly decreases the total moment of inertia of the linkage and improves backdrivability for haptic feedback applications. On the other hand, workspace reached by the endpoint is usually significantly smaller than that of a serial manipulator. Kinematics and dynamics Both the forward and inverse kinematics of this robotic configuration can be found in closed-form equations through geometric relationships. Different methods of finding both have been done by Campion and" https://en.wikipedia.org/wiki/MISRA%20C,"MISRA C is a set of software development guidelines for the C programming language developed by The MISRA Consortium. Its aims are to facilitate code safety, security, portability and reliability in the context of embedded systems, specifically those systems programmed in ISO C / C90 / C99. There is also a set of guidelines for MISRA C++ not covered by this article. History Draft: 1997 First edition: 1998 (rules, required/advisory) Second edition: 2004 (rules, required/advisory) Third edition: 2012 (directives; rules, Decidable/Undecidable) MISRA compliance: 2016, updated 2020 For the first two editions of MISRA-C (1998 and 2004) all Guidelines were considered as Rules. With the publication of MISRA C:2012 a new category of Guideline was introduced - the Directive whose compliance is more open to interpretation, or relates to process or procedural matters. Adoption Although originally specifically targeted at the automotive industry, MISRA C has evolved as a widely accepted model for best practices by leading developers in sectors including automotive, aerospace, telecom, medical devices, defense, railway, and others. 
For example: The Joint Strike Fighter project C++ Coding Standards are based on MISRA-C:1998. The NASA Jet Propulsion Laboratory C Coding Standards are based on MISRA-C:2004. ISO 26262 Functional Safety - Road Vehicles cites MISRA C as being an appropriate sub-set of the C language: ISO 26262-6:2011 Part 6: Product development at the software level cites MISRA-C:2004 and MISRA AC AGC. ISO 26262-6:2018 Part 6: Product development at the software level cites MISRA C:2012. The AUTOSAR General Software Specification (SRS_BSW_00007) likewise cites MISRA C: The AUTOSAR 4.2 General Software Specification requires that If the BSW Module implementation is written in C language, then it shall conform to the MISRA C:2004 Standard. The AUTOSAR 4.3 General Software Specification requires that If the BSW Module implementation is written in C la" https://en.wikipedia.org/wiki/NCR%205380,"The NCR 5380 is an early SCSI controller chip developed by NCR Microelectronics. It was popular due to its simplicity and low cost. The 5380 was used in the Macintosh Plus and in numerous SCSI cards for personal computers, including the Amiga and Atari TT. The 5380 was second sourced by several chip makers, including AMD and Zilog. The 5380 was designed by engineers at the NCR plant then located in Wichita, Kansas, and initially fabricated by NCR Microelectronics in Colorado Springs, Colorado. It was the first single-chip implementation of the SCSI-1 protocol. The NCR 5380 also made a significant appearance in Digital Equipment Corporation's VAX computers, where it was featured on various Q-Bus modules and as an integrated SCSI controller in numerous MicroVAX, VAXstation and VAXserver computers. Many UMAX SCSI optical scanners also contain the 53C80 chip interfaced to an Intel 8031-series microcontroller. Single-chip SCSI controller NCR 53c400 used SCSI 5380 core. See also NCR 53C9x" https://en.wikipedia.org/wiki/Pipeline%20video%20inspection,"Pipeline video inspection is a form of telepresence used to visually inspect the interiors of pipelines, plumbing systems, and storm drains. A common application is for a plumber to determine the condition of small diameter sewer lines and household connection drain pipes. Older sewer lines of small diameter, typically , are made by the union of a number of short sections. The pipe segments may be made of cast iron, with to sections, but are more often made of vitrified clay pipe (VCP), a ceramic material, in , & sections. Each iron or clay segment will have an enlargement (a ""bell"") on one end to receive the end of the adjacent segment. Roots from trees and vegetation may work into the joins between segments and can be forceful enough to break open a larger opening in terra cotta or corroded cast iron. Eventually a root ball will form that will impede the flow and this may cleaned out by a cutter mechanism or plumber's snake and subsequently inhibited by use of a chemical foam - a rooticide. With modern video equipment, the interior of the pipe may be inspected - this is a form of non-destructive testing. A small diameter collector pipe will typically have a cleanout access at the far end and will be several hundred feet long, terminating at a manhole. Additional collector pipes may discharge at this manhole and a pipe (perhaps of larger diameter) will carry the effluent to the next manhole, and so forth to a pump station or treatment plant. Without regular inspection of public sewers, a significant amount of waste may accumulate unnoticed until the system fails. 
In order to prevent resulting catastrophic events such as pipe bursts and raw sewage flooding onto city streets, municipalities usually conduct pipeline video inspections as a precautionary measure. Inspection equipment Service truck The service truck contains a power supply in the form of a small generator, a small air-conditioned compartment containing video monitoring and recording equipment, " https://en.wikipedia.org/wiki/Potential%20gradient,"In physics, chemistry and biology, a potential gradient is the local rate of change of the potential with respect to displacement, i.e. spatial derivative, or gradient. This quantity frequently occurs in equations of physical processes because it leads to some form of flux. Definition One dimension The simplest definition for a potential gradient F in one dimension is the following: where is some type of scalar potential and is displacement (not distance) in the direction, the subscripts label two different positions , and potentials at those points, . In the limit of infinitesimal displacements, the ratio of differences becomes a ratio of differentials: The direction of the electric potential gradient is from to . Three dimensions In three dimensions, Cartesian coordinates make it clear that the resultant potential gradient is the sum of the potential gradients in each direction: where are unit vectors in the directions. This can be compactly written in terms of the gradient operator , although this final form holds in any curvilinear coordinate system, not just Cartesian. This expression represents a significant feature of any conservative vector field , namely has a corresponding potential . Using Stokes' theorem, this is equivalently stated as meaning the curl, denoted ∇×, of the vector field vanishes. Physics Newtonian gravitation In the case of the gravitational field , which can be shown to be conservative, it is equal to the gradient in gravitational potential : There are opposite signs between gravitational field and potential, because the potential gradient and field are opposite in direction: as the potential increases, the gravitational field strength decreases and vice versa. Electromagnetism In electrostatics, the electric field is independent of time , so there is no induction of a time-dependent magnetic field by Faraday's law of induction: which implies is the gradient of the electric potential , identical to the classic" https://en.wikipedia.org/wiki/Load%20profile,"In electrical engineering, a load profile is a graph of the variation in the electrical load versus time. A load profile will vary according to customer type (typical examples include residential, commercial and industrial), temperature and holiday seasons. Power producers use this information to plan how much electricity they will need to make available at any given time. Teletraffic engineering uses a similar load curve. Power generation In a power system, a load curve or load profile is a chart illustrating the variation in demand/electrical load over a specific time. Generation companies use this information to plan how much power they will need to generate at any given time. A load duration curve is similar to a load curve. The information is the same but is presented in a different form. These curves are useful in the selection of generator units for supplying electricity. Electricity distribution In an electricity distribution grid, the load profile of electricity usage is important to the efficiency and reliability of power transmission. 
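As a small illustration of how a load profile is used, the sketch below computes the peak demand, average demand and load factor (average divided by peak) of a daily profile from hourly readings; the demand values are invented for illustration.

# Minimal sketch: load factor of a daily load profile from hourly demand samples.
hourly_kw = [310, 295, 290, 288, 292, 330, 420, 540, 610, 640, 655, 660,
             650, 645, 640, 660, 700, 760, 780, 720, 620, 510, 420, 350]

peak_kw = max(hourly_kw)
average_kw = sum(hourly_kw) / len(hourly_kw)
load_factor = average_kw / peak_kw          # 1.0 would mean a perfectly flat profile
energy_kwh = sum(hourly_kw)                 # 1-hour intervals, so kW values sum to kWh

print(f"peak {peak_kw} kW, average {average_kw:.0f} kW, "
      f"load factor {load_factor:.2f}, energy {energy_kwh} kWh")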
The power transformer or battery-to-grid are critical aspects of power distribution and sizing and modelling of batteries or transformers depends on the load profile. The factory specification of transformers for the optimization of load losses versus no-load losses is dependent directly on the characteristics of the load profile that the transformer is expected to be subjected to. This includes such characteristics as average load factor, diversity factor, utilization factor, and demand factor, which can all be calculated based on a given load profile. On the power market so-called EFA blocks are used to specify the traded forward contract on the delivery of a certain amount of electrical energy at a certain time. Retail energy markets In retail energy markets, supplier obligations are settled on an hourly or subhourly basis. For most customers, consumption is measured on a monthly basis, based on meter reading s" https://en.wikipedia.org/wiki/List%20of%20gene%20families,"This is a list of gene families or gene complexes, i.e. sets of genes which are related ancestrally and often serve similar biological functions. These gene families typically encode functionally related proteins, and sometimes the term gene families is a shorthand for the sets of proteins that the genes encode. They may or may not be physically adjacent on the same chromosome. Regulatory protein gene families 14-3-3 protein family Achaete-scute complex (neuroblast formation) FOX proteins (forkhead box proteins) Families containing homeobox domains DLX gene family Hox gene family POU family Krüppel-type zinc finger (ZNF) MADS-box gene family NOTCH2NL P300-CBP coactivator family SOX gene family Immune system proteins Immunoglobulin superfamily Major histocompatibility complex (MHC) Motor proteins Dynein Kinesin Myosin Signal transducing proteins G-proteins MAP Kinase Olfactory receptor Peroxiredoxin Receptor tyrosine kinases Transporters ABC transporters Antiporter Aquaporins Other families See also Protein family Housekeeping gene F Biological classification Gene families" https://en.wikipedia.org/wiki/Pharmacometabolomics,"Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Alternatively, pharmacometabolomics can be applied to measure metabolite levels following the administration of a pharmaceutical compound, in order to monitor the effects of the compound on certain metabolic pathways(pharmacodynamics). This provides detailed mapping of drug effects on metabolism and the pathways that are implicated in mechanism of variation of response to treatment. In addition, the metabolic profile of an individual at baseline (metabotype) provides information about how individuals respond to treatment and highlights heterogeneity within a disease state. All three approaches require the quantification of metabolites found in bodily fluids and tissue, such as blood or urine, and can be used in the assessment of pharmaceutical treatment options for numerous disease states. Goals of Pharmacometabolomics Pharmacometabolomics is thought to provide information that complements that gained from other omics, namely genomics, transcriptomics, and proteomics. 
Looking at the characteristics of an individual down through these different levels of detail, there is an increasingly more accurate prediction of a person's ability to respond to a pharmaceutical compound. The genome, made up of 25 000 genes, can indicate possible errors in drug metabolism; the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed; and the proteome, >10,000,000 members, depicts which proteins are active in the body to carry out these functions. Pharmacometabolomics complements the omics with direct measureme" https://en.wikipedia.org/wiki/Gieseking%20manifold,"In mathematics, the Gieseking manifold is a cusped hyperbolic 3-manifold of finite volume. It is non-orientable and has the smallest volume among non-compact hyperbolic manifolds, having volume approximately . It was discovered by . The volume is called Gieseking constant and has a closed-form, with Clausen function . Compare to the related Catalan's constant which also manifests as a volume, The Gieseking manifold can be constructed by removing the vertices from a tetrahedron, then gluing the faces together in pairs using affine-linear maps. Label the vertices 0, 1, 2, 3. Glue the face with vertices 0,1,2 to the face with vertices 3,1,0 in that order. Glue the face 0,2,3 to the face 3,2,1 in that order. In the hyperbolic structure of the Gieseking manifold, this ideal tetrahedron is the canonical polyhedral decomposition of David B. A. Epstein and Robert C. Penner. Moreover, the angle made by the faces is . The triangulation has one tetrahedron, two faces, one edge and no vertices, so all the edges of the original tetrahedron are glued together. The Gieseking manifold has a double cover homeomorphic to the figure-eight knot complement. The underlying compact manifold has a Klein bottle boundary, and the first homology group of the Gieseking manifold is the integers. The Gieseking manifold is a fiber bundle over the circle with fiber the once-punctured torus and monodromy given by The square of this map is Arnold's cat map and this gives another way to see that the Gieseking manifold is double covered by the complement of the figure-eight knot. See also List of mathematical constants" https://en.wikipedia.org/wiki/High%20availability,"High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period. Modernization has resulted in an increased reliance on these systems. For example, hospitals and data centers require high availability of their systems to perform routine daily activities. Availability refers to the ability of the user community to obtain a service or good, access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is – from the user's point of view – unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable. Resilience High availability is a property of network resilience, the ability to ""provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."" Threats and challenges for services can range from simple misconfiguration over large scale natural disasters to targeted attacks. As such, network resilience touches a very wide range of topics. 
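Availability targets of the kind discussed above are often quoted as "nines". The short Python sketch below converts an availability percentage into the downtime it permits per year; the figures are generic arithmetic, not values taken from the article, and a 365-day year is assumed.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_percent: float) -> float:
    """Allowed downtime per year for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100.0)

for nines in (99.0, 99.9, 99.99, 99.999):
    m = downtime_minutes_per_year(nines)
    print(f"{nines:>7}% availability -> {m:8.1f} min/year ({m / 60:6.2f} h)")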
In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified and appropriate resilience metrics have to be defined for the service to be protected. The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network. These services include: supporting distributed processing supportin" https://en.wikipedia.org/wiki/Ultramicrotomy,"Ultramicrotomy is a method for cutting specimens into extremely thin slices, called ultra-thin sections, that can be studied and documented at different magnifications in a transmission electron microscope (TEM). It is used mostly for biological specimens, but sections of plastics and soft metals can also be prepared. Sections must be very thin because the 50 to 125 kV electrons of the standard electron microscope cannot pass through biological material much thicker than 150 nm. For best resolutions, sections should be from 30 to 60 nm. This is roughly the equivalent to splitting a 0.1 mm-thick human hair into 2,000 slices along its diameter, or cutting a single red blood cell into 100 slices. Ultramicrotomy process Ultra-thin sections of specimens are cut using a specialized instrument called an ""ultramicrotome"". The ultramicrotome is fitted with either a diamond knife, for most biological ultra-thin sectioning, or a glass knife, often used for initial cuts. There are numerous other pieces of equipment involved in the ultramicrotomy process. Before selecting an area of the specimen block to be ultra-thin sectioned, the technician examines semithin or ""thick"" sections range from 0.5 to 2 μm. These thick sections are also known as survey sections and are viewed under a light microscope to determine whether the right area of the specimen is in a position for thin sectioning. ""Ultra-thin"" sections from 50 to 100 nm thick are able to be viewed in the TEM. Tissue sections obtained by ultramicrotomy are compressed by the cutting force of the knife. In addition, interference microscopy of the cut surface of the blocks reveals that the sections are often not flat. With Epon or Vestopal as embedding medium the ridges and valleys usually do not exceed 0.5 μm in height, i.e., 5–10 times the thickness of ordinary sections (1). A small sample is taken from the specimen to be investigated. Specimens may be from biological matter, like animal or plant tissue, or from inorgani" https://en.wikipedia.org/wiki/Nanoprobing,"Nanoprobing is method of extracting device electrical parameters through the use of nanoscale tungsten wires, used primarily in the semiconductor industry. The characterization of individual devices is instrumental to engineers and integrated circuit designers during initial product development and debug. It is commonly utilized in device failure analysis laboratories to aid with yield enhancement, quality and reliability issues and customer returns. 
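As a quick check of the thickness comparison in the ultramicrotomy passage above (splitting a 0.1 mm human hair into 2,000 slices), the arithmetic gives 50 nm per slice, consistent with the stated 30 to 60 nm section range:

hair_thickness_nm = 0.1e-3 / 1e-9   # 0.1 mm expressed in nanometres
print(hair_thickness_nm / 2000)     # 50.0 nm per slice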
Commercially available nanoprobing systems are integrated into either a vacuum-based scanning electron microscope (SEM) or atomic force microscope (AFM). Nanoprobing systems that are based on AFM technology are referred to as Atomic Force nanoProbers (AFP). Principles and operation AFM based nanoprobers, enable up to eight probe tips to be scanned to generate high resolution AFM topography images, as well as Conductive AFM, Scanning Capacitance, and Electrostatic Force Microscopy images. Conductive AFM provides pico-amp resolution to identify and localize electrical failures such as shorts, opens, resistive contacts and leakage paths, enabling accurate probe positioning for current-voltage measurements. AFM based nanoprobers enable nanometer scale device defect localization and accurate transistor device characterization without the physical damage and electrical bias induced by high energy electron beam exposure. For SEM based nanoprobers, the ultra-high resolution of the microscopes that house the nanoprobing system allow the operator to navigate the probe tips with precise movement, allowing the user to see exactly where the tips will be landed, in real time. Existing nanoprobe needles or “probe tips” have a typical end-point radius ranging from 5 to 35 nm. The fine tips enable access to individual contacts nodes of modern IC transistors. Navigation of the probe tips in SEM based nanoprobers are typically controlled by precision piezoelectric manipulators. Typical systems have anywhere from 2 to 8 probe manipulato" https://en.wikipedia.org/wiki/Workgroup%20%28computer%20networking%29,"In computer networking a work group is collection of computers connected on a LAN that share the common resources and responsibilities. Workgroup is Microsoft's term for a peer-to-peer local area network. Computers running Microsoft operating systems in the same work group may share files, printers, or Internet connection. Work group contrasts with a domain, in which computers rely on centralized authentication. See also Windows for Workgroups – the earliest version of Windows to allow a work group Windows HomeGroup – a feature introduced in Windows 7 and later removed in Windows 10 (Version 1803) that allows work groups to share contents more easily Browser service – the service enabled 'browsing' all the resources in work groups Peer Name Resolution Protocol (PNRP) - IPv6-based dynamic name publication and resolution" https://en.wikipedia.org/wiki/Hybrid%20incompatibility,"Hybrid incompatibility is a phenomenon in plants and animals, wherein offspring produced by the mating of two different species or populations have reduced viability and/or are less able to reproduce. Examples of hybrids include mules and ligers from the animal world, and subspecies of the Asian rice crop Oryza sativa from the plant world. Multiple models have been developed to explain this phenomenon. Recent research suggests that the source of this incompatibility is largely genetic, as combinations of genes and alleles prove lethal to the hybrid organism. Incompatibility is not solely influenced by genetics, however, and can be affected by environmental factors such as temperature. The genetic underpinnings of hybrid incompatibility may provide insight into factors responsible for evolutionary divergence between species. Background Hybrid incompatibility occurs when the offspring of two closely related species are not viable or suffer from infertility. 
Charles Darwin posited that hybrid incompatibility is not a product of natural selection, stating that the phenomenon is an outcome of the hybridizing species diverging, rather than something that is directly acted upon by selective pressures. The underlying causes of the incompatibility can be varied: earlier research focused on things like changes in ploidy in plants. More recent research has taken advantage of improved molecular techniques and has focused on the effects of genes and alleles in the hybrid and its parents. Dobzhansky-Muller model The first major breakthrough in the genetic basis of hybrid incompatibility is the Dobzhansky-Muller model, a combination of findings by Theodosius Dobzhansky and Joseph Muller between 1937 and 1942. The model provides an explanation as to why a negative fitness effect like hybrid incompatibility is not selected against. By hypothesizing that the incompatibility arose from alterations at two or more loci, rather than one, the incompatible alleles are in one hybrid in" https://en.wikipedia.org/wiki/Math%20rock,"Math rock is a style of alternative and indie rock with roots in bands such as King Crimson and Rush. It is characterized by complex, atypical rhythmic structures (including irregular stopping and starting), counterpoint, odd time signatures, and extended chords. It bears similarities to post-rock. Characteristics Math rock is typified by its rhythmic complexity, seen as mathematical in character by listeners and critics. While most rock music uses a meter (however accented or syncopated), math rock makes use of more non-standard, frequently changing time signatures such as , , , or . As in traditional rock, the sound is most often dominated by guitars and drums. However, drums play a greater role in math rock in providing driving, complex rhythms. Math rock guitarists make use of tapping techniques and loop pedals to build on these rhythms, as illustrated by songs like those of ""math rock supergroup"" Battles. Lyrics are generally not the focus of math rock; the voice is treated as just another instrument in the mix. Often, vocals are not overdubbed, and are positioned less prominently, as in the recording style of Steve Albini. Many of math rock's best-known groups are entirely instrumental such as Don Caballero or Hella. The term began as a joke but has developed into the accepted name for the musical style. One advocate of this is Matt Sweeney, singer with Chavez, a group often linked to the math rock scene. Despite this, not all critics see math rock as a serious sub-genre of rock. A significant intersection exists between math rock and emo, exemplified by bands such as Tiny Moving Parts or American Football, whose sound has been described as ""twinkly, mathy rock, a sound that became one of the defining traits of the emo scene throughout the 2000s"". Bands Early The albums Red and Discipline by King Crimson, Spiderland by Slint are generally considered seminal influences on the development of math rock. The Canadian punk rock group Nomeansno (founded" https://en.wikipedia.org/wiki/Vector%20calculus%20identities,"The following are important identities involving derivatives and integrals in vector calculus. Operator notation Gradient For a function in three-dimensional Cartesian coordinate variables, the gradient is the vector field: where i, j, k are the standard unit vectors for the x, y, z-axes. 
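The Cartesian gradient formula referred to just above was stripped during extraction; for reference, the standard expression is sketched here in LaTeX, together with the n-variable form taken up immediately below (same i, j, k unit vector notation assumed).

\nabla f \;=\; \frac{\partial f}{\partial x}\,\mathbf{i} \;+\; \frac{\partial f}{\partial y}\,\mathbf{j} \;+\; \frac{\partial f}{\partial z}\,\mathbf{k},
\qquad\text{and, for } f(x_1,\dots,x_n),\qquad
\nabla f \;=\; \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\,\mathbf{e}_i .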
More generally, for a function of n variables , also called a scalar field, the gradient is the vector field: where are orthogonal unit vectors in arbitrary directions. As the name implies, the gradient is proportional to and points in the direction of the function's most rapid (positive) change. For a vector field , also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix: For a tensor field of any order k, the gradient is a tensor field of order k + 1. For a tensor field of order k > 0, the tensor field of order k + 1 is defined by the recursive relation where is an arbitrary constant vector. Divergence In Cartesian coordinates, the divergence of a continuously differentiable vector field is the scalar-valued function: As the name implies the divergence is a measure of how much vectors are diverging. The divergence of a tensor field of non-zero order k is written as , a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity, where is the directional derivative in the direction of multiplied by its magnitude. Specifically, for the outer product of two vectors, For a tensor field of order k > 1, the tensor field of order k − 1 is defined by the recursive relation where is an arbitrary constant vector. Curl In Cartesian coordinates, for the curl is the vector field: where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively. As the name implies the curl is a measure of how much nearby vectors te" https://en.wikipedia.org/wiki/Circuit%20topology%20%28electrical%29,"The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Different specific values or ratings of the components are regarded as being the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. There may be numerous physical layouts and circuit diagrams that all amount to the same topology. Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network. Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory. Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks. 
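Because the topology of a circuit of two-terminal devices is just a graph, as noted above, it can be captured with nothing more than an edge list. The sketch below uses invented netlists and fixed node labels; two different-looking netlists reduce to the same topology once component types and values are ignored (a full comparison with relabelled nodes would be a graph-isomorphism test).

from collections import Counter

def topology(netlist):
    """netlist: list of (component_name, node_a, node_b) tuples.
    Returns the multiset of node pairs, ignoring component names/values."""
    return Counter(frozenset((a, b)) for _, a, b in netlist)

# A simple three-component network drawn two different ways (hypothetical netlists).
layout_1 = [("R1", "in", "mid"), ("R2", "mid", "gnd"), ("V1", "in", "gnd")]
layout_2 = [("C7", "gnd", "mid"), ("V9", "gnd", "in"), ("L3", "mid", "in")]

print(topology(layout_1) == topology(layout_2))   # True: same interconnection pattern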
Circuit diagrams The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, imped" https://en.wikipedia.org/wiki/LAVIS%20%28software%29,"LAVIS is a software tool created by the TOOL Corporation, Japan. LAVIS is a ""layout visualisation platform"". It supports a variety of formats such as GDSII, OASIS and LEF/DEF and can be used as a platform for common IC processes." https://en.wikipedia.org/wiki/Plasmid%20preparation,"A plasmid preparation is a method of DNA extraction and purification for plasmid DNA, it is an important step in many molecular biology experiments and is essential for the successful use of plasmids in research and biotechnology. Many methods have been developed to purify plasmid DNA from bacteria. During the purification procedure, the plasmid DNA is often separated from contaminating proteins and genomic DNA. These methods invariably involve three steps: growth of the bacterial culture, harvesting and lysis of the bacteria, and purification of the plasmid DNA. Purification of plasmids is central to molecular cloning. A purified plasmid can be used for many standard applications, such as sequencing and transfections into cells. Growth of the bacterial culture Plasmids are almost always purified from liquid bacteria cultures, usually E. coli, which have been transformed and isolated. Virtually all plasmid vectors in common use encode one or more antibiotic resistance genes as a selectable marker, for example a gene encoding ampicillin or kanamycin resistance, which allows bacteria that have been successfully transformed to multiply uninhibited. Bacteria that have not taken up the plasmid vector are assumed to lack the resistance gene, and thus only colonies representing successful transformations are expected to grow. Bacteria are grown under favourable conditions. Harvesting and lysis of the bacteria There are several methods for cell lysis, including alkaline lysis, mechanical lysis, and enzymatic lysis. Alkaline lysis The most common method is alkaline lysis, which involves the use of a high concentration of a basic solution, such as sodium hydroxide, to lyse the bacterial cells. When bacteria are lysed under alkaline conditions (pH 12.0–12.5) both chromosomal DNA and protein are denatured; the plasmid DNA however, remains stable. Some scientists reduce the concentration of NaOH used to 0.1M in order to reduce the occurrence of ssDNA. After the addition o" https://en.wikipedia.org/wiki/Budan%27s%20theorem,"In mathematics, Budan's theorem is a theorem for bounding the number of real roots of a polynomial in an interval, and computing the parity of this number. It was published in 1807 by François Budan de Boislaurent. A similar theorem was published independently by Joseph Fourier in 1820. Each of these theorems is a corollary of the other. Fourier's statement appears more often in the literature of 19th century and has been referred to as Fourier's, Budan–Fourier, Fourier–Budan, and even Budan's theorem Budan's original formulation is used in fast modern algorithms for real-root isolation of polynomials. Sign variation Let be a finite sequence of real numbers. A sign variation or sign change in the sequence is a pair of indices such that and either or for all such that . 
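The sign-variation count defined above (and restated in plain language just below) is easy to express as code. A minimal Python sketch, counting sign changes while ignoring zero entries:

def sign_variations(seq):
    """Number of sign changes in a sequence of reals, ignoring zero entries."""
    signs = [1 if x > 0 else -1 for x in seq if x != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Coefficients of p(x) = x^3 + x^2 - x - 1, highest degree first.
print(sign_variations([1, 1, -1, -1]))   # 1, so by Descartes' rule p has exactly one positive root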
In other words, a sign variation occurs in the sequence at each place where the signs change, when ignoring zeros. For studying the real roots of a polynomial, the number of sign variations of several sequences may be used. For Budan's theorem, it is the sequence of the coefficients. For the Fourier's theorem, it is the sequence of values of the successive derivatives at a point. For Sturm's theorem it is the sequence of values at a point of the Sturm sequence. Descartes' rule of signs All results described in this article are based on Descartes' rule of signs. If is a univariate polynomial with real coefficients, let us denote by the number of its positive real roots, counted with their multiplicity, and by the number of sign variations in the sequence of its coefficients. Descartes's rule of signs asserts that is a nonnegative even integer. In particular, if , then one has . Budan's statement Given a univariate polynomial with real coefficients, let us denote by the number of real roots, counted with their multiplicities, of in a half-open interval (with real numbers). Let us denote also by the number of sign variations in the sequence of the coefficients of the polynomial" https://en.wikipedia.org/wiki/Derivation%20of%20the%20Routh%20array,"The Routh array is a tabular method permitting one to establish the stability of a system using only the coefficients of the characteristic polynomial. Central to the field of control systems design, the Routh–Hurwitz theorem and Routh array emerge by using the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. The Cauchy index Given the system: Assuming no roots of lie on the imaginary axis, and letting = The number of roots of with negative real parts, and = The number of roots of with positive real parts then we have Expressing in polar form, we have where and from (2) note that where Now if the ith root of has a positive real part, then (using the notation y=(RE[y],IM[y])) and and Similarly, if the ith root of has a negative real part, and and From (9) to (11) we find that when the ith root of has a positive real part, and from (12) to (14) we find that when the ith root of has a negative real part. Thus, So, if we define then we have the relationship and combining (3) and (17) gives us and Therefore, given an equation of of degree we need only evaluate this function to determine , the number of roots with negative real parts and , the number of roots with positive real parts. In accordance with (6) and Figure 1, the graph of vs , varying over an interval (a,b) where and are integer multiples of , this variation causing the function to have increased by , indicates that in the course of travelling from point a to point b, has ""jumped"" from to one more time than it has jumped from to . Similarly, if we vary over an interval (a,b) this variation causing to have decreased by , where again is a multiple of at both and , implies that has jumped from to one more time than it has jumped from to as was varied over the said interval. Thus, is times the difference between the number of points at which jumps from to and the number of points at which jumps from to as " https://en.wikipedia.org/wiki/Codex%20Alimentarius,"The is a collection of internationally recognized standards, codes of practice, guidelines, and other recommendations published by the Food and Agriculture Organization of the United Nations relating to food, food production, food labeling, and food safety. 
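The Routh array whose derivation is sketched earlier in this passage is straightforward to tabulate. Below is a minimal Python sketch of the standard construction for a real-coefficient polynomial, counting sign changes in the first column to obtain the number of right-half-plane roots; it handles only the regular case (it raises if a zero pivot appears), and the function names are illustrative.

def routh_array(coeffs):
    """Standard Routh table for a0*s^n + a1*s^(n-1) + ...; regular case only."""
    n = len(coeffs)
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    rows[1] += [0.0] * (width - len(rows[1]))
    for r in range(2, n):
        prev, prev2 = rows[r - 1], rows[r - 2]
        if prev[0] == 0:
            raise ZeroDivisionError("zero pivot: special-case rules needed")
        row = [(prev[0] * prev2[c + 1] - prev2[0] * prev[c + 1]) / prev[0]
               for c in range(width - 1)] + [0.0]
        rows.append(row)
    return rows

def rhp_roots(coeffs):
    """Sign changes in the first column = number of roots with positive real part."""
    col = [row[0] for row in routh_array(coeffs) if row[0] != 0]
    return sum(1 for a, b in zip(col, col[1:]) if (a > 0) != (b > 0))

# s^3 + s^2 + 2s + 24 = (s + 3)(s^2 - 2s + 8) has one pair of right-half-plane roots.
print(rhp_roots([1, 1, 2, 24]))   # 2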
History and governance Its name is derived from the Codex Alimentarius Austriacus. Its texts are developed and maintained by the Codex Alimentarius Commission (CAC), a body established in early November 1961 by the Food and Agriculture Organization of the United Nations (FAO), was joined by the World Health Organization (WHO) in June 1962, and held its first session in Rome in October 1963. The Commission's main goals are to protect the health of consumers, to facilitate international trade, and ensure fair practices in the international food trade. The CAC is an intergovernmental organization; the member states of the FAO and WHO send delegations to the CAC. As of 2021, there were 189 members of the CAC (188 member countries plus one member organization, the European Union (EU) and 239 Codex observers (59 intergovernmental organizations, 164 non-governmental organizations, and 16 United Nations organizations). The CAC develops food standards on scientific evidence furnished by the scientific committees of the FAO and WHO; the oldest of these, the Joint FAO/WHO Expert Committee on Food Additives (JECFA), was established in 1956 and predates the establishment of the CAC itself. According to a 2013 study, the CAC's primary functions are ""establishing international food standards for approved food additives providing maximum levels in foods, maximum limits for contaminants and toxins, maximum residue limits for pesticides and for veterinary drugs used in veterinary animals, and establishing hygiene and technological function practice codes"". The CAC does not have regulatory authority, and the Codex Alimentarius is a reference guide, not an enforceable standard on its own. However, several nations adopt the Co" https://en.wikipedia.org/wiki/Lichenology,"Lichenology is the branch of mycology that studies the lichens, symbiotic organisms made up of an intimate symbiotic association of a microscopic alga (or a cyanobacterium) with a filamentous fungus. Study of lichens draws knowledge from several disciplines: mycology, phycology, microbiology and botany. Scholars of lichenology are known as lichenologists. History The beginnings Lichens as a group have received less attention in classical treatises on botany than other groups although the relationship between humans and some species has been documented from early times. Several species have appeared in the works of Dioscorides, Pliny the Elder and Theophrastus although the studies are not very deep. During the first centuries of the modern age they were usually put forward as examples of spontaneous generation and their reproductive mechanisms were totally ignored. For centuries naturalists had included lichens in diverse groups until in the early 18th century a French researcher Joseph Pitton de Tournefort in his Institutiones Rei Herbariae grouped them into their own genus. He adopted the Latin term lichen, which had already been used by Pliny who had imported it from Theophrastus but up until then this term had not been widely employed. The original meaning of the Greek word λειχήν (leichen) was moss that in its turn derives from the Greek verb λείχω (liekho) to suck because of the great ability of these organisms to absorb water. In its original use the term signified mosses, liverworts as well as lichens. 
Some forty years later Dillenius in his Historia Muscorum made the first division of the group created by Tournefort separating the sub-families Usnea, Coralloides and Lichens in response to the morphological characteristics of the lichen thallus. After the revolution in taxonomy brought in by Linnaeus and his new system of classification lichens are retained in the Plant Kingdom forming a single group Lichen with eight divisions within the group according" https://en.wikipedia.org/wiki/Plasmaron,"In physics, the plasmaron was proposed by Lundqvist in 1967 as a quasiparticle arising in a system that has strong plasmon-electron interactions. In the original work, the plasmaron was proposed to describe a secondary peak (or satellite) in the photoemission spectral function of the electron gas. More precisely it was defined as an additional zero of the quasi-particle equation . The same authors pointed out, in a subsequent work, that this extra solution might be an artifact of the used approximations: A more mathematical discussion is provided. The plasmaron was also studied in more recent works in the literature. It was shown, also with the support of the numerical simulations, that the plasmaron energy is an artifact of the approximation used to numerically compute the spectral function, e.g. solution of the dyson equation for the many body green function with a frequency dependent GW self-energy. This approach give rise to a wrong plasmaron peak instead of the plasmon satellite which can be measured experimentally. Despite this fact, experimental observation of a plasmaron was reported in 2010 for graphene. Also supported by earlier theoretical work. However subsequent works discussed that the theoretical interpretation of the experimental measure was not correct, in agreement with the fact that the plasmaron is only an artifact of the GW self-energy used with the Dyson equation. The artificial nature of the plasmaron peak was also proven via the comparison of experimental and numerical simulations for the photo-emission spectrum of bulk silicon. Other works on plasmaron have been published in the literature. Observation of plasmaron peaks have also been reported in optical measurements of elemental bismuth and in other optical measurements." https://en.wikipedia.org/wiki/Signal%20regeneration,"In telecommunications, signal regeneration is signal processing that restores a signal, recovering its original characteristics. The signal may be electrical, as in a repeater on a T-carrier line, or optical, as in an OEO optical cross-connect. The process is used when it is necessary to change the signal type in order to transmit it via different media. Once it comes back to the original medium the signal is usually required to be regenerated so as to bring it back to its original state. See also Fiber-optic communication#Regeneration" https://en.wikipedia.org/wiki/ILAND%20project,"The iLAND project (middleware for deterministic dynamically reconfigurable networked embedded systems) is a cross-industry research & development project for advanced research in embedded systems. It has been developed with the collaboration of 9 organisations including Industries, SMEs and Universities from Spain, France, Portugal, Netherlands and a university from United States. The project is co-funded by the ARTEMIS Programme related to the topic: 'SP5 Computing Environments for Embedded Systems'. 
Middleware functionalities The merging of the real-time systems and the service-oriented architectures enables more flexible a dynamic distributed systems with real time features. So a number of functionalities have been identified to create a SoA based middleware for deterministic reconfiguration of service-based applications: Service registration/deregistration: Stores in the system the functionalities and the description of the different services. Service discovery: Enables external actor to discover the services currently stored in the system. Service composition: Creates the service-based application on run-time. Service orchestration: Manages the invocation of the different services. Service based admission test: This functionality checks if there are enough resources for the services execution in the distributed system. Resource reservation: This functionality acquires the necessary resources in the host machine and the network. System monitoring: This functionality measures if the resources required for the execution of services are not being exhausted. System reconfiguration: This functionality changes the services currently running on the system by other services providing same functionality. Middleware architecture The architecture of the iLAND middleware consists in two layers. The high level one is the Core Functionality Layer. It is oriented to the management of the real time service model. The low layer creates bridges to the system resourc" https://en.wikipedia.org/wiki/Essentially%20unique,"In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is ""unique"" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of ""sameness"", which is often formalized using an equivalence relation. A related notion is a universal property, where an object is not only essentially unique, but unique up to a unique isomorphism (meaning that it has trivial automorphism group). In general there can be more than one isomorphism between examples of an essentially unique object. Examples Set theory At the most basic level, there is an essentially unique set of any given cardinality, whether one labels the elements or . In this case, the non-uniqueness of the isomorphism (e.g., match 1 to or 1 to ) is reflected in the symmetric group. On the other hand, there is an essentially unique ordered set of any given finite cardinality: if one writes and , then the only order-preserving isomorphism is the one which maps 1 to , 2 to , and 3 to . Number theory The fundamental theorem of arithmetic establishes that the factorization of any positive integer into prime numbers is essentially unique, i.e., unique up to the ordering of the prime factors. Group theory In the context of classification of groups, there is an essentially unique group containing exactly 2 elements. Similarly, there is also an essentially unique group containing exactly 3 elements: the cyclic group of order three. In fact, regardless of how one chooses to write the three elements and denote the group operation, all such groups can be shown to be isomorphic to each other, and hence are ""the same"". On the other hand, there does not exist an essentially unique group with exactly 4 elements, as there are in this case two non-isomorphic groups in total: the cyclic group of order 4 and the Klein four group. 
Measure theory There is an essentially " https://en.wikipedia.org/wiki/Hume%20%28programming%20language%29,"Hume is a functionally based programming language developed at the University of St Andrews and Heriot-Watt University in Scotland since the year 2000. The language name is both an acronym meaning 'Higher-order Unified Meta-Environment' and an honorific to the 18th-century philosopher David Hume. It targets real-time computing embedded systems, aiming to produce a design that is both highly abstract, and yet allows precise extraction of time and space execution costs. This allows guaranteeing the bounded time and space demands of executing programs. Hume combines functional programming ideas with ideas from finite state automata. Automata are used to structure communicating programs into a series of ""boxes"", where each box maps inputs to outputs in a purely functional way using high-level pattern-matching. It is structured as a series of levels, each of which exposes different machine properties. Design model The Hume language design attempts to maintain the essential properties and features required by the embedded systems domain (especially for transparent time and space costing) whilst incorporating as high a level of program abstraction as possible. It aims to target applications ranging from simple microcontrollers to complex real-time systems such as smartphones. This ambitious goal requires incorporating both low-level notions such as interrupt handling, and high-level ones of data structure abstraction etc. Such systems are programmed in widely differing ways, but the language design should accommodate such varying requirements. Hume is a three-layer language: an outer (static) declaration/metaprogramming layer, an intermediate coordination layer describing a static layout of dynamic processes and the associated devices, and an inner layer describing each process as a (dynamic) mapping from patterns to expressions. The inner layer is stateless and purely functional. Rather than attempting to apply cost modeling and correctness proving technology to an " https://en.wikipedia.org/wiki/Computational%20complexity%20of%20mathematical%20operations,"The following tables list the computational complexity of various algorithms for common mathematical operations. Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used. Note: Due to the variety of multiplication algorithms, below stands in for the complexity of the chosen multiplication algorithm. Arithmetic functions This table lists the complexity of mathematical operations on integers. On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine it is possible to multiply two -bit numbers in time O(n). Algebraic functions Here we consider operations over polynomials and denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume them to be machine integers. Special functions Many of the methods in this section are given in Borwein & Borwein. Elementary functions The elementary functions are constructed by composing arithmetic operations, the exponential function (), the natural logarithm (), trigonometric functions (), and their inverses. 
The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either or in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions. Below, the size refers to the number of digits of precision at which the function is to be evaluated. It is not known whether is the optimal complexity for elementary functions. The best known lower bound is the trivial bound . Non-elementary functions Mathematical constants This table gives the complexity of computing approximations to the given constants to correct digits. Number theory Algorithms for number theoretical cal" https://en.wikipedia.org/wiki/IEC%2061131,"IEC 61131 is an IEC standard for programmable controllers. It was first published in 1993; the current (third) edition dates from 2013. It was known as IEC 1131 before the change in numbering system by IEC. The parts of the IEC 61131 standard are prepared and maintained by working group 7, programmable control systems, of subcommittee SC 65B of Technical Committee TC65 of the IEC. Sections of IEC 61131 Standard IEC 61131 is divided into several parts: Part 1: General information. It is the introductory chapter; it contains definitions of terms that are used in the subsequent parts of the standard and outlines the main functional properties and characteristics of PLCs. Part 2: Equipment requirements and tests - establishes the requirements and associated tests for programmable controllers and their peripherals. This standard prescribes: the normal service conditions and requirements (for example, requirements related with climatic conditions, transport and storage, electrical service, etc.); functional requirements (power supply & memory, digital and analog I/Os); functional type tests and verification (requirements and tests on environmental, vibration, drop, free fall, I/O, power ports, etc.) and electromagnetic compatibility (EMC) requirements and tests that programmable controllers must implement. This standard can serve as a basis in the evaluation of safety programmable controllers to IEC 61508. Part 3: Programming languages Part 4: User guidelines Part 5: Communications Part 6: Functional safety Part 7: Fuzzy control programming Part 8: Guidelines for the application and implementation of programming languages Part 9: Single-drop digital communication interface for small sensors and actuators (SDCI, marketed as IO-Link) Part 10: PLC open XML exchange format for the export and import of IEC 61131-3 projects Related standards IEC 61499 Function Block PLCopen has developed several standards and working groups. TC1 - Standards TC2 - Functions TC3" https://en.wikipedia.org/wiki/List%20of%20Laplace%20transforms,"The following is a list of Laplace transforms for many common functions of a single variable. The Laplace transform is an integral transform that takes a function of a positive real variable (often time) to a function of a complex variable (frequency). Properties The Laplace transform of a function can be obtained using the formal definition of the Laplace transform. However, some properties of the Laplace transform can be used to obtain the Laplace transform of some functions more easily. Linearity For functions and and for scalar , the Laplace transform satisfies and is, therefore, regarded as a linear operator. Time shifting The Laplace transform of is . 
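The linearity and shifting properties referred to in the last two sentences lost their formulas in this extract. As a hedged restatement in standard notation (with F(s) denoting the transform of f(t), u(t) the Heaviside step function, and a >= 0 for the time shift), they are usually written:

\mathcal{L}\{\alpha f(t) + \beta g(t)\}(s) = \alpha F(s) + \beta G(s) \qquad \text{(linearity)}
\mathcal{L}\{f(t-a)\,u(t-a)\}(s) = e^{-as}\,F(s) \qquad \text{(time shifting)}
\mathcal{L}\{e^{at} f(t)\}(s) = F(s-a) \qquad \text{(frequency shifting)}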
Frequency shifting is the Laplace transform of . Explanatory notes The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, . The entries of the table that involve a time delay are required to be causal (meaning that ). A causal system is a system where the impulse response is zero for all time prior to . In general, the region of convergence for causal systems is not the same as that of anticausal systems. The following functions and variables are used in the table below: represents the Dirac delta function. represents the Heaviside step function. Literature may refer to this by other notation, including or . represents the Gamma function. is the Euler–Mascheroni constant. is a real number. It typically represents time, although it can represent any independent dimension. is the complex frequency domain parameter, and is its real part. is an integer. are real numbers. is a complex number. Table See also List of Fourier transforms" https://en.wikipedia.org/wiki/PowerEdge%20VRTX,"Dell PowerEdge VRTX is a computer hardware product line from Dell. It is a mini-blade chassis with built-in storage system. The VRTX comes in two models: a 19"" rack version that is 5 rack units high or as a stand-alone tower system. Specifications The VRTX system is partially based on the Dell M1000e blade-enclosure and shares some technologies and components. There are also some differences with that system. The M1000e can support an EqualLogic storage area network that connects the servers to the storage via iSCSI, while the VRTX uses a shared PowerEdge RAID Controller (6Gbit PERC8). A second difference is the option to add certain PCIe cards (Gen2 support) and assign them to any of the four servers. Servers: The VRTX chassis has 4 half-height slots available for Ivy-Bridge based PowerEdge blade servers. At launch the PE-M520 (Xeon E5-2400v2) and the PE-M620 (Xeon E5-2600v2) were the only two supported server blades, however the M520 was since discontinued. The same blades are used in the M1000e but for use in the VRTX they need to run specific configuration, using two PCIe 2.0 mezzanine cards per server. A conversion kit is available from Dell to allow moving a blade from a M1000e to VRTX chassis. Storage: The VRTX chassis includes shared storage slots that connect to a single or dual PERC 8 controller(s) via switched 6Gbit SAS. This controller which is managed through the CMC allows RAID groups to be configured and then allows for those RAID groups to be subdivided into individual virtual disks that can be presented out to either single or multiple blades. The shared storage slots are either 12 x 3.5"" HDD slots or 24 x 2.5"" HDD slots depending on the VRTX chassis purchased. Dell offers 12Gbit SAS disks for the VRTX, but these will operate at the slower 6Gbit rate for compatibility with the older PERC8 and SAS switches. Networking: The VRTX chassis has a built in IOM for supporting ethernet traffic to the server blades. At present the options for this IOM ar" https://en.wikipedia.org/wiki/AY-3-8500,"The AY-3-8500 ""Ball & Paddle"" integrated circuit was the first in a series of ICs from General Instrument designed for the consumer video game market. These chips were designed to output video to an RF modulator, which would then display the game on a domestic television set. The AY-3-8500 contained six selectable games — tennis (a.k.a. 
Pong), hockey (or soccer), squash, practice, and two shooting games. The AY-3-8500 was the 625-line PAL version and the AY-3-8500-1 was the 525-line NTSC version. It was introduced in 1976, Coleco becoming the first customer having been introduced to the IC development by Ralph H. Baer. A minimum number of external components were needed to build a complete system. The AY-3-8500 was the first version. It played seven Pong variations. The video was in black-and-white, although it was possible to colorize the game by using an additional chip, such as the AY-3-8515. Games Six selectable games for one or two players were included: In addition, a seventh undocumented game could be played when none of the previous six was selected: Handicap, a hockey variant where the player on the right has a third paddle. This game was implemented on very few systems. Usage The AY-3-8500 was designed to be powered by six 1.5 V cells (9 V). Its specified operation is at 6-7 V and a maximum of 12 V instead of the 5 V standard for logic. The nominal clock was 2.0 MHz, yielding a 500 ns pixel width. One way to generate such a clock is to divide a 14.31818 MHz 4 × colorburst clock by 7, producing 2.04545 MHz. It featured independent video outputs for left player, right player, ball, and playground+counter, that were summed using resistors, allowing designers to use a different luminance for each one. It was housed in a standard 28-pin DIP. Applications Some of the dedicated consoles employing the AY-3-8500 (there are at least two hundred different consoles using this chip): Sears Hockey Pong Coleco Telstar series (Coleco Telstar, Coleco Telstar Clas" https://en.wikipedia.org/wiki/Network%20delay,"Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several parts: Processing delay time it takes a router to process the packet header Queuing delay time the packet spends in routing queues Transmission delay time it takes to push the packet's bits onto the link Propagation delay time for a signal to propagate through the media A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds. See also Age of Information End-to-end delay Lag (video games) Latency (engineering) Minimum-Pairs Protocol Round-trip delay" https://en.wikipedia.org/wiki/Hybrid%20Scheduling,"Hybrid Scheduling is a class of scheduling mechanisms that mix different scheduling criteria or disciplines in one algorithm. For example, scheduling uplink and downlink traffic in a WLAN (Wireless Local Area Network, such as IEEE 802.11e) using a single discipline or framework is an instance of hybrid scheduling. Other examples include a scheduling scheme that can provide differentiated and integrated (guaranteed) services in one discipline. Another example could be scheduling of node communications where centralized communications and distributed communications coexist. 
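The network-delay decomposition quoted above (processing, queuing, transmission, propagation) is easy to turn into arithmetic. The sketch below is illustrative only; the link parameters are made-up values, and the formulas used (transmission = packet size / bandwidth, propagation = distance / propagation speed) are the standard textbook ones rather than anything taken from the source article.

# Illustrative one-way delay estimate for a single link (hypothetical numbers).
packet_bits = 1500 * 8          # 1500-byte packet
bandwidth_bps = 100e6           # 100 Mbit/s link
distance_m = 500e3              # 500 km of fibre
propagation_mps = 2e8           # roughly 2/3 of the speed of light in fibre
processing_s = 20e-6            # router header processing (assumed)
queuing_s = 1e-3                # time waiting in the output queue (assumed)

transmission_s = packet_bits / bandwidth_bps    # time to push the bits onto the link
propagation_s = distance_m / propagation_mps    # time for the signal to traverse the medium

total_s = processing_s + queuing_s + transmission_s + propagation_s
print(f"total one-way delay = {total_s * 1e3:.3f} ms")  # about 3.64 ms for these made-up numbers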
Further examples of such schedulers are found in the following articles:" https://en.wikipedia.org/wiki/Foias%20constant,"In mathematical analysis, the Foias constant is a real number named after Ciprian Foias. It is defined in the following way: for every real number x1 > 0, there is a sequence defined by the recurrence relation for n = 1, 2, 3, .... The Foias constant is the unique choice α such that if x1 = α then the sequence diverges to infinity. For all other values of x1, the sequence is divergent as well, but it has two accumulation points: 1 and infinity. Numerically, it is . No closed form for the constant is known. When x1 = α then the growth rate of the sequence (xn) is given by the limit where ""log"" denotes the natural logarithm. The same methods used in the proof of the uniqueness of the Foias constant may also be applied to other similar recursive sequences. See also Mathematical constant Notes and references Mathematical analysis Mathematical constants" https://en.wikipedia.org/wiki/Gerontology,"Gerontology ( ) is the study of the social, cultural, psychological, cognitive, and biological aspects of aging. The word was coined by Ilya Ilyich Mechnikov in 1903, from the Greek (), meaning ""old man"", and (), meaning ""study of"". The field is distinguished from geriatrics, which is the branch of medicine that specializes in the treatment of existing disease in older adults. Gerontologists include researchers and practitioners in the fields of biology, nursing, medicine, criminology, dentistry, social work, physical and occupational therapy, psychology, psychiatry, sociology, economics, political science, architecture, geography, pharmacy, public health, housing, and anthropology. The multidisciplinary nature of gerontology means that there are a number of sub-fields which overlap with gerontology. There are policy issues, for example, involved in government planning and the operation of nursing homes, investigating the effects of an aging population on society, and the design of residential spaces for older people that facilitate the development of a sense of place or home. Dr. Lawton, a behavioral psychologist at the Philadelphia Geriatric Center, was among the first to recognize the need for living spaces designed to accommodate the elderly, especially those with Alzheimer's disease. As an academic discipline the field is relatively new. The USC Leonard Davis School of Gerontology created the first PhD, master's and bachelor's degree programs in gerontology in 1975. History In the medieval Islamic world, several physicians wrote on issues related to Gerontology. Avicenna's The Canon of Medicine (1025) offered instruction for the care of the aged, including diet and remedies for problems including constipation. Arabic physician Ibn Al-Jazzar Al-Qayrawani (Algizar, c. 898–980) wrote on the aches and conditions of the elderly. His scholarly work covers sleep disorders, forgetfulness, how to strengthen memory, and causes of mortality. Ishaq ibn Hunayn (died 910" https://en.wikipedia.org/wiki/Contact%20pad,"Contact pads or bond pads are small, conductive surface areas of a printed circuit board (PCB) or die of an integrated circuit. They are often made of gold, copper, or aluminum and measure mere micrometres wide. Pads are positioned on the edges of die, to facilitate connections without shorting. Contact pads exist to provide a larger surface area for connections to a microchip or PCB, allowing for the input and output of data and power. 
Possible methods of connecting contact pads to a system include soldering, wirebonding, or flip chip mounting. Contact pads are created alongside a chip's functional structure during the photolithography steps of the fabrication process, and afterwards they are tested. During the test process, contact pads are probed with the needles of a probe card on Automatic Test Equipment in order to check for faults via electrical resistance. Further reading Kraig Mitzner, Complete PCB Design Using OrCAD Capture and PCB Editor, Newnes, 2009 . Jing Li, Evaluation and Improvement of the Robustness of a PCB Pad in a Lead-free Environment, ProQuest, 2007 . Deborah Lea, Fredirikus Jonck, Christopher Hunt, Solderability Measurements of PCB Pad Finishes and Geometries, National Physical Laboratory, 2001 . Electronic engineering Printed circuit board manufacturing" https://en.wikipedia.org/wiki/Outline%20of%20software%20engineering,"The following outline is provided as an overview of and topical guide to software engineering: Software engineering – application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is the application of engineering to software. The ACM Computing Classification system is a poly-hierarchical ontology that organizes the topics of the field and can be used in semantic web applications and as a de facto standard classification system for the field. The major section ""Software and its Engineering"" provides an outline and ontology for software engineering. Software applications Software engineers build software (applications, operating systems, system software) that people use. Applications influence software engineering by pressuring developers to solve problems in new ways. For example, consumer software emphasizes low cost, medical software emphasizes high quality, and Internet commerce software emphasizes rapid development. Business software Accounting software Analytics Data mining closely related to database Decision support systems Airline reservations Banking Automated teller machines Cheque processing Credit cards Commerce Trade Auctions (e.g. eBay) Reverse auctions (procurement) Bar code scanners Compilers Parsers Compiler optimization Interpreters Linkers Loaders Communication E-mail Instant messengers VOIP Calendars — scheduling and coordinating Contact managers Computer graphics Animation Special effects for video and film Editing Post-processing Cryptography Databases, support almost every field Embedded systems Both software engineers and traditional engineers write software control systems for embedded products. Automotive software Avionics software Heating ventilating and air conditioning (HVAC) software Medical device software Telephony Telemetry Engineering All traditional engineering branches use software extensively. Engineers use spreadsheets, more than they ever used calculators" https://en.wikipedia.org/wiki/Biocybernetics,"Biocybernetics is the application of cybernetics to biological science disciplines such as neurology and multicellular systems. Biocybernetics plays a major role in systems biology, seeking to integrate different levels of information to understand how biological systems function. The field of cybernetics itself has origins in biological disciplines such as neurophysiology. Biocybernetics is an abstract science and is a fundamental part of theoretical biology, based upon the principles of systemics. 
Biocybernetics is a psychological study that aims to understand how the human body functions as a biological system and performs complex mental functions like thought processing, motion, and maintaining homeostasis.(PsychologyDictionary.org)Within this field, many distinct qualities allow for different distinctions  within the cybernetic groups such as humans and insects such as beehives and ants. Humans work together but they also have individual thoughts that allow them to act on their own, while worker bees follow the commands of the queen bee.  (Seeley, 1989). Although humans often work together, they can also separate from the group and think for themselves.(Gackenbach, J. 2007) A unique example of this within the human sector of biocybernetics would be in society during the colonization period, when Great Britain established their colonies in North America and Australia. Many of the traits and qualities of the mother country were inherited by the colonies, as well as niche qualities that were unique to them based on their areas like language and personality—similar vines and grasses, where the parent plant produces offshoots, spreading from the core.  Once the shoots grow their roots and get separated from the mother plant, they will survive independently and be considered their plant. Society is more closely related to plants than to animals since, like plants, there is no distinct separation between parent and offspring. The branching of society is more similar t" https://en.wikipedia.org/wiki/Virus%20classification,"Virus classification is the process of naming viruses and placing them into a taxonomic system similar to the classification systems used for cellular organisms. Viruses are classified by phenotypic characteristics, such as morphology, nucleic acid type, mode of replication, host organisms, and the type of disease they cause. The formal taxonomic classification of viruses is the responsibility of the International Committee on Taxonomy of Viruses (ICTV) system, although the Baltimore classification system can be used to place viruses into one of seven groups based on their manner of mRNA synthesis. Specific naming conventions and further classification guidelines are set out by the ICTV. A catalogue of all the world's known viruses has been proposed and, in 2013, some preliminary efforts were underway. Definitions Species definition Species form the basis for any biological classification system. Before 1982, it was thought that viruses could not be made to fit Ernst Mayr's reproductive concept of species, and so were not amenable to such treatment. In 1982, the ICTV started to define a species as ""a cluster of strains"" with unique identifying qualities. In 1991, the more specific principle that a virus species is a polythetic class of viruses that constitutes a replicating lineage and occupies a particular ecological niche was adopted. In July 2013, the ICTV definition of species changed to state: ""A species is a monophyletic group of viruses whose properties can be distinguished from those of other species by multiple criteria."" These criteria include the structure of the capsid, the existence of an envelope, the gene expression program for its proteins, host range, pathogenicity, and most importantly genetic sequence similarity and phylogenetic relationship. The actual criteria used vary by the taxon, and can be inconsistent (arbitrary similarity thresholds) or unrelated to lineage (geography) at times. The matter is, for many, not yet settled. 
Virus defi" https://en.wikipedia.org/wiki/Mathematical%20diagram,"Mathematical diagrams, such as charts and graphs, are mainly designed to convey mathematical relationships—for example, comparisons over time. Specific types of mathematical diagrams Argand diagram A complex number can be visually represented as a pair of numbers forming a vector on a diagram called an Argand diagram The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by Norwegian-Danish land surveyor and mathematician Caspar Wessel (1745–1818). Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane. The concept of the complex plane allows a geometric interpretation of complex numbers. Under addition, they add like vectors. The multiplication of two complex numbers can be expressed most easily in polar coordinates — the magnitude or modulus of the product is the product of the two absolute values, or moduli, and the angle or argument of the product is the sum of the two angles, or arguments. In particular, multiplication by a complex number of modulus 1 acts as a rotation. Butterfly diagram In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name ""butterfly"" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states. The butterfly diagram show a data-flow diagram connecting the inputs x (left) to the outputs y that depend on them (right) for a ""butterfly"" step of a radix-2 Cooley–Tukey FFT algorithm. This diagram resembles a butterfly as in the Morpho butterfly shown for comparison, hence the name. Commutative diagram In " https://en.wikipedia.org/wiki/Rent%27s%20rule,"Rent's rule pertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block (i.e., the number of ""pins"") with the number of logic gates in the logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers. Put simply, it states that there is a simple power law relationship between these two values (pins and gates). E. F. Rent's discovery and first publications In the 1960s, E. F. Rent, an IBM employee, found a remarkable trend between the number of pins (terminals, T) at the boundaries of integrated circuit designs at IBM and the number of internal components (g), such as logic gates or standard cells. On a log–log plot, these datapoints were on a straight line, implying a power-law relation , where t and p are constants (p < 1.0, and generally 0.5 < p < 0.8). Rent's findings in IBM-internal memoranda were published in the IBM Journal of Research and Development in 2005, but the relation was described in 1971 by Landman and Russo. They performed a hierarchical circuit partitioning in such a way that at each hierarchical level (top-down) the fewest interconnections had to be cut to partition the circuit (in more or less equal parts). At each partitioning step, they noted the number of terminals and the number of components in each partition and then partitioned the sub-partitions further. 
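Rent's rule, as described above, is the power-law relation T = t * g^p between terminal count and gate count that appears as a straight line on a log-log plot. A minimal Python sketch of how such an exponent might be recovered from partitioning data follows; the data points are invented for illustration, and numpy's polyfit is used simply as a least-squares fit of log T against log g.

import numpy as np

# Hypothetical (gate count, terminal count) pairs from a recursive partitioning.
gates = np.array([16, 64, 256, 1024, 4096])
terminals = np.array([18, 44, 110, 270, 660])

# Rent's rule: T = t * g**p, so log T = log t + p * log g (a straight line on log-log axes).
p, log_t = np.polyfit(np.log(gates), np.log(terminals), 1)
print(f"Rent exponent p = {p:.2f}, coefficient t = {np.exp(log_t):.2f}")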
They found the power-law rule applied to the resulting T versus g plot and named it ""Rent's rule"". Rent's rule is an empirical result based on observations of existing designs, and therefore it is less applicable to the analysis of non-traditional circuit architectures. However, it provides a useful framework with which to compare similar architectures. Theoretical basis Christie and Stroobandt later derived Rent's rule theoretically for homogeneous systems and pointed out that the amount of optimization achieved in placement is reflected by the paramete" https://en.wikipedia.org/wiki/Substrate%20%28biology%29,"In biology, a substrate is the surface on which an organism (such as a plant, fungus, or animal) lives. A substrate can include biotic or abiotic materials and animals. For example, encrusting algae that lives on a rock (its substrate) can be itself a substrate for an animal that lives on top of the algae. Inert substrates are used as growing support materials in the hydroponic cultivation of plants. In biology substrates are often activated by the nanoscopic process of substrate presentation. In agriculture and horticulture Cellulose substrate Expanded clay aggregate (LECA) Rock wool Potting soil Soil In animal biotechnology Requirements for animal cell and tissue culture Requirements for animal cell and tissue culture are the same as described for plant cell, tissue and organ culture (In Vitro Culture Techniques: The Biotechnological Principles). Desirable requirements are (i) air conditioning of a room, (ii) hot room with temperature recorder, (iii) microscope room for carrying out microscopic work where different types of microscopes should be installed, (iv) dark room, (v) service room, (vi) sterilization room for sterilization of glassware and culture media, and (vii) preparation room for media preparation, etc. In addition the storage areas should be such where following should be kept properly : (i) liquids-ambient (4-20°C), (ii) glassware-shelving, (iii) plastics-shelving, (iv) small items-drawers, (v) specialized equipments-cupboard, slow turnover, (vi) chemicals-sidled containers. For cell growth There are many types of vertebrate cells that require support for their growth in vitro otherwise they will not grow properly. Such cells are called anchorage-dependent cells. Therefore, many substrates which may be adhesive (e.g. plastic, glass, palladium, metallic surfaces, etc.) or non-adhesive (e.g. agar, agarose, etc.) types may be used as discussed below: Plastic as a substrate. Disposable plastics are cheaper substrate as they are commonly made" https://en.wikipedia.org/wiki/Fuzzy%20electronics,"Fuzzy electronics is an electronic technology that uses fuzzy logic, instead of the two-state Boolean logic more commonly used in digital electronics. Fuzzy electronics is fuzzy logic implemented on dedicated hardware. This is to be compared with fuzzy logic implemented in software running on a conventional processor. Fuzzy electronics has a wide range of applications, including control systems and artificial intelligence. History The first fuzzy electronic circuit was built by Takeshi Yamakawa et al. in 1980 using discrete bipolar transistors. The first industrial fuzzy application was in a cement kiln in Denmark in 1982. The first VLSI fuzzy electronics was by Masaki Togai and Hiroyuki Watanabe in 1984. In 1987, Yamakawa built the first analog fuzzy controller. The first digital fuzzy processors came in 1988 by Togai (Russo, pp. 2-6). 
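Fuzzy electronics, as introduced above, implements fuzzy logic rather than two-state Boolean logic: a proposition carries a degree of truth in [0, 1]. The snippet below is a generic illustration of the classical (Zadeh) min/max operators and is not tied to any of the chips or controllers mentioned in the article.

# Degrees of membership instead of two-state truth values (illustrative only).
def fuzzy_and(a, b): return min(a, b)   # classical (Zadeh) conjunction
def fuzzy_or(a, b):  return max(a, b)   # classical (Zadeh) disjunction
def fuzzy_not(a):    return 1.0 - a     # standard complement

hot, humid = 0.7, 0.4                   # example membership degrees
print(fuzzy_and(hot, humid))            # 0.4
print(fuzzy_or(hot, fuzzy_not(humid)))  # 0.7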
In the early 1990s, the first fuzzy logic chips were presented to the public. Two companies which are Omron and NEC have announced the development of dedicated fuzzy electronic hardware in the year 1991. Two years later, the japanese Omron Cooperation has shown a working fuzzy chip during a technical fair. See also Defuzzification Fuzzy set Fuzzy set operations" https://en.wikipedia.org/wiki/Photoheterotroph,"Photoheterotrophs (Gk: photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs—that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates, fatty acids, and alcohols. Examples of photoheterotrophic organisms include purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria. These microorganisms are ubiquitous in aquatic habitats, occupy unique niche-spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply. Research Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide, a light-capturing metabolite of chlorophyll. Research demonstrated that the same metabolite when fed to the worm Caenorhabditis elegans leads to increase in ATP synthesis upon light exposure, along with an increase in life span. Furthermore, inoculation experiments suggest that mixotrophic Ochromonas danica (i.e., Golden algae)—and comparable eukaryotes—favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machineries such as RuBisCo and PSII). Metabolism Photoheterotrophs generate ATP using light, in one of two ways: they use a bacteriochlorophyll-based reaction center, or they use a bacteriorhodopsin. The chlorophyll-based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane" https://en.wikipedia.org/wiki/List%20of%20order%20structures%20in%20mathematics,"In mathematics, and more specifically in order theory, several different types of ordered set have been studied. They include: Cyclic orders, orderings in which triples of elements are either clockwise or counterclockwise Lattices, partial orders in which each pair of elements has a greatest lower bound and a least upper bound. Many different types of lattice have been studied; see map of lattices for a list. 
Partially ordered sets (or posets), orderings in which some pairs are comparable and others might not be Preorders, a generalization of partial orders allowing ties (represented as equivalences and distinct from incomparabilities) Semiorders, partial orders determined by comparison of numerical values, in which values that are too close to each other are incomparable; a subfamily of partial orders with certain restrictions Total orders, orderings that specify, for every two distinct elements, which one is less than the other Weak orders, generalizations of total orders allowing ties (represented either as equivalences or, in strict weak orders, as transitive incomparabilities) Well-orders, total orders in which every non-empty subset has a least element Well-quasi-orderings, a class of preorders generalizing the well-orders See also Glossary of order theory List of order theory topics Mathematics-related lists Order theory" https://en.wikipedia.org/wiki/Off-flavour,"Off-flavours or off-flavors (see spelling differences) are taints in food products caused by the presence of undesirable compounds. They can originate in raw materials, from chemical changes during food processing and storage, and from micro-organisms. Off-flavours are a recurring issue in drinking water supply and many food products. Water bodies are often affected by geosmin and 2-methylisoborneol, affecting the flavour of water for drinking and of fish growing in that water. Haloanisoles similarly affect water bodies, and are a recognised cause of off-flavour in wine. Cows grazing on weeds such as wild garlic can produce a ‘weedy’ off-flavour in milk. Many more examples can be seen throughout food production sectors including in oats, coffee, glucose syrup and brewing." https://en.wikipedia.org/wiki/Point%20process%20notation,"In probability and statistics, point process notation comprises the range of mathematical notation used to symbolically represent random objects known as point processes, which are used in related fields such as stochastic geometry, spatial statistics and continuum percolation theory and frequently serve as mathematical models of random phenomena, representable as points, in time, space or both. The notation varies due to the histories of certain mathematical fields and the different interpretations of point processes, and borrows notation from mathematical areas of study such as measure theory and set theory. Interpretation of point processes The notation, as well as the terminology, of point processes depends on their setting and interpretation as mathematical objects which under certain assumptions can be interpreted as random sequences of points, random sets of points or random counting measures. Random sequences of points In some mathematical frameworks, a given point process may be considered as a sequence of points with each point randomly positioned in d-dimensional Euclidean space Rd as well as some other more abstract mathematical spaces. In general, whether or not a random sequence is equivalent to the other interpretations of a point process depends on the underlying mathematical space, but this holds true for the setting of finite-dimensional Euclidean space Rd. Random set of points A point process is called simple if no two (or more points) coincide in location with probability one. 
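Among the order structures listed above, lattices are the ones in which every pair of elements has a greatest lower bound and a least upper bound. A small self-contained Python illustration of ours (using divisibility on the divisors of 60, where the meet is the gcd and the join is the lcm):

from math import gcd

# Under divisibility, every pair of positive integers has a greatest lower bound
# (meet = gcd) and a least upper bound (join = lcm); the divisors of 60 form a lattice.
def lcm(a, b): return a * b // gcd(a, b)

divisors_of_60 = [d for d in range(1, 61) if 60 % d == 0]
a, b = 12, 10
meet, join = gcd(a, b), lcm(a, b)
print(meet, join)                                          # 2 60
print(meet in divisors_of_60 and join in divisors_of_60)   # True: both are again divisors of 60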
Given that often point processes are simple and the order of the points does not matter, a collection of random points can be considered as a random set of points The theory of random sets was independently developed by David Kendall and Georges Matheron. In terms of being considered as a random set, a sequence of random points is a random closed set if the sequence has no accumulation points with probability one A point process is often denoted by a single l" https://en.wikipedia.org/wiki/List%20of%20theorems%20called%20fundamental,"In mathematics, a fundamental theorem is a theorem which is considered to be central and conceptually important for some topic. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus. The names are mostly traditional, so that for example the fundamental theorem of arithmetic is basic to what would now be called number theory. Some of these are classification theorems of objects which are mainly dealt with in the field. For instance, the fundamental theorem of curves describe classification of regular curves in space up to translation and rotation. Likewise, the mathematical literature sometimes refers to the fundamental lemma of a field. The term lemma is conventionally used to denote a proven proposition which is used as a stepping stone to a larger result, rather than as a useful statement in-and-of itself. Fundamental theorems of mathematical topics Fundamental theorem of algebra Fundamental theorem of algebraic K-theory Fundamental theorem of arithmetic Fundamental theorem of Boolean algebra Fundamental theorem of calculus Fundamental theorem of calculus for line integrals Fundamental theorem of curves Fundamental theorem of cyclic groups Fundamental theorem of dynamical systems Fundamental theorem of equivalence relations Fundamental theorem of exterior calculus Fundamental theorem of finitely generated abelian groups Fundamental theorem of finitely generated modules over a principal ideal domain Fundamental theorem of finite distributive lattices Fundamental theorem of Galois theory Fundamental theorem of geometric calculus Fundamental theorem on homomorphisms Fundamental theorem of ideal theory in number fields Fundamental theorem of Lebesgue integral calculus Fundamental theorem of linear algebra Fundamental theorem of linear programming Fundamental theorem of noncommutative algebra Fundamental theorem of projective geometry Fundamental theorem of random fields Fu" https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20jargon,"The language of mathematics has a vast vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this is common English, but with a specific non-obvious meaning when used in a mathematical sense. Some phrases, like ""in general"", appear below in more than one section. Philosophy of mathematics abstract nonsenseA tongue-in-cheek reference to category theory, using which one can employ arguments that establish a (possibly concrete) result without reference to any specifics of the present problem. For that reason, it's also known as general abstract nonsense or generalized abstract nonsense. 
canonicalA reference to a standard or choice-free presentation of some mathematical object (e.g., canonical map, canonical form, or canonical ordering). The same term can also be used more informally to refer to something ""standard"" or ""classic"". For example, one might say that Euclid's proof is the ""canonical proof"" of the infinitude of primes. deepA result is called ""deep"" if its proof requires concepts and methods that are advanced beyond the concepts needed to formulate the result. For example, the prime number theorem — originally proved using techniques of complex analysis — was once thought to be a deep result until elementary proofs were found. On the other hand, the fact that π is irrational is usually known to be a deep result, because it requires a considerable development of real analysis before the proof can be established — even though the claim itself can be stated in terms of simple number theory and geometry. elegantAn aesthetic term referring to the ability of an idea to provide insight into mathematics, whether by unifying disparate fields, introducing a new perspective on a single field, or by providing a " https://en.wikipedia.org/wiki/Nessum,"Nessum is a communication technology that can be used in a variety of media, including wired, wireless, and underwater, using high frequencies (kHz to MHz bands). It is standardized as IEEE P1901c. Overview Nessum has two types of communication: wired (Nessum WIRE) and wireless (Nessum AIR). Wired communication Nessum WIRE can be used for various types of lines such as power lines, twisted pair lines, coaxial cable lines, and telephone lines. The communication distance is about 100m to 200m for power lines and 2,000m for coaxial cables. In addition, when an automatic relay function called multi-hop (ITU-T G.9905) is utilized, a maximum of 10 stages of relay is possible. With a maximum physical speed of 1 Gbps and effective speeds ranging from several Mbps to several tens of Mbps, this technology is used to reduce network construction costs by utilizing existing lines, to increase the speed of low-speed wired communication lines, to supplement wireless communication where it cannot reach, and to reduce the number of lines in equipment. Wireless communication Short range wireless communication called Nessum AIR. It uses magnetic field communication in the short range, and the communication distance can be controlled in the range of a few centimeters to 100 centimeters. Maximum physical speed is 1 Gbps, with an effective speed of 100 Mbps. Technical overview Physical layer (PHY) The physical layer uses Wavelet OFDM (Wavelet Orthogonal Frequency Division Multiplexing). While a guard interval is required in ordinary OFDM systems, the Wavelet OFDM system eliminates the guard interval and increases the occupancy rate of the data portion, thereby achieving high efficiency. In addition, due to the bandwidth limitation of each subcarrier, the level of sidelobes is set low, which facilitates the formation of spectral notches. This minimizes interference with existing systems and allows for flexible compliance with frequency utilization regulations. Furthermore, Pulse-" https://en.wikipedia.org/wiki/Instant%20rice,"Instant rice is a white rice that is partly precooked and then is dehydrated and packed in a dried form similar in appearance to that of regular white rice. 
That process allows the product to be later cooked as if it were normal rice but with a typical cooking time of 5 minutes, not the 20–30 minutes needed by white rice (or the still greater time required by brown rice). This process was invented by Ataullah K. Ozai‐Durrani in 1939 and mass-marketed by General Foods starting in 1946 as Minute Rice, which is still made. Instant rice is not the ""microwave-ready"" rice that is pre-cooked but not dehydrated; such rice is fully cooked and ready to eat, normally after cooking in its sealed package in a microwave oven for as little as 1 minute for a portion. Another distinct product is parboiled rice (also called ""converted"" rice, a trademark for what was long sold as Uncle Ben's converted rice); brown rice is parboiled to preserve nutrients that are lost in the preparation of white rice, not to reduce cooking time. Preparation process Instant rice is made using several methods. The most common method is similar to the home cooking process. The rice is blanched in hot water, steamed, and rinsed. It is then placed in large ovens for dehydration until the moisture content reaches approximately twelve percent or less. The basic principle involves using hot water or steam to form cracks or holes in the kernels before dehydrating. In the subsequent cooking, water can more easily penetrate into the cracked grain, allowing for a short cooking time. Advantages and disadvantages The notable advantage of instant rice is the rapid cooking time: some brands can be ready in as little as three minutes. Currently, several companies, Asian as well as American, have developed brands which only require 90 seconds to cook, much like a cup of instant noodles. However, instant rice is more expensive than regular white rice due to the cost of the processing. The ""cracking"" process can" https://en.wikipedia.org/wiki/List%20of%20first-order%20theories,"In first-order logic, a first-order theory is given by a set of axioms in some language. This entry lists some of the more common examples used in model theory and some of their properties. Preliminaries For every natural mathematical structure there is a signature σ listing the constants, functions, and relations of the theory together with their arities, so that the object is naturally a σ-structure. Given a signature σ there is a unique first-order language Lσ that can be used to capture the first-order expressible facts about the σ-structure. There are two common ways to specify theories: List or describe a set of sentences in the language Lσ, called the axioms of the theory. Give a set of σ-structures, and define a theory to be the set of sentences in Lσ holding in all these models. For example, the ""theory of finite fields"" consists of all sentences in the language of fields that are true in all finite fields. 
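As a small illustration of the first way of specifying a theory, namely listing axioms in the language Lσ (an example of ours, not drawn from the list itself), one can take the signature σ = {·, ⁻¹, e} of group theory and write the theory of groups as the sentences:

\forall x\,\forall y\,\forall z\;\; (x \cdot y) \cdot z = x \cdot (y \cdot z)
\forall x\;\; (x \cdot e = x \;\wedge\; e \cdot x = x)
\forall x\;\; (x \cdot x^{-1} = e \;\wedge\; x^{-1} \cdot x = e)

Any σ-structure in which these three sentences hold is a model of the theory, that is, a group.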
An Lσ theory may: be consistent: no proof of contradiction exists; be satisfiable: there exists a σ-structure for which the sentences of the theory are all true (by the completeness theorem, satisfiability is equivalent to consistency); be complete: for any statement, either it or its negation is provable; have quantifier elimination; eliminate imaginaries; be finitely axiomatizable; be decidable: There is an algorithm to decide which statements are provable; be recursively axiomatizable; be model complete or sub-model complete; be κ-categorical: All models of cardinality κ are isomorphic; be stable or unstable; be ω-stable (same as totally transcendental for countable theories); be superstable have an atomic model; have a prime model; have a saturated model. Pure identity theories The signature of the pure identity theory is empty, with no functions, constants, or relations. Pure identity theory has no (non-logical) axioms. It is decidable. One of the few interesting properties that can be stated in the language of pure identity theory" https://en.wikipedia.org/wiki/Continuous%20or%20discrete%20variable,"In mathematics and statistics, a quantitative variable may be continuous or discrete if they are typically obtained by measuring or counting, respectively. If it can take on two particular real values such that it can also take on all real values between them (even values that are arbitrarily close together), the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value. In some contexts a variable can be discrete in some ranges of the number line and continuous in others. Continuous variable A continuous variable is a variable whose value is obtained by measuring, i.e., one which can take on an uncountable set of values. For example, a variable over a non-empty range of the real numbers is continuous, if it can take on any value in that range. The reason is that any range of real numbers between and with is uncountable. Methods of calculus are often used in problems in which the variables are continuous, for example in continuous optimization problems. In statistical theory, the probability distributions of continuous variables can be expressed in terms of probability density functions. In continuous-time dynamics, the variable time is treated as continuous, and the equation describing the evolution of some variable over time is a differential equation. The instantaneous rate of change is a well-defined concept. Discrete variable In contrast, a variable is a discrete variable if and only if there exists a one-to-one correspondence between this variable and , the set of natural numbers. In other words; a discrete variable over a particular interval of real values is one for which, for any value in the range that the variable is permitted to take on, there is a positive minimum distance to the nearest other permissible value. The number of permitted values is either finite or countably infinite." https://en.wikipedia.org/wiki/Spectrum%20analyzer,"A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. The primary use is to measure the power of the spectrum of known and unknown signals. 
The input signal that most common spectrum analyzers measure is electrical; however, spectral compositions of other signals, such as acoustic pressure waves and optical light waves, can be considered through the use of an appropriate transducer. Spectrum analyzers for other types of signals also exist, such as optical spectrum analyzers which use direct optical techniques such as a monochromator to make measurements. By analyzing the spectra of electrical signals, dominant frequency, power, distortion, harmonics, bandwidth, and other spectral components of a signal can be observed that are not easily detectable in time domain waveforms. These parameters are useful in the characterization of electronic devices, such as wireless transmitters. The display of a spectrum analyzer has frequency displayed on the horizontal axis and the amplitude on the vertical axis. To the casual observer, a spectrum analyzer looks like an oscilloscope, which plots amplitude on the vertical axis but time on the horizontal axis. In fact, some lab instruments can function either as an oscilloscope or a spectrum analyzer. History The first spectrum analyzers, in the 1960s, were swept-tuned instruments. Following the discovery of the fast Fourier transform (FFT) in 1965, the first FFT-based analyzers were introduced in 1967. Today, there are three basic types of analyzer: the swept-tuned spectrum analyzer, the vector signal analyzer, and the real-time spectrum analyzer. Types Spectrum analyzer types are distinguished by the methods used to obtain the spectrum of a signal. There are swept-tuned and fast Fourier transform (FFT) based spectrum analyzers: A swept-tuned analyzer uses a superheterodyne receiver to down-convert a portion of the input signal spectrum to the ce" https://en.wikipedia.org/wiki/Blue%20whale,"The blue whale (Balaenoptera musculus) is a marine mammal and a baleen whale. Reaching a maximum confirmed length of and weighing up to , it is the largest animal known ever to have existed. The blue whale's long and slender body can be of various shades of greyish-blue dorsally and somewhat lighter underneath. Four subspecies are recognized: B. m. musculus in the North Atlantic and North Pacific, B. m. intermedia in the Southern Ocean, B. m. brevicauda (the pygmy blue whale) in the Indian Ocean and South Pacific Ocean, B. m. indica in the Northern Indian Ocean. There is also a population in the waters off Chile that may constitute a fifth subspecies. In general, blue whale populations migrate between their summer feeding areas near the poles and their winter breeding grounds near the tropics. There is also evidence of year-round residencies, and partial or age/sex-based migration. Blue whales are filter feeders; their diet consists almost exclusively of krill. They are generally solitary or gather in small groups, and have no well-defined social structure other than mother-calf bonds. The fundamental frequency for blue whale vocalizations ranges from 8 to 25 Hz and the production of vocalizations may vary by region, season, behavior, and time of day. Orcas are their only natural predators. The blue whale was once abundant in nearly all the Earth's oceans until the end of the 19th century. It was hunted almost to the point of extinction by whalers until the International Whaling Commission banned all blue whale hunting in 1966. The International Union for Conservation of Nature has listed blue whales as Endangered as of 2018. 
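Returning to the spectrum analyzer excerpt above: the FFT-based approach it mentions can be sketched in a few lines. The sample rate, test tone, and windowing choice below are illustrative assumptions, not details from the article.

```python
import numpy as np

# Sketch of an FFT-based spectrum measurement (illustrative values only).
fs = 10_000.0                      # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1.0 / fs)    # 100 ms of samples
# Test signal: a 1 kHz tone plus a weaker 2.5 kHz component and a little noise.
x = np.sin(2 * np.pi * 1000 * t) + 0.2 * np.sin(2 * np.pi * 2500 * t)
x += 0.01 * np.random.randn(t.size)

spectrum = np.fft.rfft(x * np.hanning(t.size))          # window, then FFT
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)             # horizontal axis: frequency
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)  # vertical axis: amplitude

# Report the dominant frequency, one of the quantities the article lists.
print(f"dominant component near {freqs[np.argmax(magnitude_db)]:.0f} Hz")
```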
It continues to face numerous man-made threats such as ship strikes, pollution, ocean noise and climate change. Taxonomy Nomenclature The genus name, Balaenoptera, means winged whale while the species name, musculus, could mean ""muscle"" or a diminutive form of ""mouse"", possibly a pun by Carl Linnaeus when he named the species in Systema N" https://en.wikipedia.org/wiki/Interconnect%20%28integrated%20circuits%29,"In integrated circuits (ICs), interconnects are structures that connect two or more circuit elements (such as transistors) together electrically. The design and layout of interconnects on an IC is vital to its proper function, performance, power efficiency, reliability, and fabrication yield. The material interconnects are made from depends on many factors. Chemical and mechanical compatibility with the semiconductor substrate and the dielectric between the levels of interconnect is necessary, otherwise barrier layers are needed. Suitability for fabrication is also required; some chemistries and processes prevent the integration of materials and unit processes into a larger technology (recipe) for IC fabrication. In fabrication, interconnects are formed during the back-end-of-line after the fabrication of the transistors on the substrate. Interconnects are classified as local or global interconnects depending on the signal propagation distance it is able to support. The width and thickness of the interconnect, as well as the material from which it is made, are some of the significant factors that determine the distance a signal may propagate. Local interconnects connect circuit elements that are very close together, such as transistors separated by ten or so other contiguously laid out transistors. Global interconnects can transmit further, such as over large-area sub-circuits. Consequently, local interconnects may be formed from materials with relatively high electrical resistivity such as polycrystalline silicon (sometimes silicided to extend its range) or tungsten. To extend the distance an interconnect may reach, various circuits such as buffers or restorers may be inserted at various points along a long interconnect. Interconnect properties The geometric properties of an interconnect are width, thickness, spacing (the distance between an interconnect and another on the same level), pitch (the sum of the width and spacing), and aspect ratio, or AR, (the thickn" https://en.wikipedia.org/wiki/Load-balanced%20switch,"A load-balanced switch is a switch architecture which guarantees 100% throughput with no central arbitration at all, at the cost of sending each packet across the crossbar twice. Load-balanced switches are a subject of research for large routers scaled past the point of practical central arbitration. Introduction Internet routers are typically built using line cards connected with a switch. Routers supporting moderate total bandwidth may use a bus as their switch, but high bandwidth routers typically use some sort of crossbar interconnection. In a crossbar, each output connects to one input, so that information can flow through every output simultaneously. Crossbars used for packet switching are typically reconfigured tens of millions of times per second. The schedule of these configurations is determined by a central arbiter, for example a Wavefront arbiter, in response to requests by the line cards to send information to one another. Perfect arbitration would result in throughput limited only by the maximum throughput of each crossbar input or output. 
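Back to the interconnect excerpt above, whose geometric definitions are cut off mid-sentence: a tiny worked example of pitch and aspect ratio, assuming the common convention that aspect ratio is thickness divided by width (an assumption, since the excerpt is truncated before finishing that definition). The dimensions are invented.

```python
# Illustrative interconnect geometry (dimensions are made up for the example).
width_nm = 40.0      # line width
spacing_nm = 40.0    # gap to the neighbouring line on the same level
thickness_nm = 80.0  # metal thickness

pitch_nm = width_nm + spacing_nm          # pitch = width + spacing (per the article)
aspect_ratio = thickness_nm / width_nm    # assumed convention: thickness / width

print(f"pitch = {pitch_nm:.0f} nm, aspect ratio = {aspect_ratio:.1f}")
```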
For example, if all traffic coming into line cards A and B is destined for line card C, then the maximum traffic that cards A and B can process together is limited by C. Perfect arbitration has been shown to require massive amounts of computation, which scales up much faster than the number of ports on the crossbar. Practical systems use imperfect arbitration heuristics (such as iSLIP) that can be computed in reasonable amounts of time. A load-balanced switch is not related to a load balancing switch, which refers to a kind of router used as a front end to a farm of web servers to spread requests to a single website across many servers. Basic architecture As shown in the figure to the right, a load-balanced switch has N input line cards, each of rate R, each connected to N buffers by a link of rate R/N. Those buffers are in turn each connected to N output line cards, each of rate R, by links of rate R/N." https://en.wikipedia.org/wiki/DECbit,"DECbit is a TCP congestion control technique implemented in routers to avoid congestion. Its utility is to predict possible congestion and prevent it. When a router wants to signal congestion to the sender, it adds a bit in the header of packets sent. When a packet arrives at the router, the router calculates the average queue length for the last (busy + idle) period plus the current busy period. (The router is busy when it is transmitting packets, and idle otherwise). When the average queue length exceeds 1, the router sets the congestion indication bit in the packet header of arriving packets. When the destination replies, the corresponding ACK includes a set congestion bit. The sender receives the ACK and calculates how many packets it received with the congestion indication bit set to one. If fewer than half of the packets in the last window had the congestion indication bit set, then the window is increased linearly. Otherwise, the window is decreased exponentially. This technique manages the window dynamically, avoiding congestion and increasing the load when no congestion is detected, and tries to balance bandwidth against delay. Note that this technique does not allow for effective use of the line, because it fails to take full advantage of the available bandwidth. Moreover, the fact that the queue has increased in size from one cycle to another does not always mean there is congestion." https://en.wikipedia.org/wiki/Number%20theoretic%20Hilbert%20transform,"The number theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo a prime . The transformation operator is a circulant matrix. The number theoretic transform is meaningful in the ring , when the modulus is not prime, provided a principal root of order n exists. The NHT matrix, where , has the form The rows are the cyclic permutations of the first row, or the columns may be seen as the cyclic permutations of the first column. The NHT is its own inverse: where I is the identity matrix. The number theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences that have applications in signal processing, wireless systems, and cryptography. Other ways to generate constrained orthogonal sequences also exist." https://en.wikipedia.org/wiki/Tyranny%20of%20numbers,"The tyranny of numbers was a problem faced in the 1960s by computer engineers. Engineers were unable to increase the performance of their designs due to the huge number of components involved.
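The DECbit sender rule quoted above (linear increase when fewer than half of the ACKed packets were marked, multiplicative decrease otherwise) can be sketched as follows. The increase step and decrease factor are placeholders of my own choosing, not values from the article.

```python
def decbit_adjust_window(window: float, congestion_bits: list[bool],
                         increase: float = 1.0, decrease: float = 0.875) -> float:
    """Sketch of the DECbit sender rule for one window of ACK feedback.

    congestion_bits: congestion-indication bit from each ACK in the last window.
    If fewer than half of the packets were marked, grow the window linearly;
    otherwise shrink it multiplicatively (exponential decrease over time).
    """
    marked = sum(congestion_bits)
    if marked < len(congestion_bits) / 2:
        return window + increase          # additive (linear) increase
    return window * decrease              # multiplicative decrease

# Example: 2 of 8 packets marked -> window grows; 6 of 8 marked -> window shrinks.
w = decbit_adjust_window(10.0, [True, False, False, True, False, False, False, False])
w = decbit_adjust_window(w, [True] * 6 + [False] * 2)
print(round(w, 2))
```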
In theory, every component needed to be wired to every other component (or at least many other components) and were typically strung and soldered by hand. In order to improve performance, more components would be needed, and it seemed that future designs would consist almost entirely of wiring. History The first known recorded use of the term in this context was made by the Vice President of Bell Labs in an article celebrating the 10th anniversary of the invention of the transistor, for the ""Proceedings of the IRE"" (Institute of Radio Engineers), June 1958 . Referring to the problems many designers were having, he wrote: At the time, computers were typically built up from a series of ""modules"", each module containing the electronics needed to perform a single function. A complex circuit like an adder would generally require several modules working in concert. The modules were typically built on printed circuit boards of a standardized size, with a connector on one edge that allowed them to be plugged into the power and signaling lines of the machine, and were then wired to other modules using twisted pair or coaxial cable. Since each module was relatively custom, modules were assembled and soldered by hand or with limited automation. As a result, they suffered major reliability problems. Even a single bad component or solder joint could render the entire module inoperative. Even with properly working modules, the mass of wiring connecting them together was another source of construction and reliability problems. As computers grew in complexity, and the number of modules increased, the complexity of making a machine actually work grew more and more difficult. This was the ""tyranny of numbers"". It was precisely this problem that Jack Kilby was thinking about while working" https://en.wikipedia.org/wiki/Null%20%28mathematics%29,"In mathematics, the word null (from meaning ""zero"", which is from meaning ""none"") is often associated with the concept of zero or the concept of nothing. It is used in varying context from ""having zero members in a set"" (e.g., null set) to ""having a value of zero"" (e.g., null vector). In a vector space, the null vector is the neutral element of vector addition; depending on the context, a null vector may also be a vector mapped to some null by a function under consideration (such as a quadratic form coming with the vector space, see null vector, a linear mapping given as matrix product or dot product, a seminorm in a Minkowski space, etc.). In set theory, the empty set, that is, the set with zero elements, denoted ""{}"" or ""∅"", may also be called null set. In measure theory, a null set is a (possibly nonempty) set with zero measure. A null space of a mapping is the part of the domain that is mapped into the null element of the image (the inverse image of the null element). For example, in linear algebra, the null space of a linear mapping, also known as kernel, is the set of vectors which map to the null vector under that mapping. In statistics, a null hypothesis is a proposition that no effect or relationship exists between populations and phenomena. It is the hypothesis which is presumed true—unless statistical evidence indicates otherwise. See also 0 Null sign" https://en.wikipedia.org/wiki/Virtual%20particle,"A virtual particle is a theoretical transient particle that exhibits some of the characteristics of an ordinary particle, while having its existence limited by the uncertainty principle. 
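For the null-space notion in the excerpt above, a small numerical illustration: computing a basis of the kernel of a linear map from its singular value decomposition. The matrix is an arbitrary example of my own.

```python
import numpy as np

# Kernel (null space) of a linear map given by a matrix A: all x with A @ x = 0.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so the kernel is 2-dimensional

# Right singular vectors whose singular values are (numerically) zero
# span the null space.
_, s, vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * (s[0] if s.size else 0.0)
rank = int(np.sum(s > tol))
kernel_basis = vt[rank:].T                 # columns form a basis of the null space

print(kernel_basis.shape)                  # (3, 2): two basis vectors in R^3
print(np.allclose(A @ kernel_basis, 0.0))  # True: they map to the null vector
```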
The concept of virtual particles arises in the perturbation theory of quantum field theory where interactions between ordinary particles are described in terms of exchanges of virtual particles. A process involving virtual particles can be described by a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines. Virtual particles do not necessarily carry the same mass as the corresponding real particle, although they always conserve energy and momentum. The closer its characteristics come to those of ordinary particles, the longer the virtual particle exists. They are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, forces—such as the electromagnetic repulsion or attraction between two charges—can be thought of as due to the exchange of virtual photons between the charges. Virtual photons are the exchange particle for the electromagnetic interaction. The term is somewhat loose and vaguely defined, in that it refers to the view that the world is made up of ""real particles"". ""Real particles"" are better understood to be excitations of the underlying quantum fields. Virtual particles are also excitations of the underlying fields, but are ""temporary"" in the sense that they appear in calculations of interactions, but never as asymptotic states or indices to the scattering matrix. The accuracy and use of virtual particles in calculations is firmly established, but as they cannot be detected in experiments, deciding how to precisely describe them is a topic of debate. Although widely used, they are by no means a necessary feature of QFT, but rather are mathematical conveniences - as demonstrated by lattice field theory, which avoids using the concept altogether" https://en.wikipedia.org/wiki/Radio%20science%20subsystem,"A radio science subsystem (RSS) is a subsystem placed on board a spacecraft for radio science purposes. Function of the RSS The RSS uses radio signals to probe a medium such as a planetary atmosphere. The spacecraft transmits a highly stable signal to ground stations, receives such a signal from ground stations, or both. Since the transmitted signal parameters are accurately known to the receiver, any changes to these parameters are attributable to the propagation medium or to the relative motion of the spacecraft and ground station. The RSS is usually not a separate instrument; its functions are usually ""piggybacked"" on the existing telecommunications subsystem. More advanced systems use multiple antennas with orthogonal polarizations. Radio science Radio science is commonly used to determine the gravity field of a moon or planet by observing Doppler shift. This requires a highly stable oscillator on the spacecraft, or more commonly a ""2-way coherent"" transponder that phase locks the transmitted signal frequency to a rational multiple of a received uplink signal that usually also carries spacecraft commands. Another common radio science observation is performed as a spacecraft is occulted by a planetary body. As the spacecraft moves behind the planet, its radio signals cuts through successively deeper layers of the planetary atmosphere. Measurements of signal strength and polarization vs time can yield data on the composition and temperature of the atmosphere at different altitudes. It is also common to use multiple radio frequencies coherently derived from a common source to measure the dispersion of the propagation medium. 
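To put a number on the Doppler measurement described in the radio science excerpt above, here is a rough sketch of the two-way shift seen through a coherent transponder. The first-order formula Δf ≈ −2(v/c)·f, the X-band-style turnaround ratio, and the velocity are assumptions for illustration, not details given in the article.

```python
# Two-way coherent Doppler sketch (illustrative numbers, not from the article).
C = 299_792_458.0            # speed of light, m/s

def two_way_doppler_shift(f_uplink_hz: float, turnaround_ratio: float,
                          radial_velocity_mps: float) -> float:
    """Approximate two-way Doppler shift seen at the ground station.

    The transponder retransmits at turnaround_ratio * received frequency;
    positive radial velocity means the spacecraft is receding.
    """
    f_downlink = turnaround_ratio * f_uplink_hz
    return -2.0 * (radial_velocity_mps / C) * f_downlink   # first-order approximation

# Assumed example: 7.2 GHz uplink, 880/749 turnaround, 1 km/s line-of-sight velocity.
shift = two_way_doppler_shift(7.2e9, 880.0 / 749.0, 1000.0)
print(f"Doppler shift of roughly {shift:.1f} Hz")
```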
This is especially useful in determining the free electron content of a planetary ionosphere. Spacecraft using RSS Cassini–Huygens Mariner 2, 4,5,6,7,9, and 10 Voyager 1 and 2 MESSENGER Venus Express Functions Determine composition of gas clouds such as atmospheres, solar coronas. Characterize gravitational fields Estimate m" https://en.wikipedia.org/wiki/MUSHRA,"MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3. The MUSHRA methodology is recommended for assessing ""intermediate audio quality"". For very small audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead. The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results. This is because all codecs are presented at the same time, on the same samples, so that a paired t-test or a repeated measures analysis of variance can be used for statistical analysis. Also, the 0–100 scale used by MUSHRA makes it possible to rate very small differences. In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The recommendation specifies that a low-range and a mid-range anchor should be included in the test signals. These are typically a 7 kHz and a 3.5 kHz low-pass version of the reference. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized. This is particularly important when comparing or pooling results from different labs. Listener behavior Both, MUSHRA and ITU BS.1116 tests call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also have a better internalization of the rating scale which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer listeners are needed to achieve statistically significant results. It is assumed that preferences are similar for expert listeners and naive listeners and thus results of expert listeners are also predic" https://en.wikipedia.org/wiki/Blind%20deconvolution,"In electrical engineering and applied mathematics, blind deconvolution is deconvolution without explicit knowledge of the impulse response function used in the convolution. This is usually achieved by making appropriate assumptions of the input to estimate the impulse response by analyzing the output. Blind deconvolution is not solvable without making assumptions on input and impulse response. Most of the algorithms to solve this problem are based on assumption that both input and impulse response live in respective known subspaces. However, blind deconvolution remains a very challenging non-convex optimization problem even with this assumption. In image processing In image processing, blind deconvolution is a deconvolution technique that permits recovery of the target scene from a single or set of ""blurred"" images in the presence of a poorly determined or unknown point spread function (PSF). Regular linear and non-linear deconvolution techniques utilize a known PSF. For blind deconvolution, the PSF is estimated from the image or image set, allowing the deconvolution to be performed. 
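The statistical point in the MUSHRA excerpt above, that rating every codec on the same items by the same listeners permits paired analysis, can be illustrated with a paired t-test. The scores below are invented for the example.

```python
from scipy import stats

# Invented MUSHRA scores (0-100) from eight listeners for two codecs,
# rated on the same test items, so a paired comparison is appropriate.
codec_a = [78, 82, 75, 90, 68, 85, 80, 77]
codec_b = [70, 76, 73, 84, 60, 79, 74, 72]

t_stat, p_value = stats.ttest_rel(codec_a, codec_b)   # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```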
Researchers have been studying blind deconvolution methods for several decades, and have approached the problem from different directions. Most of the work on blind deconvolution started in early 1970s. Blind deconvolution is used in astronomical imaging and medical imaging. Blind deconvolution can be performed iteratively, whereby each iteration improves the estimation of the PSF and the scene, or non-iteratively, where one application of the algorithm, based on exterior information, extracts the PSF. Iterative methods include maximum a posteriori estimation and expectation-maximization algorithms. A good estimate of the PSF is helpful for quicker convergence but not necessary. Examples of non-iterative techniques include SeDDaRA, the cepstrum transform and APEX. The cepstrum transform and APEX methods assume that the PSF has a specific shape, and one must estimate the width of t" https://en.wikipedia.org/wiki/Chv%C3%A1tal%E2%80%93Sankoff%20constants,"In mathematics, the Chvátal–Sankoff constants are mathematical constants that describe the lengths of longest common subsequences of random strings. Although the existence of these constants has been proven, their exact values are unknown. They are named after Václav Chvátal and David Sankoff, who began investigating them in the mid-1970s. There is one Chvátal–Sankoff constant for each positive integer k, where k is the number of characters in the alphabet from which the random strings are drawn. The sequence of these numbers grows inversely proportionally to the square root of k. However, some authors write ""the Chvátal–Sankoff constant"" to refer to , the constant defined in this way for the binary alphabet. Background A common subsequence of two strings S and T is a string whose characters appear in the same order (not necessarily consecutively) both in S and in T. The problem of computing a longest common subsequence has been well studied in computer science. It can be solved in polynomial time by dynamic programming; this basic algorithm has additional speedups for small alphabets (the Method of Four Russians), for strings with few differences, for strings with few matching pairs of characters, etc. This problem and its generalizations to more complex forms of edit distance have important applications in areas that include bioinformatics (in the comparison of DNA and protein sequences and the reconstruction of evolutionary trees), geology (in stratigraphy), and computer science (in data comparison and revision control). One motivation for studying the longest common subsequences of random strings, given already by Chvátal and Sankoff, is to calibrate the computations of longest common subsequences on strings that are not random. If such a computation returns a subsequence that is significantly longer than what would be obtained at random, one might infer from this result that the match is meaningful or significant. Definition and existence The Chvátal–Sanko" https://en.wikipedia.org/wiki/Miraculin,"Miraculin is a taste modifier, a glycoprotein extracted from the fruit of Synsepalum dulcificum. The berry, also known as the miracle fruit, was documented by explorer Chevalier des Marchais, who searched for many different fruits during a 1725 excursion to its native West Africa. Miraculin itself does not taste sweet. When taste buds are exposed to miraculin, the protein binds to the sweetness receptors. This causes normally sour-tasting acidic foods, such as citrus, to be perceived as sweet. The effect can last for one or two hours. 
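Since the expected length of the longest common subsequence of two random length-n strings over a k-letter alphabet grows like γ_k·n, the binary constant from the excerpt above can be estimated by simulation with the standard dynamic-programming LCS. The string length and trial count below are arbitrary choices, and the estimate approaches the constant only slowly, from below.

```python
import random

def lcs_length(s: str, t: str) -> int:
    """Longest common subsequence length by standard dynamic programming."""
    prev = [0] * (len(t) + 1)
    for a in s:
        curr = [0]
        for j, b in enumerate(t, start=1):
            curr.append(prev[j - 1] + 1 if a == b else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def estimate_gamma(k: int = 2, n: int = 500, trials: int = 20) -> float:
    """Monte Carlo estimate of the Chvatal-Sankoff constant for a k-letter alphabet."""
    alphabet = [chr(ord('a') + i) for i in range(k)]
    total = 0
    for _ in range(trials):
        s = ''.join(random.choices(alphabet, k=n))
        t = ''.join(random.choices(alphabet, k=n))
        total += lcs_length(s, t)
    return total / (trials * n)

print(round(estimate_gamma(), 3))   # roughly 0.8 for the binary alphabet
```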
History The sweetening properties of Synsepalum dulcificum berries were first noted by des Marchais during expeditions to West Africa in the 18th century. The term miraculin derived from experiments to isolate and purify the active glycoprotein that gave the berries their sweetening effects, results that were published simultaneously by Japanese and Dutch scientists working independently in the 1960s (the Dutch team called the glycoprotein mieraculin). The word miraculin was in common use by the mid-1970s. Glycoprotein structure Miraculin was first sequenced in 1989 and was found to be a 24.6 kilodalton glycoprotein consisting of 191 amino acids and 13.9% by weight of various sugars. The sugars consist of a total of 3.4 kDa, composed of a molar ratio of glucosamine (31%), mannose (30%), fucose (22%), xylose (10%), and galactose (7%). The native state of miraculin is a tetramer consisting of two dimers, each held together by a disulfide bridge. Both tetramer miraculin and native dimer miraculin in its crude state have the taste-modifying activity of turning sour tastes into sweet tastes. Miraculin belongs to the Kunitz STI protease inhibitor family. Sweetness properties Miraculin, unlike curculin (another taste-modifying agent), is not sweet by itself, but it can change the perception of sourness to sweetness, even for a long period after consumption. The duration and intensity of the sweetness-modifying effect depends on vari" https://en.wikipedia.org/wiki/Biological%20rhythm,"Biological rhythms are repetitive biological processes. Some types of biological rhythms have been described as biological clocks. They can range in frequency from microseconds to less than one repetitive event per decade. Biological rhythms are studied by chronobiology. In the biochemical context biological rhythms are called biochemical oscillations. The variations of the timing and duration of biological activity in living organisms occur for many essential biological processes. These occur (a) in animals (eating, sleeping, mating, hibernating, migration, cellular regeneration, etc.), (b) in plants (leaf movements, photosynthetic reactions, etc.), and in microbial organisms such as fungi and protozoa. They have even been found in bacteria, especially among the cyanobacteria (aka blue-green algae, see bacterial circadian rhythms). Circadian rhythm The best studied rhythm in chronobiology is the circadian rhythm, a roughly 24-hour cycle shown by physiological processes in all these organisms. The term circadian comes from the Latin circa, meaning ""around"" and dies, ""day"", meaning ""approximately a day."" It is regulated by circadian clocks. The circadian rhythm can further be broken down into routine cycles during the 24-hour day: Diurnal, which describes organisms active during daytime Nocturnal, which describes organisms active in the night Crepuscular, which describes animals primarily active during the dawn and dusk hours (ex: white-tailed deer, some bats) While circadian rhythms are defined as regulated by endogenous processes, other biological cycles may be regulated by exogenous signals. In some cases, multi-trophic systems may exhibit rhythms driven by the circadian clock of one of the members (which may also be influenced or reset by external factors). The endogenous plant cycles may regulate the activity of the bacterium by controlling availability of plant-produced photosynthate. 
Other cycles Many other important cycles are also studied, includin" https://en.wikipedia.org/wiki/List%20of%20algebraic%20constructions,"An algebraic construction is a method by which an algebraic entity is defined or derived from another. Instances include: Cayley–Dickson construction Proj construction Grothendieck group Gelfand–Naimark–Segal construction Ultraproduct ADHM construction Burnside ring Simplicial set Fox derivative Mapping cone (homological algebra) Prym variety Todd class Adjunction (field theory) Vaughan Jones construction Strähle construction Coset construction Plus construction Algebraic K-theory Gelfand–Naimark–Segal construction Stanley–Reisner ring construction Quotient ring construction Ward's twistor construction Hilbert symbol Hilbert's arithmetic of ends Colombeau's construction Vector bundle Integral monoid ring construction Integral group ring construction Category of Eilenberg–Moore algebras Kleisli category Adjunction (field theory) Lindenbaum–Tarski algebra construction Freudenthal magic square Stone–Čech compactification Mathematics-related lists Algebra" https://en.wikipedia.org/wiki/Lists%20of%20mathematics%20topics,"Lists of mathematics topics cover a variety of topics related to mathematics. Some of these lists link to hundreds of articles; some link only to a few. The template to the right includes links to alphabetical lists of all mathematical articles. This article brings together the same content organized in a manner better suited for browsing. Lists cover aspects of basic and advanced mathematics, methodology, mathematical statements, integrals, general concepts, mathematical objects, and reference tables. They also cover equations named after people, societies, mathematicians, journals, and meta-lists. The purpose of this list is not similar to that of the Mathematics Subject Classification formulated by the American Mathematical Society. Many mathematics journals ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The subject codes so listed are used by the two major reviewing databases, Mathematical Reviews and Zentralblatt MATH. This list has some items that would not fit in such a classification, such as list of exponential topics and list of factorial and binomial topics, which may surprise the reader with the diversity of their coverage. Basic mathematics This branch is typically taught in secondary education or in the first year of university. Outline of arithmetic Outline of discrete mathematics List of calculus topics List of geometry topics Outline of geometry List of trigonometry topics Outline of trigonometry List of trigonometric identities List of logarithmic identities List of integrals of logarithmic functions List of set identities and relations List of topics in logic Areas of advanced mathematics As a rough guide, this list is divided into pure and applied sections although in reality, these branches are overlapping and intertwined. Pure mathematics Algebra Algebra includes the study of algebraic structures, which are sets and operations defined o" https://en.wikipedia.org/wiki/WinGate,"WinGate is an integrated multi-protocol proxy server, email server and internet gateway from Qbik New Zealand Limited in Auckland. It was first released in October 1995, and began as a re-write of SocketSet, a product that had been previously released in prototype form by Adrien de Croy. 
WinGate proved popular, and by the mid- to late 1990s, WinGate was used in homes and small businesses that needed to share a single Internet connection between multiple networked computers. The introduction of Internet Connection Sharing in Windows 98, combined with increasing availability of cheap NAT-enabled routers, forced WinGate to evolve to provide more than just internet connection sharing features. Today, focus for WinGate is primarily access control, email server, caching, reporting, bandwidth management and content filtering. WinGate comes in three versions, Standard, Professional and Enterprise. The Enterprise edition also provides an easily configured virtual private network system, which is also available separately as WinGate VPN. Licensing is based on the number of concurrently connected users, and a range of license sizes are available. Multiple licenses can also be aggregated. The current version of WinGate is version 9.4.5, released in October 2022. Notoriety Versions of WinGate prior to 2.1d (1997) shipped with an insecure default configuration that - if not secured by the network administrator - allowed untrusted third parties to proxy network traffic through the WinGate server. This made open WinGate servers common targets of crackers looking for anonymous redirectors through which to attack other systems. While WinGate was by no means the only exploited proxy server, its wide popularity amongst users with little experience administering networks made it almost synonymous with open SOCKS proxies in the late 1990s. Furthermore, since a restricted (two users) version of the product was freely available without registration, contacting all WinGate users t" https://en.wikipedia.org/wiki/Sierpi%C5%84ski%27s%20constant,"Sierpiński's constant is a mathematical constant usually denoted as K. One way of defining it is as the following limit: where r2(k) is a number of representations of k as a sum of the form a2 + b2 for integer a and b. It can be given in closed form as: where is Gauss's constant and is the Euler-Mascheroni constant. Another way to define/understand Sierpiński's constant is, Let r(n) denote the number of representations of  by  squares, then the Summatory Function of has the Asymptotic expansion , where  is the Sierpinski constant. The above plot shows , with the value of  indicated as the solid horizontal line. See also Wacław Sierpiński External links http://www.plouffe.fr/simon/constants/sierpinski.txt - Sierpiński's constant up to 2000th decimal digit. https://archive.lib.msu.edu/crcmath/math/math/s/s276.htm Mathematical constants" https://en.wikipedia.org/wiki/List%20of%20mathematical%20abbreviations,"This following list features abbreviated names of mathematical functions, function-like operators and other mathematical terminology. This list is limited to abbreviations of two or more letters (excluding number sets). The capitalization of some of these abbreviations is not standardized – different authors might use different capitalizations. A – adele ring or algebraic numbers. AC – Axiom of Choice, or set of absolutely continuous functions. a.c. – absolutely continuous. acrd – inverse chord function. ad – adjoint representation (or adjoint action) of a Lie group. adj – adjugate of a matrix. a.e. – almost everywhere. Ai – Airy function. AL – Action limit. Alt – alternating group (Alt(n) is also written as An.) A.M. – arithmetic mean. arccos – inverse cosine function. arccosec – inverse cosecant function. (Also written as arccsc.) 
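The defining limit for Sierpiński's constant is elided in the excerpt above. Assuming the commonly quoted form K = lim_{n→∞} ( Σ_{k=1}^{n} r2(k)/k − π ln n ), which matches the description of r2(k) given there, a brute-force numerical sketch looks like this; the cutoff n is arbitrary and convergence is slow.

```python
import math

def r2(k: int) -> int:
    """Number of representations k = a^2 + b^2 over ordered integer pairs (a, b)."""
    count = 0
    a = 0
    while a * a <= k:
        b2 = k - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            # count the sign combinations of (+-a, +-b), with zero counted once
            count += (1 if a == 0 else 2) * (1 if b == 0 else 2)
        a += 1
    return count

def sierpinski_estimate(n: int = 50_000) -> float:
    """Partial sum of r2(k)/k minus pi*ln(n), assuming the definition stated above."""
    total = sum(r2(k) / k for k in range(1, n + 1))
    return total - math.pi * math.log(n)

print(round(sierpinski_estimate(), 3))   # approaches K, approximately 2.58
```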
arccot – inverse cotangent function. arccsc – inverse cosecant function. (Also written as arccosec.) arcexc – inverse excosecant function. (Also written as arcexcsc, arcexcosec.) arcexcosec – inverse excosecant function. (Also written as arcexcsc, arcexc.) arcexcsc – inverse excosecant function. (Also written as arcexcosec, arcexc.) arcexs – inverse exsecant function. (Also written as arcexsec.) arcexsec – inverse exsecant function. (Also written as arcexs.) arcosech – inverse hyperbolic cosecant function. (Also written as arcsch.) arcosh – inverse hyperbolic cosine function. arcoth – inverse hyperbolic cotangent function. arcsch – inverse hyperbolic cosecant function. (Also written as arcosech.) arcsec – inverse secant function. arcsin – inverse sine function. arctan – inverse tangent function. arctan2 – inverse tangent function with two arguments. (Also written as atan2.) arg – argument of. arg max – argument of the maximum. arg min – argument of the minimum. arsech – inverse hyperbolic secant function. arsinh – inverse hyperbolic sine function. artanh – inverse hyperbolic tangent function. a.s. – a" https://en.wikipedia.org/wiki/Relict,"A relict is a surviving remnant of a natural phenomenon. Biology A relict (or relic) is an organism that at an earlier time was abundant in a large area but now occurs at only one or a few small areas. Geology and geomorphology In geology, a relict is a structure or mineral from a parent rock that did not undergo metamorphosis when the surrounding rock did, or a rock that survived a destructive geologic process. In geomorphology, a relict landform is a landform formed by either erosive or constructive surficial processes that are no longer active as they were in the past. A glacial relict is a cold-adapted organism that is a remnant of a larger distribution that existed in the ice ages. Human populations As revealed by DNA testing, a relict population is an ancient people in an area, who have been largely supplanted by a later group of migrants and their descendants. In various places around the world, minority ethnic groups represent lineages of ancient human migrations in places now occupied by more populous ethnic groups, whose ancestors arrived later. For example, the first human groups to inhabit the Caribbean islands were hunter-gatherer tribes from South and Central America. Genetic testing of natives of Cuba show that, in late pre-Columbian times, the island was home to agriculturalists of Taino ethnicity. In addition, a relict population of the original hunter-gatherers remained in western Cuba as the Ciboney people. Other uses In ecology, an ecosystem which originally ranged over a large expanse, but is now narrowly confined, may be termed a relict. In agronomy, a relict crop is a crop which was previously grown extensively, but is now only used in one limited region, or a small number of isolated regions. In real estate law, reliction is the gradual recession of water from its usual high-water mark so that the newly uncovered land becomes the property of the adjoining riparian property owner. ""Relict"" was an ancient term still used in coloni" https://en.wikipedia.org/wiki/PRODIGAL,"PRODIGAL (proactive discovery of insider threats using graph analysis and learning) is a computer system for predicting anomalous behavior among humans, by data mining network traffic such as emails, text messages and server log entries. It is part of DARPA's Anomaly Detection at Multiple Scales (ADAMS) project. 
The initial schedule is for two years and the budget $9 million. It uses graph theory, machine learning, statistical anomaly detection, and high-performance computing to scan larger sets of data more quickly than in past systems. The amount of data analyzed is in the range of terabytes per day. The targets of the analysis are employees within the government or defense contracting organizations; specific examples of behavior the system is intended to detect include the actions of Nidal Malik Hasan and WikiLeaks source Chelsea Manning. Commercial applications may include finance. The results of the analysis, the five most serious threats per day, go to agents, analysts, and operators working in counterintelligence. Primary participants Georgia Institute of Technology College of Computing Georgia Tech Research Institute Defense Advanced Research Projects Agency Army Research Office Science Applications International Corporation Oregon State University University of Massachusetts Amherst Carnegie Mellon University See also Cyber Insider Threat Einstein (US-CERT program) Threat (computer) Intrusion detection ECHELON, Thinthread, Trailblazer, Turbulence (NSA programs) Fusion center, Investigative Data Warehouse (FBI)" https://en.wikipedia.org/wiki/Zipf%27s%20law,"Zipf's law (, ) is an empirical law that often holds, approximately, when a list of measured values is sorted in decreasing order. It states that the value of the nth entry is inversely proportional to n. The best known instance of Zipf's law applies to the frequency table of words in a text or corpus of natural language: It is usually found that the most common word occurs approximately twice as often as the next common one, three times as often as the third most common, and so on. For example, in the Brown Corpus of American English text, the word ""the"" is the most frequently occurring word, and by itself accounts for nearly 7% of all word occurrences (69,971 out of slightly over 1 million). True to Zipf's Law, the second-place word ""of"" accounts for slightly over 3.5% of words (36,411 occurrences), followed by ""and"" (28,852). It is often used in the following form, called Zipf-Mandelbrot law:where are fitted parameters, with , and . This ""law"" is named after the American linguist George Kingsley Zipf, and is still an important concept in quantitative linguistics. It has been found to apply to many other types of data studied in the physical and social sciences. In mathematical statistics, the concept has been formalized as the Zipfian distribution: a family of related discrete probability distributions whose rank-frequency distribution is an inverse power law relation. They are related to Benford's law and the Pareto distribution. Some sets of time-dependent empirical data deviate somewhat from Zipf's law. Such empirical distributions are said to be quasi-Zipfian. History In 1913, the German physicist Felix Auerbach observed an inverse proportionality between the population sizes of cities, and their ranks when sorted by decreasing order of that variable. Zipf's law has been discovered before Zipf, by the French stenographer Jean-Baptiste Estoup' Gammes Stenographiques (4th ed) in 1916, with G. Dewey in 1923, and with E. Condon in 1928. The same relat" https://en.wikipedia.org/wiki/Macrocell%20array," Macrocell arrays in PLDs Programmable logic devices, such as programmable array logic and complex programmable logic devices, typically have a macrocell on every output pin. 
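The Brown Corpus counts quoted in the Zipf's law excerpt above make the rank-frequency claim easy to check: if frequency is inversely proportional to rank, then rank times count should stay roughly constant.

```python
# Word counts for the top three words of the Brown Corpus, as quoted above.
counts = {"the": 69_971, "of": 36_411, "and": 28_852}

for rank, (word, count) in enumerate(
        sorted(counts.items(), key=lambda kv: -kv[1]), start=1):
    # Under Zipf's law, count is approximately C / rank, so rank * count is near-constant.
    print(f"rank {rank}: {word!r:6} count={count:6d} rank*count={rank * count}")
# rank*count comes out as 69971, 72822, 86556: the same order of magnitude,
# consistent with the law holding only approximately.
```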
Macrocell arrays in ASICs A macrocell array is an approach to the design and manufacture of ASICs. Essentially, it is a small step up from the otherwise similar gate array, but rather than being a prefabricated array of simple logic gates, the macrocell array is a prefabricated array of higher-level logic functions such as flip-flops, ALU functions, registers, and the like. These logic functions are simply placed at regular predefined positions and manufactured on a wafer, usually called a master slice. Creation of a circuit with a specified function is accomplished by adding metal interconnects to the chips on the master slice late in the manufacturing process, allowing the function of the chip to be customised as desired. Macrocell array master slices are usually prefabricated and stockpiled in large quantities regardless of customer orders. Fabrication according to the individual customer specifications may be finished in a shorter time compared with standard cell or full custom design. The macrocell array approach reduces the mask costs since fewer custom masks need to be produced. In addition, manufacturing test tooling lead time and costs are reduced since the same test fixtures may be used for all macrocell array products manufactured on the same die size. Drawbacks are somewhat lower density and performance than other approaches to ASIC design. However, this style is often a viable approach for low production volumes. A standard cell library is sometimes called a ""macrocell library""." https://en.wikipedia.org/wiki/Antibiosis,"Antibiosis is a biological interaction between two or more organisms that is detrimental to at least one of them; it can also be an antagonistic association between an organism and the metabolic substances produced by another. Examples of antibiosis include the relationship between antibiotics and bacteria or animals and disease-causing pathogens. The study of antibiosis and its role in antibiotics has led to the expansion of knowledge in the field of microbiology. Molecular processes such as cell wall synthesis and recycling, for example, have become better understood through the study of how antibiotics affect beta-lactam development through the antibiosis relationship and interaction of the particular drugs with the bacteria subjected to the compound. Antibiosis is typically studied in host plant populations and extends to the insects which feed upon them. ""Antibiosis resistance affects the biology of the insect so pest abundance and subsequent damage is reduced compared to that which would have occurred if the insect was on a susceptible crop variety. Antibiosis resistance often results in increased mortality or reduced longevity and reproduction of the insect."" During a study of antibiosis, it was determined that the means of achieving effective antibiosis is remaining still. ""When you give antibiotic-producing bacteria a structured medium, they affix to substrate, grow clonally, and produce a “no man's land,” absent competitors, where the antibiotics diffuse outward."" Antibiosis is most effective when resources are neither plentiful nor sparse; it can be thought of as performing best at the midpoint of the resource scale. See also Antibiotic Biological pest control Biotechnology Symbiosis" https://en.wikipedia.org/wiki/System%20on%20a%20chip,"A system on a chip or system-on-chip (SoC; pl. SoCs) is an integrated circuit that integrates most or all components of a computer or other electronic system.
These components almost always include on-chip central processing unit (CPU), memory interfaces, input/output devices and interfaces, and secondary storage interfaces, often alongside other components such as radio modems and a graphics processing unit (GPU) – all on a single substrate or microchip. SoCs may contain digital and also analog, mixed-signal and often radio frequency signal processing functions (otherwise it may be considered on a discrete application processor). Higher-performance SoCs are often paired with dedicated and physically separate memory and secondary storage (such as LPDDR and eUFS or eMMC, respectively) chips, that may be layered on top of the SoC in what's known as a package on package (PoP) configuration, or be placed close to the SoC. Additionally, SoCs may use separate wireless modems. SoCs are in contrast to the common traditional PC architecture, which separates hardware components based on function and connects them through a central interfacing circuit board called the motherboard. Whereas a motherboard houses and connects detachable or replaceable components, SoCs integrate all of these components into a single integral circuit. An SoC will typically integrate a CPU, graphics and memory interfaces, secondary storage and USB connectivity, I/O interfaces on a single chip, whereas a motherboard would connect these modules as discrete components or expansion cards. An SoC integrates a microcontroller, microprocessor or perhaps several processor cores with peripherals like a GPU, Wi-Fi and cellular network radio modems, and/or one or more coprocessors. Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with even more advanced peripherals. Compared to a multi-chip architecture, " https://en.wikipedia.org/wiki/AP%20Biology,"Advanced Placement (AP) Biology (also known as AP Bio) is an Advanced Placement biology course and exam offered by the College Board in the United States. For the 2012–2013 school year, the College Board unveiled a new curriculum with a greater focus on ""scientific practices"". This course is designed for students who wish to pursue an interest in the life sciences. The College Board recommends successful completion of high school biology and high school chemistry before commencing AP Biology, although the actual prerequisites vary from school to school and from state to state. This course, nevertheless, is considered very challenging and one of the most difficult AP classes, as shown with AP Finals grade distributions. Topic outline The exam covers the following 8 units. The percentage indicates the portion of the multiple-choice section of the exam focused on each content area: The course is based on and tests six skills, called scientific practices which include: In addition to the topics above, students are required to be familiar with general lab procedure. Students should know how to collect data, analyze data to form conclusions, and apply those conclusions. Exam Students are allowed to use a four-function, scientific, or graphing calculator. The exam has two sections: a 90 minute multiple choice section and a 90 minute free response section. There are 60 multiple choice questions and six free responses, two long and four short. Both sections are worth 50% of the score. 
Score distribution Commonly used textbooks Biology, AP Edition by Sylvia Mader (2012, hardcover ) Life: The Science of Biology (Sadava, Heller, Orians, Purves, and Hillis, ) Campbell Biology AP Ninth Edition (Reece, Urry, Cain, Wasserman, Minorsky, and Andrew Jackson ) See also Glossary of biology A.P Bio (TV Show)" https://en.wikipedia.org/wiki/British%20Post%20Office%20scandal,"The British Post Office scandal is a miscarriage of justice involving the wrongful civil and criminal prosecutions of an unknown or unpublished number of sub-postmasters (SPMs) for theft, false accounting and/or fraud. The cases constitute the most widespread miscarriage of justice in British legal history, spanning a period of over twenty years; aspects of the scandal remain unresolved. A group action (which is the English equivalent of a class action) was brought by SPMs in London's High Court. 555 SPMs successfully sued the Post Office. This resulted in two important judgments in favour of the SPMs. Since then the convictions have been overturned, compensation for other SPMs who did not take part in the case and what has now become a public inquiry into what happened, which is still ongoing. The first important judgment in the group action was handed down on 15 March 2019 and was about the contracts between the Post Office and SPMs and whether the Post Office could carry on making SPMs liable for gaps in the accounts on the Post Office's Horizon software system, even when the cause of those gaps or shortfalls was not known. The Judge, Mr Justice Fraser, gave a comprehensive and detailed judgment (running to 1122 paragraphs) in which he analysed the legal relationship. The Judge found overwhelmingly in favour of the SPMs. As well as a victory for the SPMs, it is also considered an important case for lawyers because the Judge held that the Post Office owed SPMs an implied duty of ""good faith"" and ""fair dealing"". The second important judgment was about whether the Horizon computer system worked and was ""robust"" (which the Post Office claimed it was). Again, the Judge found overwhelmingly in favour of the SPMs and that the original version of Horizon was ""not robust"" and, as to the later version, ""its robustness was questionable, and did not justify the confidence placed in it by the Post Office in terms of its accuracy"" (para 936). Since then, those conv" https://en.wikipedia.org/wiki/List%20of%20numeral%20system%20topics,"This is a list of Wikipedia articles on topics of numeral system and ""numeric representations"" See also: computer numbering formats and number names. 
Arranged by base Radix, radix point, mixed radix, base (mathematics) Unary numeral system (base 1) Binary numeral system (base 2) Negative base numeral system (base −2) Ternary numeral system numeral system (base 3) Balanced ternary numeral system (base 3) Negative base numeral system (base −3) Quaternary numeral system (base 4) Quater-imaginary base (base 2) Quinary numeral system (base 5) Senary numeral system (base 6) Septenary numeral system (base 7) Octal numeral system (base 8) Nonary (novenary) numeral system (base 9) Decimal (denary) numeral system (base 10) Negative base numeral system (base −10) Duodecimal (dozenal) numeral system (base 12) Hexadecimal numeral system (base 16) Vigesimal numeral system (base 20) Sexagesimal numeral system (base 60) Arranged by culture Other Numeral system topics" https://en.wikipedia.org/wiki/Nomogram,"A nomogram (from Greek , ""law"" and , ""line""), also called a nomograph, alignment chart, or abac, is a graphical calculating device, a two-dimensional diagram designed to allow the approximate graphical computation of a mathematical function. The field of nomography was invented in 1884 by the French engineer Philbert Maurice d'Ocagne (1862–1938) and used extensively for many years to provide engineers with fast graphical calculations of complicated formulas to a practical precision. Nomograms use a parallel coordinate system invented by d'Ocagne rather than standard Cartesian coordinates. A nomogram consists of a set of n scales, one for each variable in an equation. Knowing the values of n-1 variables, the value of the unknown variable can be found, or by fixing the values of some variables, the relationship between the unfixed ones can be studied. The result is obtained by laying a straightedge across the known values on the scales and reading the unknown value from where it crosses the scale for that variable. The virtual or drawn line created by the straightedge is called an index line or isopleth. Nomograms flourished in many different contexts for roughly 75 years because they allowed quick and accurate computations before the age of pocket calculators. Results from a nomogram are obtained very quickly and reliably by simply drawing one or more lines. The user does not have to know how to solve algebraic equations, look up data in tables, use a slide rule, or substitute numbers into equations to obtain results. The user does not even need to know the underlying equation the nomogram represents. In addition, nomograms naturally incorporate implicit or explicit domain knowledge into their design. For example, to create larger nomograms for greater accuracy the nomographer usually includes only scale ranges that are reasonable and of interest to the problem. Many nomograms include other useful markings such as reference labels and colored regions. All of thes" https://en.wikipedia.org/wiki/Subphylum,"In zoological nomenclature, a subphylum is a taxonomic rank below the rank of phylum. The taxonomic rank of ""subdivision"" in fungi and plant taxonomy is equivalent to ""subphylum"" in zoological taxonomy. Some plant taxonomists have also used the rank of subphylum, for instance monocotyledons as a subphylum of phylum Angiospermae and vertebrates as a subphylum of phylum Chordata. Taxonomic rank Subphylum is: subordinate to the phylum superordinate to the infraphylum. Where convenient, subphyla in turn can be divided into infraphyla; in turn such an infraphylum also would be superordinate to any classes or superclasses in the hierarchy. 
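Many of the positional systems in the list above, including the negative bases, can be handled by one generic conversion routine. A small sketch; the non-negative-digit convention used for negative bases is the usual one for negabinary and similar systems, and the example values are my own.

```python
DIGITS = "0123456789ABCDEFGHIJ"

def to_base(n: int, base: int) -> str:
    """Represent integer n in the given base (2 <= |base| <= 20), negative bases included."""
    if n == 0:
        return "0"
    out = []
    while n != 0:
        n, rem = divmod(n, base)
        if rem < 0:          # for negative bases, force digits into 0..|base|-1
            rem -= base
            n += 1
        out.append(DIGITS[rem])
    return "".join(reversed(out))

print(to_base(13, 2))     # '1101'  (binary)
print(to_base(13, -2))    # '11101' (negabinary: 16 - 8 + 4 + 0 + 1 = 13)
print(to_base(2023, 16))  # '7E7'   (hexadecimal)
print(to_base(100, 12))   # '84'    (duodecimal)
```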
Examples Not all fauna phyla are divided into subphyla. Those that are include: Arthropoda: divided into subphyla Trilobitomorpha, Chelicerata, Myriapoda, Hexapoda and Crustacea, Brachiopoda: divided into subphyla Linguliformea, Craniformea and Rhynchonelliformea, Chordata: divided into Tunicata, Cephalochordata, and its largest subphylum Vertebrata. Examples of infraphyla include the Mycetozoa, the Gnathostomata and the Agnatha." https://en.wikipedia.org/wiki/List%20of%20number%20theory%20topics,"This is a list of number theory topics. See also: List of recreational number theory topics Topics in cryptography Divisibility Composite number Highly composite number Even and odd numbers Parity Divisor, aliquot part Greatest common divisor Least common multiple Euclidean algorithm Coprime Euclid's lemma Bézout's identity, Bézout's lemma Extended Euclidean algorithm Table of divisors Prime number, prime power Bonse's inequality Prime factor Table of prime factors Formula for primes Factorization RSA number Fundamental theorem of arithmetic Square-free Square-free integer Square-free polynomial Square number Power of two Integer-valued polynomial Fractions Rational number Unit fraction Irreducible fraction = in lowest terms Dyadic fraction Recurring decimal Cyclic number Farey sequence Ford circle Stern–Brocot tree Dedekind sum Egyptian fraction Modular arithmetic Montgomery reduction Modular exponentiation Linear congruence theorem Method of successive substitution Chinese remainder theorem Fermat's little theorem Proofs of Fermat's little theorem Fermat quotient Euler's totient function Noncototient Nontotient Euler's theorem Wilson's theorem Primitive root modulo n Multiplicative order Discrete logarithm Quadratic residue Euler's criterion Legendre symbol Gauss's lemma (number theory) Congruence of squares Luhn formula Mod n cryptanalysis Arithmetic functions Multiplicative function Additive function Dirichlet convolution Erdős–Kac theorem Möbius function Möbius inversion formula Divisor function Liouville function Partition function (number theory) Integer partition Bell numbers Landau's function Pentagonal number theorem Bell series Lambert series Analytic number theory: additive problems Twin prime Brun's constant Cousin prime Prime triplet Prime quadruplet Sexy prime Sophie Germain prime Cunningham chain Goldbach's conjecture Goldbach's weak conjecture Second Hardy–Littlewood conjecture Hardy–Littlewood circle method Schinzel's hypothesis H Batema" https://en.wikipedia.org/wiki/Bacteria,"Bacteria (; : bacterium) are ubiquitous, mostly free-living organisms often consisting of one biological cell. They constitute a large domain of prokaryotic microorganisms. Typically a few micrometres in length, bacteria were among the first life forms to appear on Earth, and are present in most of its habitats. Bacteria inhabit soil, water, acidic hot springs, radioactive waste, and the deep biosphere of Earth's crust. Bacteria play a vital role in many stages of the nutrient cycle by recycling nutrients and the fixation of nitrogen from the atmosphere. The nutrient cycle includes the decomposition of dead bodies; bacteria are responsible for the putrefaction stage in this process. In the biological communities surrounding hydrothermal vents and cold seeps, extremophile bacteria provide the nutrients needed to sustain life by converting dissolved compounds, such as hydrogen sulphide and methane, to energy. Bacteria also live in symbiotic and parasitic relationships with plants and animals. 
Most bacteria have not been characterised and there are many species that cannot be grown in the laboratory. The study of bacteria is known as bacteriology, a branch of microbiology. Humans and most other animals carry vast numbers (approximately 10^13 to 10^14) of bacteria. Most are in the gut, and there are many on the skin. Most of the bacteria in and on the body are harmless or rendered so by the protective effects of the immune system, and many are beneficial, particularly the ones in the gut. However, several species of bacteria are pathogenic and cause infectious diseases, including cholera, syphilis, anthrax, leprosy, tuberculosis, tetanus and bubonic plague. The most common fatal bacterial diseases are respiratory infections. Antibiotics are used to treat bacterial infections and are also used in farming, making antibiotic resistance a growing problem. Bacteria are important in sewage treatment and the breakdown of oil spills, the production of cheese and yogurt through f" https://en.wikipedia.org/wiki/Matheass,"MatheAss (formerly Math-Assist) is a computer program for numerical solutions in school mathematics, similar in some respects to Microsoft Mathematics. ""MatheAss is widely spread in math classes"" in Germany. For schools in the German federal state of Hessen there is a state license, which allows all secondary schools to use MatheAss. Its functionality is limited compared to other numerical programs; for example, MatheAss has no script language and does no symbolic computation. On the other hand, it is easy to use and offers the user fully worked out solutions, in which only the necessary quantities need to be entered. MatheAss covers the topics algebra, geometry, analysis, stochastics, and linear algebra. After a precursor for the home computers common around 1980, MatheAss appeared in 1983 as a shareware version for the PC, making it one of the first shareware programs on the German market. MatheAss is available on the manufacturer's website for download for various versions of the Windows operating system. Since version 8.2 (released in February 2011) MatheAss again offers context-sensitive help, which has been supplemented in many places by mathematical examples and background information. The MatheAss help file can also be viewed online." https://en.wikipedia.org/wiki/Possible%20Worlds%20%28play%29,"Possible Worlds is a play written in 1990 by John Mighton. The author, Mighton, is a mathematician and philosopher. His plays tend to meld science, drama and math into one cohesive piece. It is part murder mystery, part science fiction, and part mathematical philosophy, and follows the multiple parallel lives of the main character George Barber. Mighton, a mathematician from University of Toronto's Fields Institute, brought his considerable professional experience to bear on the writing of the play. At the play's beginning, George is found dead, with his brain missing. Two detectives set out to uncover the truth behind his grisly death, and stumble upon several strange characters. The play may be classified as a sci-fi tragic drama. The play itself does not have any music. Possible Worlds won a Governor General's Literary Award for Drama in 1992 alongside Short History of Night. A film adaptation of the same name was released in 2000. Directed by Robert Lepage and starring Tom McCamus and Tilda Swinton, it garnered wide critical acclaim, won two Genie Awards, and was nominated for a further four. 
The theatre book was published in 1997 by Playwrights Canada Press. The play bears many conceptual similarities to Tom Stoppard's Hapgood, a play about spies and secret agents that takes place primarily in the men's changingroom of a municipal swimming baths. Production history (selected) Canadian Stage Company, Toronto, Ontario, Canada – Premiere 1990 Dionysus and Apollo Stage company, Dallas, Texas – 1997 Dr. Betty Mitchell Theatre, Calgary, Alberta, Canada –1999 Chicago Cultural Center Studio Theater, Chicago, Illinois – 1999 Company of the Silvershield, Toronto, Ontario, Canada - 2000 The group at Strasberg, Lee Strasberg Creative Center, Hollywood, California – 2001 Tron Theatre, Glasgow, Scotland – 2002 Hart House Theatre, Toronto, Canada – 2004 Sullivan Mahoney Court House Theatre, Ontario, Canada – 2008 Wakefield Players Theater Company, Wakefield, " https://en.wikipedia.org/wiki/Comb%20generator,"A comb generator is a signal generator that produces multiple harmonics of its input signal. The appearance of the output at the spectrum analyzer screen, resembling teeth of a comb, gave the device its name. Comb generators find wide range of uses in microwave technology. E.g., synchronous signals in wide frequency bandwidth can be produced by a comb generator. The most common use is in broadband frequency synthesizers, where the high frequency signals act as stable references correlated to the lower energy references; the outputs can be used directly, or to synchronize phase-locked loop oscillators. It may be also used to generate a complete set of substitution channels for testing, each of which carries the same baseband audio and video signal. Comb generators are also used in RFI testing of consumer electronics, where their output is used as a simulated RF emissions, as it is a stable broadband noise source with repeatable output. It is also used during compliance testing to various government requirements for products such as medical devices (FDA), military electronics (MIL-STD-461), commercial avionics (Federal Aviation Administration), digital electronics (Federal Communications Commission), in the USA. An optical comb generator can be used as generators of terahertz radiation. Internally, it is a resonant electro-optic modulator, with the capability of generating hundreds of sidebands with total span of at least 3 terahertz (limited by the optical dispersion of the lithium niobate crystal) and frequency spacing of 17 GHz. Other construction can be based on erbium-doped fiber laser or Ti-sapphire laser often in combination with carrier envelope offset control. See also Comb filter Frequency comb" https://en.wikipedia.org/wiki/Consolidation%20ratio,"Consolidation ratio within network infrastructure for Internet hosting, is the number of virtual servers that can run on each physical host machine. Many companies arrive at that figure through trial and error by stacking virtual machines on top of each other until performance slows to a crawl. “It’s sort of capacity planning by bloody nose,” observes Bob Gill, managing director of server research for analyst firm TheInfoPro Inc. of New York. The recent V-index showed that the average consolidation ratio is actually lower than was expected - 6.3:1 VMs per physical host (actual ratio) vs. 9.8:1 (perceived) See also Nagle's algorithm" https://en.wikipedia.org/wiki/RFIC,"RFIC is an abbreviation of radio-frequency integrated circuit. 
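Relating to the comb generator described above, the short numerical sketch below (sample rate, repetition rate, and the ideal one-sample pulse are all assumptions of this illustration) shows why the spectrum looks like the teeth of a comb: a train of narrow pulses repeating at f0 has spectral lines at every harmonic of f0.

```python
import numpy as np

# Quick illustration of the comb-generator idea: a train of narrow pulses
# repeating at f0 produces spectral lines at every harmonic of f0.
# The parameters and ideal pulse shape are assumptions for this sketch.
fs = 1_000_000                      # sample rate, Hz (1 s of data)
f0 = 10_000                         # pulse repetition rate, Hz
pulse_train = np.zeros(fs)
pulse_train[:: fs // f0] = 1.0      # one narrow pulse per period

spectrum = np.abs(np.fft.rfft(pulse_train)) / len(pulse_train)
freqs = np.fft.rfftfreq(len(pulse_train), d=1 / fs)
lines = freqs[spectrum > 0.5 * spectrum.max()]
print(lines[:5])                    # [0. 10000. 20000. 30000. 40000.]
```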
Applications for RFICs include radar and communications, although the term RFIC might be applied to any electrical integrated circuit operating in a frequency range suitable for wireless transmission. There is considerable interest in RFIC research due to the cost benefit of shifting as much of the wireless transceiver as possible to a single technology, which in turn would allow for a system on a chip solution as opposed to the more common system-on-package. This interest is bolstered by the pervasiveness of wireless capabilities in electronics. Current research focuses on integrating the RF power amplifier (PA) with CMOS technology, either by using MOSFETs or SiGe HBTs, on RF CMOS mixed-signal integrated circuit chips. RFIC-related research conferences RFIC is also used to refer to the annual RFIC Symposium, a research conference held as part of Microwave Week, which is headlined by the International Microwave Symposium. Other peer-reviewed research conferences are listed in the table below. Publications featuring RFIC research IEEE Journal of Solid-State Circuits IEEE Transactions on Microwave Theory and Techniques See also RF module Radio-frequency identification" https://en.wikipedia.org/wiki/List%20of%20properties%20of%20sets%20of%20reals,"This article lists some properties of sets of real numbers. The general study of these concepts forms descriptive set theory, which has a rather different emphasis from general topology. Definability properties Borel set Analytic set C-measurable set Projective set Inductive set Infinity-Borel set Suslin set Homogeneously Suslin set Weakly homogeneously Suslin set Set of uniqueness Regularity properties Property of Baire Lebesgue measurable Universally measurable set Perfect set property Universally Baire set Largeness and smallness properties Meager set Comeager set - A comeager set is one whose complement is meager. Null set Conull set Dense set Nowhere dense set Real numbers Real numbers" https://en.wikipedia.org/wiki/List%20of%20planar%20symmetry%20groups,"This article summarizes the classes of discrete symmetry groups of the Euclidean plane. The symmetry groups are named here by three naming schemes: International notation, orbifold notation, and Coxeter notation. There are three kinds of symmetry groups of the plane: 2 families of rosette groups – 2D point groups 7 frieze groups – 2D line groups 17 wallpaper groups – 2D space groups. Rosette groups There are two families of discrete two-dimensional point groups, and they are specified with parameter n, which is the order of the group of the rotations in the group. Frieze groups The 7 frieze groups, the two-dimensional line groups, with a direction of periodicity are given with five notational names. The Schönflies notation is given as infinite limits of 7 dihedral groups. The yellow regions represent the infinite fundamental domain in each. Wallpaper groups The 17 wallpaper groups, with finite fundamental domains, are given by International notation, orbifold notation, and Coxeter notation, classified by the 5 Bravais lattices in the plane: square, oblique (parallelogrammatic), hexagonal (equilateral triangular), rectangular (centered rhombic), and rhombic (centered rectangular). The p1 and p2 groups, with no reflectional symmetry, are repeated in all classes. The related pure reflectional Coxeter group are given with all classes except oblique. 
Wallpaper subgroup relationships See also List of spherical symmetry groups Orbifold notation#Hyperbolic plane - Hyperbolic symmetry groups Notes" https://en.wikipedia.org/wiki/Outline%20of%20combinatorics,"Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures. Essence of combinatorics Matroid Greedoid Ramsey theory Van der Waerden's theorem Hales–Jewett theorem Umbral calculus, binomial type polynomial sequences Combinatorial species Branches of combinatorics Algebraic combinatorics Analytic combinatorics Arithmetic combinatorics Combinatorics on words Combinatorial design theory Enumerative combinatorics Extremal combinatorics Geometric combinatorics Graph theory Infinitary combinatorics Matroid theory Order theory Partition theory Probabilistic combinatorics Topological combinatorics Multi-disciplinary fields that include combinatorics Coding theory Combinatorial optimization Combinatorics and dynamical systems Combinatorics and physics Discrete geometry Finite geometry Phylogenetics History of combinatorics History of combinatorics General combinatorial principles and methods Combinatorial principles Trial and error, brute-force search, bogosort, British Museum algorithm Pigeonhole principle Method of distinguished element Mathematical induction Recurrence relation, telescoping series Generating functions as an application of formal power series Cyclic sieving Schrödinger method Exponential generating function Stanley's reciprocity theorem Binomial coefficients and their properties Combinatorial proof Double counting (proof technique) Bijective proof Inclusion–exclusion principle Möbius inversion formula Parity, even and odd permutations Combinatorial Nullstellensatz Incidence algebra Greedy algorithm Divide and conquer algorithm Akra–Bazzi method Dynamic programming Branch and bound Birthday attack, birthday paradox Floyd's cycle-finding algorithm Reduction to linear algebra Sparsity Weight function Minimax algorithm Alpha–beta pruning Probabilistic method Sieve methods Analytic combinatorics Symbolic combinatorics Combinatorial" https://en.wikipedia.org/wiki/Microwave%20analog%20signal%20processing,"Real-time Analog Signal Processing (R-ASP), as an alternative to DSP-based processing, might be defined as the manipulation of signals in their pristine analog form and in real time to realize specific operations enabling microwave or millimeter-wave and terahertz applications. The exploding demand for higher spectral efficiency in radio has spurred a renewed interest in analog real-time components and systems beyond conventional purely digital signal processing techniques. Although they are unrivaled at low microwave frequencies, due to their high flexibility, compact size, low cost and strong reliability, digital devices suffer of major issues, such as poor performance, high cost of A/D and D/A converters and excessive power consumption, at higher microwave and millimeter-wave frequencies. At such frequencies, analog devices and related real-time or analog signal processing (ASP) systems, which manipulate broadband signals in the time domain, may be far preferable, as they offer the benefits of lower complexity and higher speed, which may offer unprecedented solutions in the major areas of radio engineering, including communications, but also radars, sensors, instrumentation and imaging. 
This new technology might be seen as microwave and millimeter-wave counterpart of ultra-fast optics signal processing, and has been recently enabled by a wide range of novel phasers, that are components following arbitrary group delay versus frequency responses." https://en.wikipedia.org/wiki/Software%20metering,"Software metering is the monitoring and controlling of software for analytics and the enforcement of agreements. It can be either passive, where data is simply collected and no action is taken, or active, where access is restricted for enforcement. Types Software metering refers to several areas: Tracking and maintaining software licenses. One needs to make sure that only the allowed number of licenses are in use, and at the same time, that there are enough licenses for everyone using it. This can include monitoring of concurrent usage of software for real-time enforcement of license limits. Such license monitoring usually includes when a license needs to be updated due to version changes or when upgrades or even rebates are possible. Real-time monitoring of all (or selected) applications running on the computers within the organization in order to detect unregistered or unlicensed software and prevent its execution, or limit its execution to within certain hours. The systems administrator can configure the software metering agent on each computer in the organization, for example, to prohibit the execution of games before 17:00. Fixed planning to allocate software usage to computers according to the policies a company specifies and to maintain a record of usage and attempted usage. A company can check out and check in licenses for mobile users, and can also keep a record of all licenses in use. This is often used when limited license counts are available to avoid violating strict license controls. A method of software licensing where the licensed software automatically records how many times, or for how long one or more functions in the software are used, and the user pays fees based on this actual usage (also known as 'pay-per-use')" https://en.wikipedia.org/wiki/Adequality,"Adequality is a technique developed by Pierre de Fermat in his treatise Methodus ad disquirendam maximam et minimam (a Latin treatise circulated in France c. 1636 ) to calculate maxima and minima of functions, tangents to curves, area, center of mass, least action, and other problems in calculus. According to André Weil, Fermat ""introduces the technical term adaequalitas, adaequare, etc., which he says he has borrowed from Diophantus. As Diophantus V.11 shows, it means an approximate equality, and this is indeed how Fermat explains the word in one of his later writings."" (Weil 1973). Diophantus coined the word παρισότης (parisotēs) to refer to an approximate equality. Claude Gaspard Bachet de Méziriac translated Diophantus's Greek word into Latin as adaequalitas. Paul Tannery's French translation of Fermat’s Latin treatises on maxima and minima used the words adéquation and adégaler. Fermat's method Fermat used adequality first to find maxima of functions, and then adapted it to find tangent lines to curves. 
To find the maximum of a term p(x), Fermat equated (or more precisely adequated) p(x) and p(x + e) and after doing algebra he could cancel out a factor of e and then discard any remaining terms involving e. To illustrate the method by Fermat's own example, consider the problem of finding the maximum of p(x) = bx − x^2. (In Fermat's words, it is to divide a line of length b at a point x, such that the product of the two resulting parts be a maximum.) Fermat adequated bx − x^2 with b(x + e) − (x + e)^2 = bx − x^2 + be − 2ex − e^2. That is (using the notation ∼ to denote adequality, introduced by Paul Tannery): bx − x^2 ∼ bx − x^2 + be − 2ex − e^2. Canceling terms and dividing by e Fermat arrived at b ∼ 2x + e. Removing the terms that contained e Fermat arrived at the desired result that the maximum occurred when x = b/2. Fermat also used his principle to give a mathematical derivation of Snell's laws of refraction directly from the principle that light takes the quickest path. Descartes' criticism Fermat's method was highly criticized by his contemporaries, particularly Descartes. Victor Katz suggests this" https://en.wikipedia.org/wiki/Pathatrix,"Pathatrix is a high volume recirculating immuno magnetic-capture system developed by Thermo Fisher Scientific (and supplier parts by Life Technologies) for the detection of pathogens in food and environmental samples. History Pathatrix and its Pathatrix Recirculating Immunomagnetic Separation System (RIMS) was used in 2006 to detect the E. coli O157:H7 strain in contaminated spinach using a polymerase chain reaction (PCR). The Pathatrix system is used by regulatory agencies and food companies around the world as a reliable method for detecting pathogens in food. Unlike other detection methods, Pathatrix allows the entire pre-enriched sample or large pooled samples to be recirculated over antibody-coated paramagnetic beads. It can specifically isolate pathogens directly from food samples and in conjunction with quantitative PCR can provide results within hours. It is also used to improve the performance of other rapid methods such as PCR, lateral flow, ELISA and chromogenic media by reducing or eliminating the need for lengthy pre-enrichment and/or selective enrichment steps. The Pathatrix is useful in pathogen labs that would be running food samples and looking for foodborne diseases. The Pathatrix is a rapid test method and Pathatrix pooling allows the screening of large numbers of food samples in a highly cost-effective way for specific pathogens such as E. coli O157, Salmonella or Listeria monocytogenes. The Pathatrix will selectively bind and purify the target organism from a comprehensive range of complex food matrices (including raw ground beef, chocolate, peanut butter, leafy greens, spinach, tomatoes). The Pathatrix is a microbial detection system that allows for the entire sample to be analyzed." https://en.wikipedia.org/wiki/Complementary%20sequences,"For complementary sequences in biology, see complementarity (molecular biology). For integer sequences with complementary sets of members see Lambek–Moser theorem. In applied mathematics, complementary sequences (CS) are pairs of sequences with the useful property that their out-of-phase aperiodic autocorrelation coefficients sum to zero. Binary complementary sequences were first introduced by Marcel J. E. Golay in 1949. In 1961–1962 Golay gave several methods for constructing sequences of length 2^N and gave examples of complementary sequences of lengths 10 and 26. In 1974 R. J. Turyn gave a method for constructing sequences of length mn from sequences of lengths m and n which allows the construction of sequences of any length of the form 2^N·10^K·26^M. 
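The adequality steps described above can be replayed symbolically. The sketch below is our own illustration using SymPy (not Fermat's or the article's notation), applied to his example p(x) = b·x − x^2.

```python
import sympy as sp

# Replaying the adequality steps on Fermat's example p(x) = b*x - x**2.
# The use of SymPy and the variable names are our illustration.
x, e, b = sp.symbols("x e b", positive=True)
p = b * x - x**2

difference = sp.expand(p.subs(x, x + e) - p)   # adequate p(x + e) with p(x)
quotient = sp.expand(difference / e)           # cancel the common factor e
remaining = quotient.subs(e, 0)                # discard the terms still containing e
print(sp.solve(sp.Eq(remaining, 0), x))        # [b/2] -> maximum at x = b/2
```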
Later the theory of complementary sequences was generalized by other authors to polyphase complementary sequences, multilevel complementary sequences, and arbitrary complex complementary sequences. Complementary sets have also been considered; these can contain more than two sequences. Definition Let (a0, a1, ..., aN − 1) and (b0, b1, ..., bN − 1) be a pair of bipolar sequences, meaning that a(k) and b(k) have values +1 or −1. Let the aperiodic autocorrelation function of the sequence x be defined by R_x(k) = x_0·x_k + x_1·x_{k+1} + ... + x_{N−1−k}·x_{N−1}. Then the pair of sequences a and b is complementary if: R_a(k) + R_b(k) = 2N for k = 0, and R_a(k) + R_b(k) = 0 for k = 1, ..., N − 1. Or using the Kronecker delta we can write: R_a(k) + R_b(k) = 2N·δ(k), where δ(k) is 1 for k = 0 and 0 otherwise. So we can say that the sum of autocorrelation functions of complementary sequences is a delta function, which is an ideal autocorrelation for many applications like radar pulse compression and spread spectrum telecommunications. Examples As the simplest example we have sequences of length 2: (+1, +1) and (+1, −1). Their autocorrelation functions are (2, 1) and (2, −1), which add up to (4, 0). As the next example (sequences of length 4), we have (+1, +1, +1, −1) and (+1, +1, −1, +1). Their autocorrelation functions are (4, 1, 0, −1) and (4, −1, 0, 1), which add up to (8, 0, 0, 0)." https://en.wikipedia.org/wiki/Mutual%20coherence%20%28linear%20algebra%29,"In linear algebra, the coherence or mutual coherence of a matrix A is defined as the maximum absolute value of the cross-correlations between the columns of A. Formally, let a_1, ..., a_m be the columns of the matrix A, which are assumed to be normalized such that a_i^H a_i = 1. The mutual coherence of A is then defined as M(A) = max |a_i^H a_j| over all 1 ≤ i ≠ j ≤ m. A lower bound, for an n × m matrix with m ≥ n, is M(A) ≥ sqrt((m − n)/(n(m − 1))). A deterministic matrix with the mutual coherence almost meeting the lower bound can be constructed by Weil's theorem. This concept was reintroduced by David Donoho and Michael Elad in the context of sparse representations. A special case of this definition for the two-ortho case appeared earlier in the paper by Donoho and Huo. The mutual coherence has since been used extensively in the field of sparse representations of signals. In particular, it is used as a measure of the ability of suboptimal algorithms such as matching pursuit and basis pursuit to correctly identify the true representation of a sparse signal. Joel Tropp introduced a useful extension of Mutual Coherence, known as the Babel function, which extends the idea of cross-correlation between pairs of columns to the cross-correlation from one column to a set of other columns. The Babel function for two columns is exactly the Mutual coherence, but it also extends the coherence relationship concept in a way that is useful and relevant for any number of columns in the sparse representation matrix as well. See also Compressed sensing Restricted isometry property Babel function" https://en.wikipedia.org/wiki/Activation,"Activation, in chemistry and biology, is the process whereby something is prepared or excited for a subsequent reaction. Chemistry In chemistry, ""activation"" refers to the reversible transition of a molecule into a nearly identical chemical or physical state, with the defining characteristic being that this resultant state exhibits an increased propensity to undergo a specified chemical reaction. Thus, activation is conceptually the opposite of protection, in which the resulting state exhibits a decreased propensity to undergo a certain reaction. 
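The complementary-sequence examples above are easy to verify directly. The short check below (plain Python; the helper name is ours) recomputes the aperiodic autocorrelations of the length-4 pair and confirms that they sum to (8, 0, 0, 0).

```python
# Direct check of the length-4 example: the aperiodic autocorrelations of the
# two sequences sum to a delta, (8, 0, 0, 0).  Helper name is ours.

def aperiodic_autocorrelation(x):
    n = len(x)
    return [sum(x[j] * x[j + k] for j in range(n - k)) for k in range(n)]

a = [+1, +1, +1, -1]
b = [+1, +1, -1, +1]
ra = aperiodic_autocorrelation(a)            # [4, 1, 0, -1]
rb = aperiodic_autocorrelation(b)            # [4, -1, 0, 1]
print([p + q for p, q in zip(ra, rb)])       # [8, 0, 0, 0]
```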
The energy of activation specifies the amount of free energy the reactants must possess (in addition to their rest energy) in order to initiate their conversion into corresponding products—that is, in order to reach the transition state for the reaction. The energy needed for activation can be quite small, and often it is provided by the natural random thermal fluctuations of the molecules themselves (i.e. without any external sources of energy). The branch of chemistry that deals with this topic is called chemical kinetics. Biology Biochemistry In biochemistry, activation, specifically called bioactivation, is where enzymes or other biologically active molecules acquire the ability to perform their biological function, such as inactive proenzymes being converted into active enzymes that are able to catalyze their substrates' reactions into products. Bioactivation may also refer to the process where inactive prodrugs are converted into their active metabolites, or the toxication of protoxins into actual toxins. An enzyme may be reversibly or irreversibly bioactivated. A major mechanism of irreversible bioactivation is where a piece of a protein is cut off by cleavage, producing an enzyme that will then stay active. A major mechanism of reversible bioactivation is substrate presentation where an enzyme translocates near its substrate. Another reversible reaction is where a cofactor binds to an enzyme, which then rem" https://en.wikipedia.org/wiki/Phototaxis,"Phototaxis is a kind of taxis, or locomotory movement, that occurs when a whole organism moves towards or away from a stimulus of light. This is advantageous for phototrophic organisms as they can orient themselves most efficiently to receive light for photosynthesis. Phototaxis is called positive if the movement is in the direction of increasing light intensity and negative if the direction is opposite. Two types of positive phototaxis are observed in prokaryotes. The first is called scotophobotaxis (from the word ""scotophobia""), which is observed only under a microscope. This occurs when a bacterium swims by chance out of the area illuminated by the microscope. Entering darkness signals the cell to reverse flagella rotation direction and reenter the light. The second type of phototaxis is true phototaxis, which is a directed movement up a gradient to an increasing amount of light. This is analogous to positive chemotaxis except that the attractant is light rather than a chemical. Phototactic responses are observed in many organisms such as Serratia marcescens, Tetrahymena, and Euglena. Each organism has its own specific biological cause for a phototactic response, many of which are incidental and serve no end purpose. Phototaxis in bacteria and archea Phototaxis can be advantageous for phototrophic bacteria as they can orient themselves most efficiently to receive light for photosynthesis. Phototaxis is called positive if the movement is in the direction of increasing light intensity and negative if the direction is opposite. Two types of positive phototaxis are observed in prokaryotes (bacteria and archea). The first is called ""scotophobotaxis"" (from the word ""scotophobia""), which is observed only under a microscope. This occurs when a bacterium swims by chance out of the area illuminated by the microscope. Entering darkness signals the cell to reverse flagella rotation direction and reenter the light. 
The second type of phototaxis is true phototaxis, which " https://en.wikipedia.org/wiki/Hazards%20of%20synthetic%20biology,"The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and hazards to the environment. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals; however, novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential biosecurity risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) also apply to synthetic organisms. ""Extrinsic"" biocontainment methods used in laboratories include biosafety cabinets and gloveboxes, as well as personal protective equipment. In agriculture, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms might potentially offer increased hazard control because they can be engineered with ""intrinsic"" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA. Existing risk analysis systems for GMOs are generally applicable to synthetic organisms, although there may be" https://en.wikipedia.org/wiki/Big%20O%20notation,"Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by German mathematicians Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation or asymptotic notation. The letter O was chosen by Bachmann to stand for Ordnung, meaning the order of approximation. In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows. In analytic number theory, big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem. Big O notation is also used in many other fields to provide similar estimates. Big O notation characterizes functions according to their growth rates: different functions with the same asymptotic growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. 
Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, to describe other kinds of bounds on asymptotic growth rates. Formal definition Let f, the function to be estimated, be a real or complex valued function and let g, the comparison function, be a real valued function. Let both functions be defined on some unbounded subset of the positive real numbers, and let g(x) be strictly positive for all large enough values of x. One writes f(x) = O(g(x)) as x → ∞, and it is read ""f(x) is big O of g(x)"", if the absolute value of f(x) is at most a positive constant multiple of g(x) for all sufficiently large values of x. That is, f(x) = O(g(x)) if there exists a positive real number M and a re" https://en.wikipedia.org/wiki/Kinetic%20proofreading,"Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways. Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further. Specificity paradox In protein synthesis, the error rate is on the order of . This means that when a ribosome is matching anticodons of tRNA to the codons of mRNA, it matches complementary sequences correctly nearly all the time. Hopfield noted that because of how similar the substrates are (the difference between a wrong codon and a right codon can be as small as a difference in a single base), an error rate that small is unachievable with a one-step mechanism. Both wrong and right tRNA can bind to the ribosome, and if the ribosome can only discriminate between them by complementary matching of the anticodon, it must rely on the small free energy difference between binding three matched complementary bases or only two. A one-shot machine which tests whether the codons match or not by examining whether the codon and anticodon are bound will not be able to tell the difference between wrong and right codon" https://en.wikipedia.org/wiki/Formula%20calculator,"A formula calculator is a software calculator that can perform a calculation in two steps: Enter the calculation by typing it in from the keyboard. Press a single button or key to see the final result. This is unlike button-operated calculators, such as the Windows calculator or the Mac OS X calculator, which require the user to perform one step for each operation, by pressing buttons to calculate all the intermediate values, before the final result is shown. In this context, a formula is also known as an expression, and so formula calculators may be called expression calculators. Also in this context, calculation is known as evaluation, and so they may be called formula evaluators, rather than calculators. 
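As a concrete instance of the formal definition of big O given above, the sketch below checks that f(x) = 3x^2 + 5x is O(x^2). The witnesses M = 4 and x0 = 5 are chosen here for illustration, and the check runs over a finite sample of x ≥ x0.

```python
# Concrete instance of the formal definition: f(x) = 3*x**2 + 5*x is O(g)
# with g(x) = x**2, witnessed by M = 4 and x0 = 5 (values chosen for
# illustration; verified here on a finite sample of x >= x0).

def f(x):
    return 3 * x**2 + 5 * x

def g(x):
    return x**2

M, x0 = 4, 5
assert all(abs(f(x)) <= M * g(x) for x in range(x0, 10_000))
print("3x^2 + 5x = O(x^2) witnessed by M = 4, x0 = 5 (sampled check)")
```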
How they work Formulas as they are commonly written use infix notation for binary operators, such as addition, multiplication, division and subtraction. This notation also uses: Parentheses to enclose parts of a formula that must be calculated first. In the absence of parentheses, operator precedence, so that higher precedence operators, such as multiplication, must be applied before lower precedence operators, such as addition. For example, in 2 + 3*4, the multiplication, 3*4, is done first. Among operators with the same precedence, associativity, so that the left-most operator must be applied first. For example, in 2 - 3 + 4, the subtraction, 2 - 3, is done first. Also, formulas may contain: Non-commutative operators that must be applied to numbers in the correct order, such as subtraction and division. The same symbol used for more than one purpose, such as - for negative numbers and subtraction. Once a formula is entered, a formula calculator follows the above rules to produce the final result by automatically: Analysing the formula and breaking it down into its constituent parts, such as operators, numbers and parentheses. Finding both operands of each binary operator. Working out the values of these operands. Applying the operator to th" https://en.wikipedia.org/wiki/Gauss%20notation,"Gauss notation (also known as a Gauss code or Gauss words) is a notation for mathematical knots. It is created by enumerating and classifying the crossings of an embedding of the knot in a plane. It is named after the German mathematician Carl Friedrich Gauss (1777–1855). Gauss code represents a knot with a sequence of integers. However, rather than every crossing being represented by two different numbers, crossings are labelled with only one number. When the crossing is an overcrossing, a positive number is listed. At an undercrossing, a negative number. For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3. Gauss code is limited in its ability to identify knots by a few problems. The starting point on the knot at which to begin tracing the crossings is arbitrary, and there is no way to determine which direction to trace in. Also, Gauss code is unable to indicate the handedness of each crossing, which is necessary to identify a knot versus its mirror. For example, the Gauss code for the trefoil knot does not specify if it is the right-handed or left-handed trefoil. This last issue is often solved by using the extended Gauss code. In this modification, the positive/negative sign on the second instance of every number is chosen to represent the handedness of that crossing, rather than the over/under sign of the crossing, which is made clear in the first instance of the number. A right-handed crossing is given a positive number, and a left handed crossing is given a negative number." https://en.wikipedia.org/wiki/Observer%20%28quantum%20physics%29,"Some interpretations of quantum mechanics posit a central role for an observer of a quantum phenomenon. The quantum mechanical observer is tied to the issue of observer effect, where a measurement necessarily requires interacting with the physical object being measured, affecting its properties through the interaction. The term ""observable"" has gained a technical meaning, denoting a Hermitian operator that represents a measurement. The prominence of seemingly subjective or anthropocentric ideas like ""observer"" in the early development of the theory has been a continuing source of disquiet and philosophical dispute. 
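The parsing and evaluation steps described above for a formula calculator can be sketched in a few dozen lines. The recursive-descent evaluator below is an illustrative implementation (not any particular product's); it handles parentheses, the usual precedence of * and / over + and -, left associativity, and the double use of - for subtraction and negation.

```python
import re

# Minimal sketch of a formula calculator's pipeline: tokenize the input
# (break it into numbers, operators, parentheses), then evaluate it with
# precedence, left associativity, and unary minus as described above.
TOKEN = re.compile(r"\s*(\d+\.\d+|\d+|[()+\-*/])")

def tokenize(formula):
    pos, out = 0, []
    while pos < len(formula):
        m = TOKEN.match(formula, pos)
        if not m:
            raise ValueError(f"bad character at position {pos}")
        out.append(m.group(1))
        pos = m.end()
    return out

def evaluate(formula):
    tokens = tokenize(formula)
    i = 0

    def peek():
        return tokens[i] if i < len(tokens) else None

    def expr():              # + and - : lowest precedence, left associative
        nonlocal i
        value = term()
        while peek() in ("+", "-"):
            op = tokens[i]; i += 1
            rhs = term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term():              # * and / : bind tighter than + and -
        nonlocal i
        value = factor()
        while peek() in ("*", "/"):
            op = tokens[i]; i += 1
            rhs = factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor():            # numbers, parentheses, and unary minus
        nonlocal i
        tok = peek()
        if tok == "-":
            i += 1
            return -factor()
        if tok == "(":
            i += 1
            value = expr()
            i += 1           # skip the closing ")"
            return value
        i += 1
        return float(tok)

    return expr()

print(evaluate("2 + 3*4"))      # 14.0: multiplication is applied first
print(evaluate("2 - 3 + 4"))    # 3.0: left-most operator applied first
```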
A number of new-age religious or philosophical views give the observer a more special role, or place constraints on who or what can be an observer. There is no credible peer-reviewed research that backs such claims. As an example of such claims, Fritjof Capra declared, ""The crucial feature of atomic physics is that the human observer is not only necessary to observe the properties of an object, but is necessary even to define these properties."" The Copenhagen interpretation, which is the most widely accepted interpretation of quantum mechanics among physicists, posits that an ""observer"" or a ""measurement"" is merely a physical process. One of the founders of the Copenhagen interpretation, Werner Heisenberg, wrote: Of course the introduction of the observer must not be misunderstood to imply that some kind of subjective features are to be brought into the description of nature. The observer has, rather, only the function of registering decisions, i.e., processes in space and time, and it does not matter whether the observer is an apparatus or a human being; but the registration, i.e., the transition from the ""possible"" to the ""actual,"" is absolutely necessary here and cannot be omitted from the interpretation of quantum theory. Niels Bohr, also a founder of the Copenhagen interpretation, wrote: all unambiguous information concerning atomic objects is " https://en.wikipedia.org/wiki/Printed%20circuit%20board,"A printed circuit board (PCB), also called printed wiring board (PWB), is a medium used to connect or ""wire"" components to one another in a circuit. It takes the form of a laminated sandwich structure of conductive and insulating layers: each of the conductive layers is designed with an artwork pattern of traces, planes and other features (similar to wires on a flat surface) etched from one or more sheet layers of copper laminated onto and/or between sheet layers of a non-conductive substrate. Electrical components may be fixed to conductive pads on the outer layers in the shape designed to accept the component's terminals, generally by means of soldering, to both electrically connect and mechanically fasten them to it. Another manufacturing process adds vias, plated-through holes that allow interconnections between layers. Printed circuit boards are used in nearly all electronic products. Alternatives to PCBs include wire wrap and point-to-point construction, both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Electronic design automation software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout has to be done only once. PCBs can also be made manually in small quantities, with reduced benefits. PCBs can be single-sided (one copper layer), double-sided (two copper layers on both sides of one substrate layer), or multi-layer (outer and inner layers of copper, alternating with layers of substrate). Multi-layer PCBs allow for much higher component density, because circuit traces on the inner layers would otherwise take up surface space between components. 
The rise in popularity of multilayer PCBs with more than two, and especially with more than four, copper p" https://en.wikipedia.org/wiki/Modularity%20%28biology%29,"Modularity refers to the ability of a system to organize discrete, individual units that can overall increase the efficiency of network activity and, in a biological sense, facilitates selective forces upon the network. Modularity is observed in all model systems, and can be studied at nearly every scale of biological organization, from molecular interactions all the way up to the whole organism. Evolution of Modularity The exact evolutionary origins of biological modularity has been debated since the 1990s. In the mid 1990s, Günter Wagner argued that modularity could have arisen and been maintained through the interaction of four evolutionary modes of action: [1] Selection for the rate of adaptation: If different complexes evolve at different rates, then those evolving more quickly reach fixation in a population faster than other complexes. Thus, common evolutionary rates could be forcing the genes for certain proteins to evolve together while preventing other genes from being co-opted unless there is a shift in evolutionary rate. [2] Constructional selection: When a gene exists in many duplicated copies, it may be maintained because of the many connections it has (also termed pleiotropy). There is evidence that this is so following whole genome duplication, or duplication at a single locus. However, the direct relationship that duplication processes have with modularity has yet to be directly examined. [3] Stabilizing selection: While seeming antithetical to forming novel modules, Wagner maintains that it is important to consider the effects of stabilizing selection as it may be ""an important counter force against the evolution of modularity"". Stabilizing selection, if ubiquitously spread across the network, could then be a ""wall"" that makes the formation of novel interactions more difficult and maintains previously established interactions. Against such strong positive selection, other evolutionary forces acting on the network must exist, with gaps of relaxed" https://en.wikipedia.org/wiki/Named%20data%20networking,"Named Data Networking (NDN) (related to content-centric networking (CCN), content-based networking, data-oriented networking or information-centric networking (ICN)) is a proposed Future Internet architecture inspired by years of empirical research into network usage and a growing awareness of unsolved problems in contemporary internet architectures like IP. NDN has its roots in an earlier project, Content-Centric Networking (CCN), which Van Jacobson first publicly presented in 2006. The NDN project is investigating Jacobson's proposed evolution from today's host-centric network architecture IP to a data-centric network architecture (NDN). The belief is that this conceptually simple shift will have far-reaching implications for how people design, develop, deploy, and use networks and applications. NDN has three core concepts that distinguish NDN from other network architectures. First, applications name data and data names will directly be used in network packet forwarding; consumer applications request desired data by its name, so communications in NDN are consumer-driven. 
Second, NDN communications are secured in a data-centric manner, that is, each piece of data (called a Data packet) will be cryptographically signed by its producer and sensitive payload or name components can also be encrypted for the purpose of privacy; in this way, consumers can verify the packet regardless of how the packet is fetched. Third, NDN adopts a stateful forwarding plane where forwarders will keep a state for each data request (called an Interest packet) and erase the state when a corresponding Data packet comes back; NDN's stateful forwarding allows intelligent forwarding strategies and eliminates loops. Its premise is that the Internet is primarily used as an information distribution network, which is not a good match for IP, and that the future Internet's ""thin waist"" should be based on named data rather than numerically addressed hosts. The underlying principle is that a comm" https://en.wikipedia.org/wiki/Decimal%20representation,"A decimal representation of a non-negative real number is its expression as a sequence of symbols consisting of decimal digits traditionally written with a single separator: Here is the decimal separator, is a nonnegative integer, and are digits, which are symbols representing integers in the range 0, ..., 9. Commonly, if The sequence of the —the digits after the dot—is generally infinite. If it is finite, the lacking digits are assumed to be 0. If all are , the separator is also omitted, resulting in a finite sequence of digits, which represents a natural number. The decimal representation represents the infinite sum: Every nonnegative real number has at least one such representation; it has two such representations (with if ) if and only if one has a trailing infinite sequence of , and the other has a trailing infinite sequence of . For having a one-to-one correspondence between nonnegative real numbers and decimal representations, decimal representations with a trailing infinite sequence of are sometimes excluded. Integer and fractional parts The natural number , is called the integer part of , and is denoted by in the remainder of this article. The sequence of the represents the number which belongs to the interval and is called the fractional part of (except when all are ). Finite decimal approximations Any real number can be approximated to any desired degree of accuracy by rational numbers with finite decimal representations. Assume . Then for every integer there is a finite decimal such that: Proof: Let , where . Then , and the result follows from dividing all sides by . (The fact that has a finite decimal representation is easily established.) Non-uniqueness of decimal representation and notational conventions Some real numbers have two infinite decimal representations. For example, the number 1 may be equally represented by 1.000... as by 0.999... (where the infinite sequences of trailing 0's or 9's, respectively, are repre" https://en.wikipedia.org/wiki/Prime%20factor%20exponent%20notation,"In his 1557 work The Whetstone of Witte, British mathematician Robert Recorde proposed an exponent notation by prime factorisation, which remained in use up until the eighteenth century and acquired the name Arabic exponent notation. The principle of Arabic exponents was quite similar to Egyptian fractions; large exponents were broken down into smaller prime numbers. Squares and cubes were so called; prime numbers from five onwards were called sursolids. 
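Relating to the finite decimal approximations discussed above: the sketch below (an illustration using Python's decimal module; the choice x = 1/3 is an assumption) truncates x after n digits and checks the stated inequality d_n ≤ x < d_n + 10^−n.

```python
from decimal import Decimal, ROUND_FLOOR, getcontext

# Illustrative check of finite decimal approximations: truncating x after n
# digits gives a finite decimal d_n with d_n <= x < d_n + 10**(-n).
getcontext().prec = 50
x = Decimal(1) / Decimal(3)

for n in range(1, 6):
    scale = Decimal(10) ** n
    d_n = (x * scale).to_integral_value(rounding=ROUND_FLOOR) / scale
    assert d_n <= x < d_n + Decimal(10) ** -n
    print(n, d_n)        # 1 0.3, 2 0.33, 3 0.333, ...
```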
Although the terms used for defining exponents differed between authors and times, the general system was the primary exponent notation until René Descartes devised the Cartesian exponent notation, which is still used today. This is a list of Recorde's terms. By comparison, here is a table of prime factors: See also Surd External links (references) Mathematical dictionary, Chas Hutton, pg 224 Mathematical notation" https://en.wikipedia.org/wiki/Outline%20of%20algebra,"The following outline is provided as an overview of and topical guide to algebra: Algebra is one of the main branches of mathematics, covering the study of structure, relation and quantity. Algebra studies the effects of adding and multiplying numbers, variables, and polynomials, along with their factorization and determining their roots. In addition to working directly with numbers, algebra also covers symbols, variables, and set elements. Addition and multiplication are general operations, but their precise definitions lead to structures such as groups, rings, and fields. Branches Pre-algebra Elementary algebra Boolean algebra Abstract algebra Linear algebra Universal algebra Algebraic equations An algebraic equation is an equation involving only algebraic expressions in the unknowns. These are further classified by degree. Linear equation – algebraic equation of degree one. Polynomial equation – equation in which a polynomial is set equal to another polynomial. Transcendental equation – equation involving a transcendental function of one of its variables. Functional equation – equation in which the unknowns are functions rather than simple quantities. Differential equation – equation involving derivatives. Integral equation – equation involving integrals. Diophantine equation – equation where the only solutions of interest of the unknowns are the integer ones. History History of algebra General algebra concepts Fundamental theorem of algebra – states that every non-constant single-variable polynomial with complex coefficients has at least one complex root. This includes polynomials with real coefficients, since every real number is a complex number with an imaginary part equal to zero. Equations – equality of two mathematical expressions Linear equation – an algebraic equation with a degree of one Quadratic equation – an algebraic equation with a degree of two Cubic equation – an algebraic equation with a degree of three Quartic equati" https://en.wikipedia.org/wiki/Device%20under%20test,"A device under test (DUT), also known as equipment under test (EUT) and unit under test (UUT), is a manufactured product undergoing testing, either at first manufacture or later during its life cycle as part of ongoing functional testing and calibration checks. This can include a test after repair to establish that the product is performing in accordance with the original product specification. Electronics testing In the electronics industry a DUT is any electronic assembly under test. For example, cell phones coming off of an assembly line may be given a final test in the same way as the individual chips were earlier tested. Each cell phone under test is, briefly, the DUT. For circuit boards, the DUT is often connected to the test equipment using a bed of nails tester of pogo pins. Semiconductor testing In semiconductor testing, the device under test is a die on a wafer or the resulting packaged part. A connection system is used, connecting the part to automatic or manual test equipment. 
The test equipment then applies power to the part, supplies stimulus signals, then measures and evaluates the resulting outputs from the device. In this way, the tester determines whether the particular device under test meets the device specifications. While packaged as a wafer, automatic test equipment (ATE) can connect to the individual units using a set of microscopic needles. Once the chips are sawn apart and packaged, test equipment can connect to the chips using ZIF sockets (sometimes called contactors). See also Automatic test equipment DUT board Product testing System under test Test bench" https://en.wikipedia.org/wiki/C-Thru%20Ruler,"The C-Thru Ruler Company is an American maker of measuring devices and specialized products for drafting, designing and drawing. The company was formed in 1939 in Bloomfield, Connecticut, by Jennie R. Zachs, a schoolteacher, who saw the need for transparent measuring tools such as rulers, triangles, curves and protractors. During the 1990s, the company expanded into the paper crafting and scrapbooking fields under the Little Yellow Bicycle and Déjà Views brands. In June 2012, Acme United Corporation bought the ruler, lettering and drafting portions of C-Thru Ruler. The scrap booking part of the business, continues to be managed by the Zachs family under the Little Yellow Bicycle Inc. name. History Jennie Zachs Jennie R. Zachs, born in 1898, was the daughter of Benjamin and Julia Zachs who emigrated from Russia to the United States in the early 1900s. She graduated from high school in Hartford, CT. A few years later, she graduated from college and became a schoolteacher. While teaching, she developed the idea that when students would be able to see through their rulers, it would make the tool much more useful in the classroom. As a result, Ms. Zachs started the development of two transparent rulers made out of plastic. In 1939, she founded C-Thru Ruler Company in Bloomfield, Connecticut and designed a whole family of transparent measuring tools like rulers, triangles, curves and protractors. Shortly after, she engaged a supplier to mill the tools out of plastic sheet and began to attend different trade shows and conventions for blue printers and art materials dealers to sell the products. She noticed that the transparent measuring tools could effectively replace wood and metal measuring devices for many applications in drafting, designing and drawing. 1940–1969 Only one year after founding the company, Jennie Zachs took in two partners to handle the expansion of C-Thru. Edward Zachs, her brother, joined C-Thru and Anna Zachs, her sister, became an investor. O" https://en.wikipedia.org/wiki/Organisms%20at%20high%20altitude,"Organisms can live at high altitude, either on land, in water, or while flying. Decreased oxygen availability and decreased temperature make life at such altitudes challenging, though many species have been successfully adapted via considerable physiological changes. As opposed to short-term acclimatisation (immediate physiological response to changing environment), high-altitude adaptation means irreversible, evolved physiological responses to high-altitude environments, associated with heritable behavioural and genetic changes. Among vertebrates, only few mammals (such as yaks, ibexes, Tibetan gazelles, vicunas, llamas, mountain goats, etc.) and certain birds are known to have completely adapted to high-altitude environments. 
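Returning to the device-under-test passage earlier in this section: the pass/fail judgement a tester makes can be sketched as a comparison of measured outputs against specification limits. The parameter names and limits below are invented for the example.

```python
# Sketch of a tester's pass/fail decision for a DUT: measured outputs are
# compared against specification limits (names and limits are made up).

SPEC_LIMITS = {                       # parameter: (lower limit, upper limit)
    "supply_current_mA": (0.0, 25.0),
    "output_voltage_V": (3.1, 3.5),
}

def judge_dut(measurements: dict) -> bool:
    for name, (low, high) in SPEC_LIMITS.items():
        value = measurements[name]
        if not (low <= value <= high):
            print(f"FAIL: {name} = {value} outside [{low}, {high}]")
            return False
    return True

print(judge_dut({"supply_current_mA": 18.2, "output_voltage_V": 3.3}))   # True
```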
Human populations such as some Tibetans, South Americans and Ethiopians live in the otherwise uninhabitable high mountains of the Himalayas, Andes and Ethiopian Highlands respectively. The adaptation of humans to high altitude is an example of natural selection in action. High-altitude adaptations provide examples of convergent evolution, with adaptations occurring simultaneously on three continents. Tibetan humans and Tibetan domestic dogs share a genetic mutation in EPAS1, but it has not been seen in Andean humans. Invertebrates Tardigrades live over the entire world, including the high Himalayas. Tardigrades are also able to survive temperatures of close to absolute zero (), temperatures as high as , radiation that would kill other animals, and almost a decade without water. Since 2007, tardigrades have also returned alive from studies in which they have been exposed to the vacuum of outer space in low Earth orbit. Other invertebrates with high-altitude habitats are Euophrys omnisuperstes, a spider that lives in the Himalaya range at altitudes of up to ; it feeds on stray insects that are blown up the mountain by the wind. The springtail Hypogastrura nivicola (one of several insects called snow fleas) also lives in the Himalayas. It " https://en.wikipedia.org/wiki/Manycore%20processor,"Manycore processors are special kinds of multi-core processors designed for a high degree of parallel processing, containing numerous simpler, independent processor cores (from a few tens of cores to thousands or more). Manycore processors are used extensively in embedded computers and high-performance computing. Contrast with multicore architecture Manycore processors are distinct from multi-core processors in being optimized from the outset for a higher degree of explicit parallelism, and for higher throughput (or lower power consumption) at the expense of latency and lower single-thread performance. The broader category of multi-core processors, by contrast, are usually designed to efficiently run both parallel and serial code, and therefore place more emphasis on high single-thread performance (e.g. devoting more silicon to out of order execution, deeper pipelines, more superscalar execution units, and larger, more general caches), and shared memory. These techniques devote runtime resources toward figuring out implicit parallelism in a single thread. They are used in systems where they have evolved continuously (with backward compatibility) from single core processors. They usually have a 'few' cores (e.g. 2, 4, 8) and may be complemented by a manycore accelerator (such as a GPU) in a heterogeneous system. Motivation Cache coherency is an issue limiting the scaling of multicore processors. Manycore processors may bypass this with methods such as message passing, scratchpad memory, DMA, partitioned global address space, or read-only/non-coherent caches. A manycore processor using a network on a chip and local memories gives software the opportunity to explicitly optimise the spatial layout of tasks (e.g. as seen in tooling developed for TrueNorth). Manycore processors may have more in common (conceptually) with technologies originating in high-performance computing such as clusters and vector processors. GPUs may be considered a form of manycore process" https://en.wikipedia.org/wiki/Gas%20exchange,"Gas exchange is the physical process by which gases move passively by diffusion across a surface. 
For example, this surface might be the air/water interface of a water body, the surface of a gas bubble in a liquid, a gas-permeable membrane, or a biological membrane that forms the boundary between an organism and its extracellular environment. Gases are constantly consumed and produced by cellular and metabolic reactions in most living things, so an efficient system for gas exchange between, ultimately, the interior of the cell(s) and the external environment is required. Small, particularly unicellular organisms, such as bacteria and protozoa, have a high surface-area to volume ratio. In these creatures the gas exchange membrane is typically the cell membrane. Some small multicellular organisms, such as flatworms, are also able to perform sufficient gas exchange across the skin or cuticle that surrounds their bodies. However, in most larger organisms, which have small surface-area to volume ratios, specialised structures with convoluted surfaces such as gills, pulmonary alveoli and spongy mesophylls provide the large area needed for effective gas exchange. These convoluted surfaces may sometimes be internalised into the body of the organism. This is the case with the alveoli, which form the inner surface of the mammalian lung, the spongy mesophyll, which is found inside the leaves of some kinds of plant, or the gills of those molluscs that have them, which are found in the mantle cavity. In aerobic organisms, gas exchange is particularly important for respiration, which involves the uptake of oxygen (O2) and release of carbon dioxide (CO2). Conversely, in oxygenic photosynthetic organisms such as most land plants, uptake of carbon dioxide and release of both oxygen and water vapour are the main gas-exchange processes occurring during the day. Other gas-exchange processes are important in less familiar organisms: e.g. carbon dioxide, methane and hydrogen are exchanged a" https://en.wikipedia.org/wiki/Logic%20redundancy,"Logic redundancy occurs in a digital gate network containing circuitry that does not affect the static logic function. There are several reasons why logic redundancy may exist. One reason is that it may have been added deliberately to suppress transient glitches (caused by a race condition) in the output signals by having two or more product terms overlap with a third one. Consider the following equation: Y = AB + A'C + BC. The third product term, BC, is a redundant consensus term. If A switches from 1 to 0 while B = 1 and C = 1, Y remains 1. During the transition of signal A in logic gates, both the first and second term may be 0 momentarily. The third term prevents a glitch since its value of 1 in this case is not affected by the transition of signal A. Another reason for logic redundancy is poor design practices which unintentionally result in logically redundant terms. This causes an unnecessary increase in network complexity and can hamper the ability to test manufactured designs using traditional test methods (single stuck-at fault models). Testing might be possible using IDDQ models. Removing logic redundancy Logic redundancy is, in general, not desired. Redundancy, by definition, requires extra parts (in this case: logical terms) which raises the cost of implementation (either actual cost of physical parts or CPU time to process). Logic redundancy can be removed by several well-known techniques, such as Karnaugh maps, the Quine–McCluskey algorithm, and the heuristic computer method. Adding logic redundancy In some cases it may be desirable to add logic redundancy.
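A minimal Python sketch of the glitch that the consensus term in the example above (Y = AB + A'C + BC) exists to suppress; the single-step delay on the inverted signal is a deliberately simplified timing model, and the variable names follow the textbook form of the example:

```python
# Evaluate both forms over a short timeline in which A falls from 1 to 0
# while B = C = 1.  The inverted signal notA is modelled with a one-step
# propagation delay, which is what opens the window for a glitch.

def minimal(a, not_a, b, c):         # Y = AB + A'C
    return (a and b) or (not_a and c)

def with_consensus(a, not_a, b, c):  # Y = AB + A'C + BC
    return (a and b) or (not_a and c) or (b and c)

B = C = 1
A_trace    = [1, 0, 0]   # A falls between step 0 and step 1
notA_trace = [0, 0, 1]   # inverter output lags by one step

for t, (a, na) in enumerate(zip(A_trace, notA_trace)):
    print(t, minimal(a, na, B, C), with_consensus(a, na, B, C))
# Output:
# 0 1 1
# 1 0 1   <- transient glitch in the minimal form only
# 2 1 1
```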
One of those cases is to avoid race conditions whereby an output can fluctuate because different terms are ""racing"" to turn off and on. To explain this in more concrete terms, a Karnaugh map of such a function shows the minterms; the boxes represent the minimal AND/OR terms needed to implement the function, and the k-map visually shows where race conditions occur in the minimal expression by h" https://en.wikipedia.org/wiki/Universal%20Test%20Specification%20Language,"Universal Test Specification Language (UTSL) is a programming language used to describe ASIC tests in a format that leads to an automated translation of the test specification into executable test code. UTSL is platform independent: provided that a code generation interface for a specific platform is available, UTSL code can be translated into the programming language of a specific Automatic Test Equipment (ATE) system. History The increasing complexity of ASICs leads to more complex test programs with longer development times. Automated test program generation could simplify and speed up this process. Teradyne Inc. and Robert Bosch GmbH therefore agreed to develop a concept and a tool chain for automated test-program generation. To achieve this, a tester-independent programming language was required. Hence UTSL was developed: a programming language that enables a detailed description of tests which can be translated into the ATE-specific programming language. ATE manufacturers need to provide a test program generator that takes the UTSL test description as input and generates ATE-specific test code with optimal resource mapping and best-practice program code. As long as the ATE manufacturer provides a test program generator that accepts UTSL as an input, the cumbersome task of translating a test program from one platform to another can be significantly simplified. In other words, rewriting test programs for a specific platform can be replaced by automatically generating the code from the UTSL-based test specification. A prerequisite is that the UTSL description of the tests is sufficiently detailed, defining the test technique as well as all the necessary inputs and outputs. Being a platform-independent programming language, UTSL allows engineers to read, analyse and modify the tests in the test specification regardless of the ATE on which the ASIC will be tested. UTS" https://en.wikipedia.org/wiki/ERP%20security,"ERP Security is a wide range of measures aimed at protecting Enterprise resource planning (ERP) systems from illicit access while ensuring the accessibility and integrity of system data. An ERP system is software that serves to unify the information used to manage the organization, including Production, Supply Chain Management, Financial Management, Human Resource Management, Customer Relationship Management, and Enterprise Performance Management. Review An ERP system integrates business processes, enabling procurement, payment, transport, human resources management, product management, and financial planning. As an ERP system stores confidential information, the Information Systems Audit and Control Association (ISACA) recommends regularly conducting a comprehensive assessment of ERP system security, checking ERP servers for software vulnerabilities, configuration errors, segregation of duties conflicts, compliance with relevant standards, and adherence to vendor recommendations.
Causes for vulnerabilities in ERP systems Complexity ERP systems process transactions and implement procedures to ensure that users have different access privileges. There are hundreds of authorization objects in SAP permitting users to perform actions in the system. In case of 200 users of the company, there are approximately 800,000 (100*2*20*200) ways to customize security settings of ERP systems. With the growth of complexity, the possibility of errors and segregation of duties conflicts increases. Specificity Vendors fix vulnerabilities on the regular basis since hackers monitor business applications to find and exploit security issues. SAP releases patches monthly on Patch Tuesday, Oracle issues security fixes every quarter in Oracle Critical Patch Update. Business applications are becoming more exposed to the Internet or migrate to the cloud. Lack of competent specialists ERP Cybersecurity survey revealed that organizations running ERP systems ""lack both awareness and a" https://en.wikipedia.org/wiki/List%20of%20graphical%20methods,"This is a list of graphical methods with a mathematical basis. Included are diagram techniques, chart techniques, plot techniques, and other forms of visualization. There is also a list of computer graphics and descriptive geometry topics. Simple displays Area chart Box plot Dispersion fan diagram Graph of a function Logarithmic graph paper Heatmap Bar chart Histogram Line chart Pie chart Plotting Scatterplot Sparkline Stemplot Radar chart Set theory Venn diagram Karnaugh diagram Descriptive geometry Isometric projection Orthographic projection Perspective (graphical) Engineering drawing Technical drawing Graphical projection Mohr's circle Pantograph Circuit diagram Smith chart Sankey diagram Systems analysis Binary decision diagram Control-flow graph Functional flow block diagram Information flow diagram IDEF N2 chart Sankey diagram State diagram System context diagram Data-flow diagram Cartography Map projection Orthographic projection (cartography) Robinson projection Stereographic projection Dymaxion map Topographic map Craig retroazimuthal projection Hammer retroazimuthal projection Biological sciences Cladogram Punnett square Systems Biology Graphical Notation Physical sciences Free body diagram Greninger chart Phase diagram Wavenumber-frequency diagram Bode plot Nyquist plot Dalitz plot Feynman diagram Carnot Plot Business methods Flowchart Workflow Gantt chart Growth-share matrix (often called BCG chart) Work breakdown structure Control chart Ishikawa diagram Pareto chart (often used to prioritise outputs of an Ishikawa diagram) Conceptual analysis Mind mapping Concept mapping Conceptual graph Entity-relationship diagram Tag cloud, also known as word cloud Statistics Autocorrelation plot Bar chart Biplot Box plot Bullet graph Chernoff faces Control chart Fan chart Forest plot Funnel plot Galbraith plot Histogram Mosaic plot Multidimensional scaling np-chart p-chart Pie chart Probability plot Normal probability plot Poincaré plot Probability plot" https://en.wikipedia.org/wiki/Clockwise,"Two-dimensional rotation can occur in two possible directions or senses of rotation. Clockwise motion (abbreviated CW) proceeds in the same direction as a clock's hands: from the top to the right, then down and then to the left, and back up to the top. The opposite sense of rotation or revolution is (in Commonwealth English) anticlockwise (ACW) or (in North American English) counterclockwise (CCW). 
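The sense of a planar rotation can be made concrete with a little vector arithmetic: for an observer on the positive z-axis looking down at the plane, the turn from one direction to another is counterclockwise when the z-component of their cross product is positive and clockwise when it is negative. A minimal Python sketch (the function name is ours, not a standard-library one):

```python
def rotation_sense(u, v):
    """Classify the turn from 2-D direction u to direction v, as seen by an
    observer on the positive z-axis looking down at the plane."""
    cross_z = u[0] * v[1] - u[1] * v[0]   # z-component of the cross product u x v
    if cross_z > 0:
        return "counterclockwise"
    if cross_z < 0:
        return "clockwise"
    return "collinear"

# Top of a clock face to its right-hand side (12 o'clock towards 3 o'clock):
print(rotation_sense((0, 1), (1, 0)))    # clockwise
# The same physical turn viewed from the other side of the plane reverses its sense:
print(rotation_sense((0, -1), (1, 0)))   # counterclockwise
```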
Three-dimensional rotation can have similarly defined senses when considering the corresponding angular velocity vector. Terminology Before clocks were commonplace, the terms ""sunwise"" and ""deasil"", ""deiseil"" and even ""deocil"" from the Scottish Gaelic language and from the same root as the Latin ""dexter"" (""right"") were used for clockwise. ""Widdershins"" or ""withershins"" (from Middle Low German ""weddersinnes"", ""opposite course"") was used for counterclockwise. The terms clockwise and counterclockwise can only be applied to a rotational motion once a side of the rotational plane is specified, from which the rotation is observed. For example, the daily rotation of the Earth is clockwise when viewed from above the South Pole, and counterclockwise when viewed from above the North Pole (considering ""above a point"" to be defined as ""farther away from the center of earth and on the same ray""). Clocks traditionally follow this sense of rotation because of the clock's predecessor: the sundial. Clocks with hands were first built in the Northern Hemisphere (see Clock), and they were made to work like horizontal sundials. In order for such a sundial to work north of the equator during spring and summer, and north of the Tropic of Cancer the whole year, the noon-mark of the dial must be placed northward of the pole casting the shadow. Then, when the Sun moves in the sky (from east to south to west), the shadow, which is cast on the sundial in the opposite direction, moves with the same sense of rotation (from west to north to east). This is why hours must be drawn in horizontal sundi" https://en.wikipedia.org/wiki/Vintage%20computer,"A vintage computer is an older computer system that is largely regarded as obsolete. The personal computer has been around since approximately 1971. But in that time, numerous technological revolutions have left generations of obsolete computing equipment on the junk heap. Nevertheless, in that time, these otherwise useless computers have spawned a sub-culture of vintage computer collectors, who often spend large sums to acquire the rarest of these items, not only to display but restore to their fully functioning glory, including active software development and adaptation to modern uses. This often includes homebrew developers and hackers who add on, update and create hybrid composites from new and old computers for uses for which they were otherwise never intended. Ethernet interfaces have been designed for many vintage 8-bit machines to allow limited connectivity to the Internet; where users can access user groups, bulletin boards, and databases of software. Most of this hobby centers on those computers manufactured after 1960, though some collectors specialize in pre-1960 computers as well. The Vintage Computer Festival, an event held by the Vintage Computer Federation for the exhibition and celebration of vintage computers, has been held annually since 1997 and has expanded internationally. By platform MITS Inc. Micro Instrumentation and Telemetry Systems (MITS) produced the Altair 8800 in 1975. According to Harry Garland, the Altair 8800 was the product that catalyzed the microcomputer revolution of the 1970s. IMSAI IMSAI produced a machine similar to the Altair 8800. It was introduced in 1975, first as a kit, and later as an assembled system. The list price was $591 () for a kit, and $931 () assembled. Processor Technology Processor Technology produced the Sol-20. 
This was one of the first machines to have a case that included a keyboard; a design feature copied by many of later ""home computers"". SWTPC Southwest Technical Products Corporation (" https://en.wikipedia.org/wiki/List%20of%20finite%20simple%20groups,"In mathematics, the classification of finite simple groups states that every finite simple group is cyclic, or alternating, or in one of 16 families of groups of Lie type, or one of 26 sporadic groups. The list below gives all finite simple groups, together with their order, the size of the Schur multiplier, the size of the outer automorphism group, usually some small representations, and lists of all duplicates. Summary The following table is a complete list of the 18 families of finite simple groups and the 26 sporadic simple groups, along with their orders. Any non-simple members of each family are listed, as well as any members duplicated within a family or between families. (In removing duplicates it is useful to note that no two finite simple groups have the same order, except that the group A8 = A3(2) and A2(4) both have order 20160, and that the group Bn(q) has the same order as Cn(q) for q odd, n > 2. The smallest of the latter pairs of groups are B3(3) and C3(3) which both have order 4585351680.) There is an unfortunate conflict between the notations for the alternating groups An and the groups of Lie type An(q). Some authors use various different fonts for An to distinguish them. In particular, in this article we make the distinction by setting the alternating groups An in Roman font and the Lie-type groups An(q) in italic. In what follows, n is a positive integer, and q is a positive power of a prime number p, with the restrictions noted. The notation (a,b) represents the greatest common divisor of the integers a and b. Cyclic groups, Zp Simplicity: Simple for p a prime number. Order: p Schur multiplier: Trivial. Outer automorphism group: Cyclic of order p − 1. Other names: Z/pZ, Cp Remarks: These are the only simple groups that are not perfect. Alternating groups, An, n > 4 Simplicity: Solvable for n < 5, otherwise simple. Order: n!/2 when n > 1. Schur multiplier: 2 for n = 5 or n > 7, 6 for n = 6 or 7; see Covering groups of the alt" https://en.wikipedia.org/wiki/West%20Bridge,"The West Bridge is a growing architectural approach, originally developed by Cypress Semiconductor, which enhances and modularizes a peripheral controller in an embedded computer architecture. Conceptually, the West Bridge parallels and complements the decentralization represented by the North Bridge and the South Bridge. Most notably, it has been used by Research in Motion to permit extremely high data transfer rates in its BlackBerry devices. Overview While the North Bridge focuses on memory control and the South Bridge focuses on ""slower"" capabilities of the motherboard, the West Bridge focuses on peripheral control. The new architectural modularization opens the potential for increased system performance. Being directly connected, peripheral control can be handled wholly and independently through a West Bridge's controller, leaving a processor offloaded and free to focus on other data intensive operations. While it enhances performance of the system via the processor, a West Bridge companion chip itself may also serve directly as a peripheral accelerator. Etymology The term West Bridge was first introduced by Cypress Semiconductor, which designs products to provide optimal performance and connectivity in the embedded world. 
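The duplicate orders noted in the classification summary above can be checked directly from the standard order formulas |An| = n!/2 and |PSL(n, q)| = q^(n(n-1)/2) · (q^2 - 1)(q^3 - 1)···(q^n - 1) / gcd(n, q - 1), together with the identification of the Lie-type family A(n-1)(q) with PSL(n, q). A short Python sketch:

```python
from math import factorial, gcd, prod

def order_alternating(n):
    """Order of the alternating group A_n for n > 1."""
    return factorial(n) // 2

def order_psl(n, q):
    """Order of PSL(n, q), i.e. the Lie-type group A_{n-1}(q)."""
    return (q ** (n * (n - 1) // 2)
            * prod(q ** i - 1 for i in range(2, n + 1))
            // gcd(n, q - 1))

# The coincidence noted above: alternating A8, A3(2) = PSL(4, 2) and A2(4) = PSL(3, 4)
# all have order 20160.
print(order_alternating(8), order_psl(4, 2), order_psl(3, 4))   # 20160 20160 20160
```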
The name was chosen deliberately to be a meme consistent with the North Bridge and South Bridge concepts. ""West Bridge"" refers both to the architectural scheme in general and to the product family with which it was introduced by Cypress. Interface Support Interfaces regularly change towards faster, lower power, fewer pins, and newer standards, making it a difficult task for processors to follow and integrate them. A prime function of West Bridge devices is to enable connection to these varied interfaces. An example of such an interface is NAND Flash, which keeps evolving with new generations of Multi-Level Cell NAND. A West Bridge device might handle the MLC NAND management and enable lowest-cost memory support for a main processor, whi" https://en.wikipedia.org/wiki/Highly%20composite%20number," A highly composite number or antiprime is a positive integer with more divisors than any smaller positive integer has. A related concept is that of a largely composite number, a positive integer which has at least as many divisors as any smaller positive integer. The name can be somewhat misleading, as the first two highly composite numbers (1 and 2) are not actually composite numbers; however, all further terms are. Ramanujan wrote a paper on highly composite numbers in 1915. The mathematician Jean-Pierre Kahane suggested that Plato must have known about highly composite numbers as he deliberately chose such a number, 5040 (= 7!), as the ideal number of citizens in a city. Examples The initial or smallest 40 highly composite numbers are listed in the table below . The number of divisors is given in the column labeled d(n). Asterisks indicate superior highly composite numbers. The divisors of the first 19 highly composite numbers are shown below. The table below shows all 72 divisors of 10080 by writing it as a product of two numbers in 36 different ways. The 15,000th highly composite number can be found on Achim Flammenkamp's website. It is the product of 230 primes: where is the sequence of successive prime numbers, and all omitted terms (a22 to a228) are factors with exponent equal to one (i.e. the number is ). More concisely, it is the product of seven distinct primorials: where is the primorial . Prime factorization Roughly speaking, for a number to be highly composite it has to have prime factors as small as possible, but not too many of the same. By the fundamental theorem of arithmetic, every positive integer n has a unique prime factorization: where are prime, and the exponents are positive integers. Any factor of n must have the same or lesser multiplicity in each prime: So the number of divisors of n is: Hence, for a highly composite number n, the k given prime numbers pi must be precisely the first k prime numbers (2, 3, 5, ..." https://en.wikipedia.org/wiki/Glossary%20of%20areas%20of%20mathematics,"Mathematics is a broad subject that is commonly divided in many areas that may be defined by their objects of study, by the used methods, or by both. For example, analytic number theory is a subarea of number theory devoted to the use of methods of analysis for the study of natural numbers. This glossary is alphabetically sorted. This hides a large part of the relationships between areas. For the broadest areas of mathematics, see . The Mathematics Subject Classification is a hierarchical list of areas and subjects of study that has been elaborated by the community of mathematicians. It is used by most publishers for classifying mathematical articles and books. 
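The divisor-counting rule used in the highly-composite-number discussion above — if n = p1^c1 · p2^c2 ··· pk^ck then the number of divisors is d(n) = (c1 + 1)(c2 + 1)···(ck + 1) — makes the definition easy to test by brute force. A short Python sketch (the search bound is arbitrary and purely illustrative):

```python
from math import prod

def divisor_count(n):
    """d(n) from the prime factorization: multiply (exponent + 1) over all primes."""
    exponents, p = [], 2
    while p * p <= n:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e:
            exponents.append(e)
        p += 1
    if n > 1:
        exponents.append(1)
    return prod(e + 1 for e in exponents)

# A highly composite number has more divisors than every smaller positive integer.
best, hcn = 0, []
for n in range(1, 50_000):
    d = divisor_count(n)
    if d > best:
        best = d
        hcn.append(n)
print(hcn)
# [1, 2, 4, 6, 12, 24, 36, 48, 60, 120, 180, 240, 360, 720, 840, 1260, 1680,
#  2520, 5040, 7560, 10080, 15120, 20160, 25200, 27720, 45360]
```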
A B C D E F G H I J K L M N O P Q R S T U V W See also Lists of mathematics topics Outline of mathematics :Category:Glossaries of mathematics" https://en.wikipedia.org/wiki/Host%20Based%20Security%20System,"Host Based Security System (HBSS) is the official name given to the United States Department of Defense (DOD) commercial off-the-shelf (COTS) suite of software applications used within the DOD to monitor, detect, and defend the DOD computer networks and systems. The Enterprise-wide Information Assurance and computer Network Defense Solutions Steering Group (ESSG) sponsored the acquisition of the HBSS System for use within the DOD Enterprise Network. HBSS is deployed on both the Non-Classified Internet Protocol Routed Network (NIPRNet) and Secret Internet Protocol Routed Network (SIPRNet) networks, with priority given to installing it on the NIPRNet. HBSS is based on McAfee, Inc's ePolicy Orchestrator (ePO) and other McAfee point product security applications such as Host Intrusion Prevention System (HIPS). History Seeing the need to supply a comprehensive, department-wide security suite of tools for DOD System Administrators, the ESSG started to gather requirements for the formation of a host-based security system in the summer of 2005. In March 2006, BAE Systems and McAfee were awarded a contract to supply an automated host-based security system to the department. After the award, 22 pilot sites were identified to receive the first deployments of HBSS. During the pilot roll out, DOD System Administrators around the world were identified and trained on using the HBSS software in preparation for software deployment across DOD. On October 9, 2007, the Joint Task Force for Global Network Operations (JTF-GNO) released Communications Tasking Order (CTO) 07-12 (Deployment of Host Based Security System (HBSS)) mandating the deployment of HBSS on all Combatant Command, Service and Agency (CC/S/A) networks within DOD with the completion date by the 3rd quarter of 2008. The release of this CTO brought HBSS to the attention of all major department heads and CC/S/A's, providing the ESSG with the necessary authority to enforce its deployment. Agencies not willing to co" https://en.wikipedia.org/wiki/List%20of%20physics%20concepts%20in%20primary%20and%20secondary%20education%20curricula,"This is a list of topics that are included in high school physics curricula or textbooks. 
Mathematical Background SI Units Scalar (physics) Euclidean vector Motion graphs and derivatives Pythagorean theorem Trigonometry Motion and forces Motion Force Linear motion Linear motion Displacement Speed Velocity Acceleration Center of mass Mass Momentum Newton's laws of motion Work (physics) Free body diagram Rotational motion Angular momentum (Introduction) Angular velocity Centrifugal force Centripetal force Circular motion Tangential velocity Torque Conservation of energy and momentum Energy Conservation of energy Elastic collision Inelastic collision Inertia Moment of inertia Momentum Kinetic energy Potential energy Rotational energy Electricity and magnetism Ampère's circuital law Capacitor Coulomb's law Diode Direct current Electric charge Electric current Alternating current Electric field Electric potential energy Electron Faraday's law of induction Ion Inductor Joule heating Lenz's law Magnetic field Ohm's law Resistor Transistor Transformer Voltage Heat Entropy First law of thermodynamics Heat Heat transfer Second law of thermodynamics Temperature Thermal energy Thermodynamic cycle Volume (thermodynamics) Work (thermodynamics) Waves Wave Longitudinal wave Transverse waves Transverse wave Standing Waves Wavelength Frequency Light Light ray Speed of light Sound Speed of sound Radio waves Harmonic oscillator Hooke's law Reflection Refraction Snell's law Refractive index Total internal reflection Diffraction Interference (wave propagation) Polarization (waves) Vibrating string Doppler effect Gravity Gravitational potential Newton's law of universal gravitation Newtonian constant of gravitation See also Outline of physics Physics education" https://en.wikipedia.org/wiki/Epibiont,"An epibiont (from the Ancient Greek meaning ""living on top of"") is an organism that lives on the surface of another living organism, called the basibiont (""living underneath""). The interaction between the two organisms is called epibiosis. An epibiont is, by definition, harmless to its host. In this sense, the interaction between the two organisms can be considered neutralistic or commensalistic; as opposed to being, for example, parasitic, in which case one organism benefits at the expense of the other, or mutualistic, in which both organisms obtain some explicit benefit from their coexistence. Examples of common epibionts are bacteria, barnacles, remoras, and algae, many of which live on the surfaces of larger marine organisms such as whales, sharks, sea turtles, and mangrove trees. Although there is no direct effect of the epibiont to the host, there are often indirect effects resulting from this interaction and change in the surface of the host. This has been found to be especially important to marine organisms and aquatic ecosystems, as surface qualities do impact necessary ecological functions such as drag, radiation absorption, nutrient uptake, etc. Types Epiphytes are plants that grow on the surface of other plants. Epizoic organisms are those that live on the surface of animals. Epibionts and their basibiont Epibiont: Korshikoviella gracilipes, Basibiont: Daphnia pulicaria Epibiont: Deltaproteobacteria, Basibiont: ""Candidatus Desulfobulbus rimicarensis"" Further examples Pagurus bernhardus and its epibionts P. bernhardus, or hermit crabs, acts as basibionts to many species of varying protozoa, hydrozoa, entoprocts, cirripeds, and polychaetes. The different types of epibionts are found on either the crab, the shell, or both the crab and the shell. 
In a study done over the course of two years, densities and diversity of epibionts were measured and considered. Multiple studies have found that P. bernardus in shells colonized with epibionts were likel" https://en.wikipedia.org/wiki/Augmented%20Reality%20Sandtable,"The Augmented Reality Sandtable (ARES) is an interactive, digital sand table that uses augmented reality (AR) technology to create a 3D battlespace map. It was developed by the Human Research and Engineering Directorate (HRED) at the Army Research Laboratory (ARL) to combine the positive aspects of traditional military sand tables with the latest digital technologies to better support soldier training and offer new possibilities of learning. It uses a projector to display a topographical map on top of the sand in a regular sandbox as well as a motion sensor that keeps track of changes in the layout of the sand to appropriately adjust the computer-generated terrain display. An ARL study conducted in 2017 with 52 active duty military personnel (36 males and 16 females) found that the participants who used ARES spent less time setting up the table compared to participants who used a traditional sand table. In addition, ARES demonstrated a lower perceived workload score, as measured using the NASA Task Load Index (NASA-TLX) ratings, compared to the traditional sand table. However, there was no significant difference in post-knowledge test scores in recreating the visual map. Development The ARES project was one of the 25 ARL initiatives in development from 1995 to 2015 that focused on visualizing spatial data on virtual or sand table interfaces. It was developed by HRED's Simulation and Training Technology Center (STTC) with Charles Amburn as the principal investigator. Collaborations involved with ARES included Dignitas Technologies, Design Interactive (DI), the University of Central Florida's Institute for Simulation and Training, and the U.S. Military Academy at West Point. ARES was largely designed to be a tangible user interface (TUI), in which digital information can be manipulated using physical objects such as a person's hand. It was constructed using commercial off-the-shelf components, including a projector, a laptop, an LCD monitor, Microsoft's Xbox Kinec" https://en.wikipedia.org/wiki/Router%20on%20a%20stick,"A router on a stick, also known as a one-armed router, is a router that has a single physical or logical connection to a network. It is a method of inter-VLAN routing where one router is connected to a switch via a single cable. The router has physical connections to the broadcast domains where one or more VLANs require the need for routing between them. Devices on separate VLANs or in a typical local area network are unable to communicate with each other. Therefore, it is often used to forward traffic between locally attached hosts on separate logical routing domains or to facilitate routing table administration, distribution and relay. Details One-armed routers that perform traffic forwarding are often implemented on VLANs. They use a single Ethernet network interface port that is part of two or more Virtual LANs, enabling them to be joined. A VLAN allows multiple virtual LANs to coexist on the same physical LAN. This means that two machines attached to the same switch cannot send Ethernet frames to each other even though they pass over the same wires. If they need to communicate, then a router must be placed between the two VLANs to forward packets, just as if the two LANs were physically isolated. 
The only difference is that the router in question may contain only a single Ethernet network interface controller (NIC) that is part of both VLANs. Hence, ""one-armed"". While uncommon, hosts on the same physical medium may be assigned with addresses and to different networks. A one-armed router could be assigned addresses for each network and be used to forward traffic between locally distinct networks and to remote networks through another gateway. One-armed routers are also used for administration purposes such as route collection, multi hop relay and looking glass servers. All traffic goes over the trunk twice, so the theoretical maximum sum of up and download speed is the line rate. For a two-armed configuration, uploading does not need to impact downloa" https://en.wikipedia.org/wiki/Polymath%20Project,"The Polymath Project is a collaboration among mathematicians to solve important and difficult mathematical problems by coordinating many mathematicians to communicate with each other on finding the best route to the solution. The project began in January 2009 on Timothy Gowers's blog when he posted a problem and asked his readers to post partial ideas and partial progress toward a solution. This experiment resulted in a new answer to a difficult problem, and since then the Polymath Project has grown to describe a particular crowdsourcing process of using an online collaboration to solve any math problem. Origin In January 2009, Gowers chose to start a social experiment on his blog by choosing an important unsolved mathematical problem and issuing an invitation for other people to help solve it collaboratively in the comments section of his blog. Along with the math problem itself, Gowers asked a question which was included in the title of his blog post, ""is massively collaborative mathematics possible?"" This post led to his creation of the Polymath Project. Projects for high school and college Since its inception, it has now sponsored a ""Crowdmath"" project in collaboration with MIT PRIMES program and the Art of Problem Solving. This project is built upon the same idea of the Polymath project that massive collaboration in mathematics is possible and possibly quite fruitful. However, this is specifically aimed at only high school and college students with a goal of creating ""a specific opportunity for the upcoming generation of math and science researchers."" The problems are original research and unsolved problems in mathematics. All high school and college students from around the world with advanced background of mathematics are encouraged to participate. Older participants are welcomed to participate as mentors and encouraged not to post solutions to the problems. The first Crowdmath project began on March 1, 2016. Problems solved Polymath1 The initial propose" https://en.wikipedia.org/wiki/Integrator,"An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output. Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest type and are still used for metering water flow or electrical power. Electronic analogue integrators are the basis of analog computers and charge amplifiers. Integration can also be performed by algorithms in digital computers. 
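In the discrete-time case mentioned above, integration by algorithm reduces to accumulating input samples scaled by the sampling interval (the rectangular, or forward-Euler, approximation of the integral); it is the digital counterpart of a capacitor accumulating charge, where v(t) = (1/C) ∫ i dt. A minimal Python sketch (class and parameter names are ours):

```python
class DigitalIntegrator:
    """Accumulates x over time: y[n] = y[n-1] + T * x[n] (rectangular rule)."""

    def __init__(self, sample_period):
        self.T = sample_period
        self.y = 0.0

    def step(self, x):
        self.y += self.T * x
        return self.y

# Integrating a constant input of 2.0 for one second at a 1 kHz sample rate gives ~2.0,
# just as a constant 2 A current into a 1 F capacitor raises its voltage by 2 V per second.
integ = DigitalIntegrator(sample_period=1e-3)
for _ in range(1000):
    out = integ.step(2.0)
print(round(out, 6))   # 2.0
```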
In signal processing circuits An electronic integrator is a form of first-order low-pass filter, which can be performed in the continuous-time (analog) domain or approximated (simulated) in the discrete-time (digital) domain. An integrator will have a low pass filtering effect but when given an offset it will accumulate a value building it until it reaches a limit of the system or overflows. A current integrator is an electronic device performing a time integration of an electric current, thus measuring a total electric charge. A capacitor's current–voltage relation makes it a very simple current integrator: More sophisticated current integrator circuits build on this relation, such as the charge amplifier. A current integrator is also used to measure the electric charge on a Faraday cup in a residual gas analyzer to measure partial pressures of gasses in a vacuum. Another application of current integration is in ion beam deposition, where the measured charge directly corresponds to the number of ions deposited on a substrate, assuming the charge state of the ions is known. The two current-carrying electrical leads must to be connected to the ion source and the substrate, closing the electric circuit which in part is given by the ion beam. A voltage integrator is an electronic device performing a time integration of an electric voltage, thus measuring the total volt-second product. A simple resistor–capa" https://en.wikipedia.org/wiki/Canadian%20Stem%20Cell%20Foundation,"The Canadian Stem Cell Foundation is an independent, non-profit organization established in 2008 and situated in Ottawa, Ontario. Stem Cell science is a Canadian innovation through the discovery of stem cells by Drs. James Till and Ernest McCulloch. It is globally known as the leading organization for stem cell research and support in the study of treatments and cures for diseases such as cancer, diabetes, blindness and stroke. The Canadian Stem Cell Strategy Their first strategy was created in 2013 to determine the concerns and actions required to develop an innovation that can advance stem cell research and clinics. The Canadian Stem Cell Foundation's goals are to invest a strategy for new treatments, sustainable healthcare, therapies and beneficial products. Their goals are beyond their capacity, such as ""using cells to treat respiratory heart diseases, restore lost vision, create a source of insulin-producing cells to treat diabetes, repair damaged spinal cords, reverse the effect of MS, Crohn's disease and other autoimmune disorders, reduce the ravages of Parkinson's disease and reverse tumour formation in the brain, breast and other solid tissues."" Their other goals are to bring together scientists, institutions, health charities, industry partners, regulators, funders and philanthropists in a universal vision in the developments of stem cell science research and have public and private sectors support in the funding for stem cell research in the long-term. There are many organizations involved such as the Stem Cell Network, Health Charities Coalition of Canada, Ontario Stem Cell Initiative, Centre for Commercialization of Regenerative Medicine, Ontario Bioscience Innovation Organization, and Cell CAN Regenerative Medicine and Cell Therapy Network. 
To follow updates regarding ""The Canadian Stem Cell Strategy,"" visit the site: http://www.stemcellfoundation.ca/en/blog/categories/listings/strategy-updates The Stem Cell Charter The Stem Cell Charter is an" https://en.wikipedia.org/wiki/Entrance%20facility,"In telecommunications, Entrance facility refers to the entrance to a building for both public and private network service cables (including antenna transmission lines, where applicable), including the entrance point at the building wall or floor, and continuing to the entrance room or entrance space. Entrance facilities are the transmission facilities (typically wires or cables) that connect competitive LECs’ networks with incumbent LECs’ networks. Computer networking" https://en.wikipedia.org/wiki/Transmission-line%20pulse,"Transmission-Line Pulse (TLP) is a way to study integrated circuit technologies and circuit behavior in the current and time domain of electrostatic-discharge (ESD) events. The concept was described shortly after WWII in pp. 175–189 of Pulse Generators, Vol. 5 of the MIT Radiation Lab Series. Also, D. Bradley, J. Higgins, M. Key, and S. Majumdar realized a TLP-based laser-triggered spark gap for kilovolt pulses of accurately variable timing in 1969. For investigation of ESD and electrical-overstress (EOS) effects a measurement system using a TLP generator has been introduced first by T. Maloney and N. Khurana in 1985. Since then, the technique has become indispensable for integrated circuit ESD protection development. The TLP technique is based on charging a long, floating cable to a pre-determined voltage, and discharging it into a Device-Under-Test (DUT). The cable discharge emulates an electro-static discharge event, but employing time-domain reflectometry (TDR), the change in DUT impedance can be monitored as a function of time. The first commercial TLP system was developed by Barth Electronics in 1990s. Since then, other commercial systems have been developed (e.g., by Thermo Fisher Scientific, Grundtech, ESDEMC Technology, High Power Pulse Instruments, Hanwa, TLPsol). A subset of TLP, VF-TLP (Very-Fast Transmission-Line Pulsing), has lately gained popularity with its improved resolution and bandwidth for analysis of ephemeral ESD events such as CDM (Charged Device Model) events. Pioneered by academia (University of Illinois) and commercialized by Barth Electronics, VF-TLP has become an important ESD analysis tool for analyzing modern high-speed semiconductor circuits. TLP Standards ANSI/ESD STM5.5.1-2016 Electrostatic Discharge Sensitivity Testing – Transmission Line Pulse (TLP) – Component Level ANSI/ESD SP5.5.2-2007 Electrostatic Discharge Sensitivity Testing - Very Fast Transmission Line Pulse (VF-TLP) - Component Level IEC 62615:2010 Electros" https://en.wikipedia.org/wiki/Optomyography,"Optomyography (OMG) was proposed in 2015 as a technique that could be used to monitor muscular activity. It is possible to use OMG for the same applications where Electromyography (EMG) and Mechanomyography (MMG) are used. However, OMG offers superior signal-to-noise ratio and improved robustness against the disturbing factors and limitations of EMG and MMG. The basic principle of OMG is to use active near-infra-red optical sensors to measure the variations in the measured signals that are reflected from the surface of the skin while activating the muscles below and around the skin spot where the photoelectric sensor is focusing to measure the signals reflected from this spot. 
Applications A glasses based optomyography device was patented for measuring facial expressions and emotional responses particularly for mental health monitoring . Generating proper control signals is the most important task to be able to control any kind of a prosthesis, computer game or any other system which contains a human-computer interaction unit or module. For this purpose, surface-Electromyographic (s-EMG) and Mechanomyographic (MMG) signals are measured during muscular activities and used, not only for monitoring and assessing these activities, but also to help in providing efficient rehabilitation treatment for patients with disabilities as well as in constructing and controlling sophisticated prostheses for various types of amputees and disabilities. However, while the existing s-EMG and MMG based systems have compelling benefits, many engineering challenges still remain unsolved, especially with regard to the sensory control system." https://en.wikipedia.org/wiki/Garden%20waste%20dumping,"Garden waste, or green waste dumping is the act of discarding or depositing garden waste somewhere it does not belong. Garden waste is the accumulated plant matter from gardening activities which involve cutting or removing vegetation, i.e. cutting the lawn, weed removal, hedge trimming or pruning consisting of lawn clippings. leaf matter, wood and soil. The composition and volume of garden waste can vary from season to season and location to location. A study in Aarhus, Denmark, found that on average, garden waste generation per person ranged between 122 kg to 155 kg per year. Garden waste may be used to create compost or mulch, which can be used as a soil conditioner, adding valuable nutrients and building humus. The creation of compost requires a balance between, nitrogen, carbon, moisture and oxygen. Without the ideal balance, plant matter may take a long time to break down, drawing nitrogen from other sources, reducing nitrogen availability to existing vegetation which requires it for growth. The risk of dumping garden waste is that it may contain seeds and plant parts that may grow (propagules), as well as increase fire fuel loads, disrupt visual amenity, accrue economic costs associated with the removal of waste as well as costs associated with the mitigation of associated impacts such as weed control, forest fire. Cause There are strong links between weed invasion of natural areas and the proximity and density of housing. The size and duration of the community have a direct relation to the density of weed infestation. Of the various means in which migration of exotic species from gardens take place, such as vegetative dispersal of runners, wind born and fallen seed, garden waste dumping can play a significant role. The results of one North German study found that of the problematic population of Fallopia, app. 29% originated from garden waste. Of a population of Heracleum mantegazzianum, 18% was found by Schepker to be generated by garden waste (as" https://en.wikipedia.org/wiki/Wideband%20audio,"Wideband audio, also known as wideband voice or HD voice, is high definition voice quality for telephony audio, contrasted with standard digital telephony ""toll quality"". It extends the frequency range of audio signals transmitted over telephone lines, resulting in higher quality speech. The range of the human voice extends from 100 Hz to 17 kHz but traditional, voiceband or narrowband telephone calls limit audio frequencies to the range of 300 Hz to 3.4 kHz. 
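The sample rates behind these bands follow from the Nyquist criterion: the sampling frequency must be at least twice the highest audio frequency to be reproduced, which is why 3.4 kHz narrowband telephony samples at 8 kHz and 7 kHz wideband codecs conventionally sample at 16 kHz. A small Python sketch (the bit-rate figures assume uncompressed PCM, purely for illustration):

```python
def min_sample_rate(highest_freq_hz):
    """Nyquist criterion: sample at least twice the highest frequency present."""
    return 2 * highest_freq_hz

def raw_pcm_bitrate(sample_rate_hz, bits_per_sample):
    """Uncompressed PCM bit rate in bit/s, before any codec compression."""
    return sample_rate_hz * bits_per_sample

# Narrowband passes up to ~3.4 kHz and samples at 8 kHz; a wideband codec
# passing up to ~7 kHz needs at least 14 kHz and conventionally uses 16 kHz.
print(min_sample_rate(3_400), min_sample_rate(7_000))          # 6800 14000
print(raw_pcm_bitrate(8_000, 8), raw_pcm_bitrate(16_000, 16))  # 64000 256000
```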
Wideband audio relaxes the bandwidth limitation and transmits in the audio frequency range of 50 Hz to 7 kHz. In addition, some wideband codecs may use a higher audio bit depth of 16 bits to encode samples, also resulting in much better voice quality. Wideband codecs have a typical sample rate of 16 kHz. For superwideband codecs the typical value is 32 kHz. History In 1987, the International Telecommunication Union (ITU) standardized a version of wideband audio known as G.722. Radio broadcasters began using G.722 over Integrated Services Digital Network (ISDN) to provide high-quality audio for remote broadcasts, such as commentary from sports venues. AMR-WB (G.722.2) was developed by Nokia and VoiceAge and it was first specified by 3GPP. The traditional telephone network (PSTN) is generally limited to narrowband audio by the intrinsic nature of its transmission technology, TDM (time-division multiplexing), and by the analogue-to-digital converters used at the edge of the network, as well as the speakers, microphones and other elements in the endpoints themselves. Wideband audio has been broadly deployed in conjunction with videoconferencing. Providers of this technology quickly discovered that despite the explicit emphasis on video transmission, the quality of the participant experience was significantly influenced by the fidelity of the associated audio signal. Communications via Voice over Internet Protocol (VoIP) can readily employ wideband audio. When PC-to-PC calls are placed via VoIP services, such " https://en.wikipedia.org/wiki/Archaeobiology,"Archaeobiology, the study of the biology of ancient times through archaeological materials, is a subspecialty of archaeology. It can be seen as a blanket term for paleobotany, animal osteology, zooarchaeology, microbiology, and many other sub-disciplines. Specifically, plant and animal remains are also called ecofacts. Sometimes these ecofacts can be left by humans and sometimes they can be naturally occurring. Archaeobiology tends to focus on more recent finds, so the difference between archaeobiology and palaeontology is mainly one of date: archaeobiologists typically work with more recent, non-fossilised material found at archaeological sites. Only very rarely are archaeobiological excavations performed at sites with no sign of human presence. Flora and Fauna in Archaeology The prime interest of paleobotany is to reconstruct the vegetation that people in the past would have encountered in a particular place and time. Plant studies have always been overshadowed by faunal studies because bones are more conspicuous than plant remains when excavating. Collection of plant remains could everything including pollen, soil, diatoms, wood, plant remains and phytoliths. Phytoliths are sediments and diatoms are water deposits. Each plant remain can tell the archaeologist different things about the environment during a certain time period. Animal remains were the first evidence used by 19th century archaeologists. Today, archaeologists use faunal remains as a guide to the environment. It helps archaeologists understand whether the fauna were present naturally or through activities of carnivores or people. Archaeologists deal with macrofauna and microfauna. Microfauna are better indicators of climate and environmental change than larger species. These can be as small as a bug or as big as a fish or bird. Macrofauna helps archaeologists build a picture of past human diet. 
Bacteria and Protists in Archaeology Bacteria and Protists form two separate kingdoms, but both are fa" https://en.wikipedia.org/wiki/Biofact%20%28biology%29,"In biology, a biofact is dead material of a once-living organism. In 1943, the protozoologist Bruno M. Klein of Vienna (1891–1968) coined the term in his article Biofakt und Artefakt in the microscopy journal Mikrokosmos, though at that time it was not adopted by the scientific community. Klein's concept of biofact stressed the dead materials produced by living organisms as sheaths, such as shells. The word biofact is now widely used in the zoo/aquarium world, but was first used by Lisbeth Bornhofft in 1993 in the Education Department at the New England Aquarium, Boston, to refer to preserved items such as animal bones, skins, molts and eggs. The Accreditation Standards and Related Policies of the Association of Zoos and Aquariums states that biofacts can be useful education tools, and are preferable to live animals because of potential ethical considerations. See also Biofact (archaeology)" https://en.wikipedia.org/wiki/Fingerprint%20scanner,"Fingerprint scanners are security systems of biometrics. They are used in police stations, security industries, smartphones, and other mobile devices. Fingerprints People have patterns of friction ridges on their fingers, these patterns are called the fingerprints. Fingerprints are uniquely detailed, durable over an individual's lifetime, and difficult to alter. Due to the unique combinations, fingerprints have become an ideal means of identification. Types of fingerprint scanners There are four types of fingerprint scanners: optical scanners, capacitance scanners, ultrasonic scanners, and thermal scanners. The basic function of every type of scanner is to obtain an image of a person's fingerprint and find a match for it in its database. The measure of the fingerprint image quality is in dots per inch (DPI). Optical scanners take a visual image of the fingerprint using a digital camera. Capacitive or CMOS scanners use capacitors and thus electric current to form an image of the fingerprint. This type of scanner tends to excel in terms of precision. Ultrasonic fingerprint scanners use high frequency sound waves to penetrate the epidermal (outer) layer of the skin. Thermal scanners sense the temperature differences on the contact surface, in between fingerprint ridges and valleys. All fingerprint scanners are susceptible to be fooled by a technique that involves photographing fingerprints, processing the photographs using special software, and printing fingerprint replicas using a 3D printer. Construction forms There are two construction forms: the stagnant and the moving fingerprint scanner. Stagnant: The finger must be dragged over the small scanning area. This is cheaper and less reliable than the moving form. Imaging can be less than ideal when the finger is not dragged over the scanning area at constant speed. Moving: The finger lies on the scanning area while the scanner runs underneath. Because the scanner moves at constant speed over the fingerpri" https://en.wikipedia.org/wiki/Negative-bias%20temperature%20instability,"Negative-bias temperature instability (NBTI) is a key reliability issue in MOSFETs, a type of transistor aging. NBTI manifests as an increase in the threshold voltage and consequent decrease in drain current and transconductance of a MOSFET. The degradation is often approximated by a power-law dependence on time. 
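A minimal sketch of that power-law approximation, dVth(t) ≈ A·t^n, in Python; the prefactor and exponent below are placeholder values chosen only to show the shape of the curve, not fitted device parameters:

```python
def nbti_delta_vth(t_seconds, prefactor=1e-3, exponent=0.2):
    """Power-law model of the NBTI threshold-voltage shift: dVth = A * t**n.
    Both parameters are illustrative placeholders, not measured values."""
    return prefactor * t_seconds ** exponent

for t in (1, 10, 100, 1_000, 10_000):
    print(f"t = {t:>6} s   dVth ~ {nbti_delta_vth(t) * 1e3:.2f} mV")
# Each decade of stress time multiplies the shift by 10**0.2 ~ 1.58.
```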
It is of immediate concern in p-channel MOS devices (pMOS), since they almost always operate with negative gate-to-source voltage; however, the very same mechanism also affects nMOS transistors when biased in the accumulation regime, i.e. with a negative bias applied to the gate. More specifically, over time positive charges become trapped at the oxide-semiconductor boundary underneath the gate of a MOSFET. These positive charges partially cancel the negative gate voltage without contributing to conduction through the channel as electron holes in the semiconductor are supposed to. When the gate voltage is removed, the trapped charges dissipate over a time scale of milliseconds to hours. The problem has become more acute as transistors have shrunk, as there is less averaging of the effect over a large gate area. Thus, different transistors experience different amounts of NBTI, defeating standard circuit design techniques for tolerating manufacturing variability which depend on the close matching of adjacent transistors. NBTI has become significant for portable electronics because it interacts badly with two common power-saving techniques: reduced operating voltages and clock gating. With lower operating voltages, the NBTI-induced threshold voltage change is a larger fraction of the logic voltage and disrupts operations. When a clock is gated off, transistors stop switching and NBTI effects accumulate much more rapidly. When the clock is re-enabled, the transistor thresholds have changed and the circuit may not operate. Some low-power designs switch to a low-frequency clock rather than stopping completely in order to mitigate NBTI effects. Physics " https://en.wikipedia.org/wiki/Biological%20systems%20engineering,"Biological systems engineering or Biosystems engineering is a broad-based engineering discipline with particular emphasis on non-medical biology. It can be thought of as a subset of the broader notion of biological engineering or bio-technology though not in the respects that pertain to biomedical engineering as biosystems engineering tends to focus less on medical applications than on agriculture, ecosystems, and food science. The discipline focuses broadly on environmentally sound and sustainable engineering solutions to meet societies' ecologically related needs. Biosystems engineering integrates the expertise of fundamental engineering fields with expertise from non-engineering disciplines. Background and organization Many college and university biological engineering departments have a history of being grounded in agricultural engineering and have only in the past two decades or so changed their names to reflect the movement towards more diverse biological based engineering programs. This major is sometimes called agricultural and biological engineering, biological and environmental engineering, etc., in different universities, generally reflecting interests of local employment opportunities. Since biological engineering covers a wide spectrum, many departments now offer specialization options. Depending on the department and the specialization options offered within each program, curricula may overlap with other related fields. There are a number of different titles for BSE-related departments at various universities. The professional societies commonly associated with many Biological Engineering programs include the American Society of Agricultural and Biological Engineers (ASABE) and the Institute of Biological Engineering (IBE), which generally encompasses BSE. 
Some program also participate in the Biomedical Engineering Society (BMES) and the American Institute of Chemical Engineers (AIChE). A biological systems engineer has a background in what bot" https://en.wikipedia.org/wiki/Host%20%28biology%29,"In biology and medicine, a host is a larger organism that harbours a smaller organism; whether a parasitic, a mutualistic, or a commensalist guest (symbiont). The guest is typically provided with nourishment and shelter. Examples include animals playing host to parasitic worms (e.g. nematodes), cells harbouring pathogenic (disease-causing) viruses, or a bean plant hosting mutualistic (helpful) nitrogen-fixing bacteria. More specifically in botany, a host plant supplies food resources to micropredators, which have an evolutionarily stable relationship with their hosts similar to ectoparasitism. The host range is the collection of hosts that an organism can use as a partner. Symbiosis Symbiosis spans a wide variety of possible relationships between organisms, differing in their permanence and their effects on the two parties. If one of the partners in an association is much larger than the other, it is generally known as the host. In parasitism, the parasite benefits at the host's expense. In commensalism, the two live together without harming each other, while in mutualism, both parties benefit. Most parasites are only parasitic for part of their life cycle. By comparing parasites with their closest free-living relatives, parasitism has been shown to have evolved on at least 233 separate occasions. Some organisms live in close association with a host and only become parasitic when environmental conditions deteriorate. A parasite may have a long-term relationship with its host, as is the case with all endoparasites. The guest seeks out the host and obtains food or another service from it, but does not usually kill it. In contrast, a parasitoid spends a large part of its life within or on a single host, ultimately causing the host's death, with some of the strategies involved verging on predation. Generally, the host is kept alive until the parasitoid is fully grown and ready to pass on to its next life stage. A guest's relationship with its host may be intermitten" https://en.wikipedia.org/wiki/Li-Fi%20Consortium,"The Li-Fi Consortium is an international organization focusing on optical wireless technologies. It was founded by four technology-based organizations in October 2011. The goal of the Li-Fi Consortium is to foster the development and distribution of (Li-Fi) optical wireless technologies such as communication, navigation, natural user interfaces and others. Status the Li-Fi Consortium outlined a roadmap for different types of optical communication such as gigabit-class communication as well as a full featured Li-Fi cloud which includes many more besides wireless infrared and visible light communication." https://en.wikipedia.org/wiki/Isochore%20%28genetics%29,"In genetics, an isochore is a large region of genomic DNA (greater than 300 kilobases) with a high degree of uniformity in GC content; that is, guanine (G) and cytosine (C) bases. The distribution of bases within a genome is non-random: different regions of the genome have different amounts of G-C base pairs, such that regions can be classified and identified by the proportion of G-C base pairs they contain. Bernardi and colleagues first noticed the compositional non-uniformity of vertebrate genomes using thermal melting and density gradient centrifugation. 
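GC content itself is just the fraction of G and C bases in a stretch of sequence, and scanning a genome in fixed windows is the usual way to make compositional uniformity of the kind described above visible. A minimal Python sketch (the toy sequence and tiny window are illustrative; real isochore analyses use windows of hundreds of kilobases):

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA string."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def windowed_gc(seq, window):
    """GC content of consecutive non-overlapping windows."""
    return [gc_content(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]

# Toy example: a GC-poor stretch followed by a GC-rich stretch.
toy = "ATATTAATTAAT" * 20 + "GCGGCCGCGGCG" * 20
print([round(x, 2) for x in windowed_gc(toy, window=60)])
# -> [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]
```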
The DNA fragments extracted by the gradient centrifugation were later termed ""isochores"", which was subsequently defined as ""very long (much greater than 200 KB) DNA segments"" that ""are fairly homogeneous in base composition and belong to a small number of major classes distinguished by differences in guanine-cytosine (GC) content"". Subsequently, the isochores ""grew"" and were claimed to be "">300 kb in size."" The theory proposed that the isochore composition of genomes varies markedly between ""warm-blooded"" (homeotherm) vertebrates and ""cold-blooded"" (poikilotherm) vertebrates and later became known as the isochore theory. The thermodynamic stability hypothesis The isochore theory purported that the genome of ""warm-blooded"" vertebrates (mammals and birds) are mosaics of long isochoric regions of alternating GC-poor and GC-rich composition, as opposed to the genome of ""cold-blooded"" vertebrates (fishes and amphibians) that were supposed to lack GC-rich isochores. These findings were explained by the thermodynamic stability hypothesis, attributing genomic structure to body temperature. GC-rich isochores were purported to be a form of adaptation to environmental pressures, as an increase in genomic GC-content could protect DNA, RNA, and proteins from degradation by heat. Despite its attractive simplicity, the thermodynamic stability hypothesis has been repeatedly shown to be in error . Many authors show" https://en.wikipedia.org/wiki/Pipeline%20forwarding,"Pipeline forwarding (PF) applies to packet forwarding in computer networks the basic concept of pipelining, which has been widely and successfully used in computing — specifically, in the architecture of all major central processing units (CPUs) — and manufacturing — specifically in assembly lines of various industries starting from automotive to many others. Pipelining is known to be optimal independent of the specific instantiation. In particular, PF is optimal from various points of view: High efficiency in utilization of network resources, which enables accommodating a larger amount of traffic on the network, thus lowering operation cost and being the foundation for accommodating the exponential growth of modern networks. Low implementation complexity, which enables the realization of larger and more powerful networking systems at low cost, thus offering further support to network growth. High scalability, which is an immediate consequence of the above two features. Deterministic and predictable operation with minimum delay and no packet loss even under full load condition, which is key in supporting the demanding requirements of the new and valuable services that are being deployed, or envisioned to be deployed, on modern networks, such as telephony, videoconferencing, virtual presence, video on demand, distributed gaming. Various aspects of the technology are covered by several patents issued by both the United States Patent and Trademark Office and the European Patent Office. Operating principles As in other pipelining implementations, a common time reference (CTR) is needed to perform pipeline forwarding. In the context of global networks the CTR can be effectively realized by using UTC (coordinated universal time) that is globally available via GPS (global positioning system) or Galileo in the near future. 
For example, the UTC second is divided into fixed duration time frames, which are grouped into time cycles so that in each UTC second there is a" https://en.wikipedia.org/wiki/Wireless%20engineering,"Wireless Engineering is the branch of engineering which addresses the design, application, and research of wireless communication systems and technologies. Overview Wireless engineering is an engineering subject dealing with engineering problems using wireless technology such as radio communications and radar, but it is more general than conventional radio engineering. It may include using other techniques such as acoustic, infrared, and optical technologies. History Wireless technologies have advanced rapidly since their beginnings in the late 19th century. With the invention of the FM radio in 1935, wireless communications have become a concentrated focus of both private and government sectors. In Education Auburn University's Samuel Ginn College of Engineering was the first in the United States to offer a formalized undergraduate degree in such a field. The program was initiated by Samuel Ginn, an Auburn alumnus, in 2001. Auburn University's college of engineering divides its wireless engineering program into two areas of applicable study: electrical engineering (pertaining to circuit design, digital signal processing, antenna design, etc.), and software-oriented wireless engineering (communications networks, mobile platform applications, systems software, etc.) Macquarie University in Sydney was the first university to offer Wireless Engineering in Australia. The university works closely with nearby industries in research and teaching development in wireless engineering. Universiti Teknikal Malaysia Melaka in Malacca was the first university to offer Wireless Communication Engineering in Malaysia. Applications Wireless engineering covers a wide spectrum of applications, most notably cellular networks. The recent popularity of cellular networks has created substantial career demand. The popularity has also sparked many wireless innovations, such as increased network capacity, 3G, cryptology and network security technologies." https://en.wikipedia.org/wiki/FI6%20%28antibody%29,"FI6 is an antibody that targets a protein found on the surface of all influenza A viruses called hemagglutinin. FI6 is the only known antibody found to bind all 16 subtypes of the influenza A virus hemagglutinin and is hoped to be useful for a universal influenza virus therapy. The antibody binds to the F domain HA trimer, and prevents the virus from attaching to the host cell. The antibody has been refined in order to remove any excess, unstable mutations that could negatively affect its neutralising ability, and this new version of the antibody has been termed ""FI6v3"". Research Researchers from Britain and Switzerland have previously found antibodies that work against Group 1 influenza A viruses or against most Group 2 viruses (CR8020), but not against both. This team developed a method using single-cell screening to test very large numbers of human plasma cells, to increase their odds of finding an antibody even if it was extremely rare. When they identified FI6, they injected it into mice and ferrets and found that it protected the animals against infection by either a Group 1 or Group 2 influenza A virus. 
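To make the common-time-reference scheme in the pipeline-forwarding passage above concrete, the sketch below maps a UTC timestamp to a time-frame and time-cycle position within the current second; the frame duration and frames-per-cycle values are illustrative assumptions only.

```python
# Minimal sketch of a pipeline-forwarding common time reference (CTR).
# Frame duration and cycle length are illustrative assumptions.

TIME_FRAME_US = 125          # assumed frame duration: 125 microseconds
FRAMES_PER_CYCLE = 100       # assumed number of frames grouped into one time cycle
FRAMES_PER_SECOND = 1_000_000 // TIME_FRAME_US   # 8000 frames in every UTC second

def ctr_position(utc_seconds: float):
    """Return (frame_in_second, cycle_in_second, frame_in_cycle) for a UTC time."""
    micros_into_second = int((utc_seconds % 1.0) * 1_000_000)
    frame_in_second = micros_into_second // TIME_FRAME_US
    return (frame_in_second,
            frame_in_second // FRAMES_PER_CYCLE,
            frame_in_second % FRAMES_PER_CYCLE)

if __name__ == "__main__":
    import time
    now = time.time()          # epoch seconds, e.g. from a GPS-disciplined clock
    frame, cycle, offset = ctr_position(now)
    print(f"frame {frame}/{FRAMES_PER_SECOND} of this second "
          f"(cycle {cycle}, frame {offset} within the cycle)")
```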
Scientists screened 104,000 peripheral-blood plasma cells from eight recently infected or vaccinated donors for antibodies that recognize each of three diverse influenza strains: H1N1 (swine-origin) and H5N1 and H7N7 (highly pathogenic avian influenzas.) From one donor, they isolated four plasma cells that produced an identical antibody, which they called FI6. This antibody binds all 16 HA subtypes, neutralizes infection, and protects mice and ferrets from lethal infection. The most broadly reactive antibodies that had previously been discovered recognized either one group of HA subtypes or the other, highlighting how remarkable FI6 is in its ability to target the gamut of influenza subtypes. Clinical implication Researchers determined the crystal structure of the FI6 antibody when it was bound to H1 and H3 HA proteins. Sitting atop the HA spike is a globular h" https://en.wikipedia.org/wiki/Mikroelektronika,"MikroElektronika (stylized as MikroE) is a Serbian manufacturer and retailer of hardware and software tools for developing embedded systems. The company headquarters is in Belgrade, Serbia. Its best known software products are mikroC, mikroBasic and mikroPascal compilers for programming microcontrollers. Its flagship hardware product line is Click boards, a range of more than 550 add-on boards for interfacing microcontrollers with peripheral sensors or transceivers. These boards conform to mikroBUS – a standard conceived by MikroElektronika and later endorsed by NXP Semiconductors and Microchip Technology, among others. MikroElektronika is also known for Hexiwear, an Internet of things development kit developed in partnership with NXP Semiconductors. History Serbian entrepreneur – and current company owner and CEO – Nebojša Matić started publishing an electronics magazine called ""MikroElektronika"" in 1997. In 2001, the magazine was shut down and MikroElektronika repositioned itself as a company focused on producing development boards for microcontrollers and publishing books for developing embedded systems. The company started offering compilers in 2004, with the release of mikroPascal for PIC and mikroBasic for PIC – compilers for programming 8-bit microcontrollers from Microchip Technology. Between 2004 and 2015 the company released C, Basic and Pascal compilers for seven microcontroller architectures: PIC, PIC32, dsPIC/PIC24, FT90x, AVR, 8051, and ARM® (supporting STMicroelectronics, Texas Instruments and Microchip-based ARM® Cortex microcontrollers). In conjunction with compilers, MikroElektronika kept its focus on producing development boards while gradually ceasing its publishing activities. Its current generation of the ""Easy"" boards brand was released in 2012. One of the flagship models, EasyPIC Fusion v7 was nominated for best tool at the Embedded World 2013 exhibition in Nurembeg, an important embedded systems industry gathering. Other product lines we" https://en.wikipedia.org/wiki/Discrete%20system,"In theoretical computer science, a discrete system is a system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A final discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite-state machine that may be viewed as a discrete system. 
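As a minimal illustration of the discrete-system passage above, where a system with countably many states is modelled as a directed graph, here is a tiny finite-state machine stored as a transition table; the parity-tracking machine itself is an invented example, not one taken from the source.

```python
# A discrete system as a directed graph: states are nodes, inputs label the edges.
# Example machine: tracks whether an even or odd number of '1' bits has been seen.

TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run(machine, start, inputs):
    """Walk the directed graph edge by edge; the set of reachable states stays finite."""
    state = start
    for symbol in inputs:
        state = machine[(state, symbol)]
    return state

if __name__ == "__main__":
    print(run(TRANSITIONS, "even", "10110"))   # three 1s seen, so the result is 'odd'
```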
Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals. See also Digital control Finite-state machine Frequency spectrum Mathematical model Sample and hold Sample rate Sample time Z-transform" https://en.wikipedia.org/wiki/Process%20corners,"In semiconductor manufacturing, a process corner is an example of a design-of-experiments (DoE) technique that refers to a variation of fabrication parameters used in applying an integrated circuit design to a semiconductor wafer. Process corners represent the extremes of these parameter variations within which a circuit that has been etched onto the wafer must function correctly. A circuit running on devices fabricated at these process corners may run slower or faster than specified and at lower or higher temperatures and voltages, but if the circuit does not function at all at any of these process extremes the design is considered to have inadequate design margin. To verify the robustness of an integrated circuit design, semiconductor manufacturers will fabricate corner lots, which are groups of wafers that have had process parameters adjusted according to these extremes, and will then test the devices made from these special wafers at varying increments of environmental conditions, such as voltage, clock frequency, and temperature, applied in combination (two or sometimes all three together) in a process called characterization. The results of these tests are plotted using a graphing technique known as a shmoo plot that indicates clearly the boundary limit beyond which a device begins to fail for a given combination of these environmental conditions. Corner-lot analysis is most effective in digital electronics because of the direct effect of process variations on the speed of transistor switching during transitions from one logic state to another, which is not relevant for analog circuits, such as amplifiers. Significance to digital electronics In Very-Large-Scale Integration (VLSI) integrated circuit microprocessor design and semiconductor fabrication, a process corner represents a three or six sigma variation from nominal doping concentrations (and other parameters) in transistors on a silicon wafer. This variation can cause significant changes in the dut" https://en.wikipedia.org/wiki/Circuit%20extraction,"The electric circuit extraction or simply circuit extraction, also netlist extraction, is the translation of an integrated circuit layout back into the electrical circuit (netlist) it is intended to represent. This extracted circuit is needed for various purposes including circuit simulation, static timing analysis, signal integrity, power analysis and optimization, and logic to layout comparison. Each of these functions require a slightly different representation of the circuit, resulting in the need for multiple layout extractions. In addition, there may be a postprocessing step of converting the device-level circuit into a purely digital circuit, but this is not considered part of the extraction process. The detailed functionality of an extraction process will depend on its system environment. The simplest form of extracted circuit may be in the form of a netlist, which is formatted for a particular simulator or analysis program. 
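The method mentioned in the discrete-system passage above, sampling a continuous signal at discrete time intervals, can be sketched in a few lines; the 5 Hz sine and the 50 Hz sample rate are arbitrary illustrative choices.

```python
# Minimal sketch: sampling a continuous-time signal at discrete intervals.
import math

def continuous_signal(t: float) -> float:
    """A stand-in continuous signal: a 5 Hz sine wave (illustrative)."""
    return math.sin(2 * math.pi * 5 * t)

def sample(signal, duration_s: float, sample_rate_hz: float):
    """Return the discrete-time sequence signal(n / fs) for n = 0 .. N-1."""
    n_samples = int(duration_s * sample_rate_hz)
    step = 1.0 / sample_rate_hz
    return [signal(n * step) for n in range(n_samples)]

if __name__ == "__main__":
    samples = sample(continuous_signal, duration_s=0.2, sample_rate_hz=50)
    print(len(samples), "samples:", [round(x, 3) for x in samples[:10]])
```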
A more complex extraction may involve writing the extracted circuit back into the original database containing the physical layout and the logic diagram. In this case, by associating the extracted circuit with the layout and the logic network, the user can cross-reference any point in the circuit to its equivalent points in the logic and layout (cross-probing). For simulation or analysis, various formats of netlist can then be generated using programs that read the database and generate the appropriate text information. In extraction, it is often helpful to make an (informal) distinction between designed devices, which are devices that are deliberately created by the designer, and parasitic devices, which were not explicitly intended by the designer but are inherent in the layout of the circuit. Primarily there are three different parts to the extraction process. These are designed device extraction, interconnect extraction, and parasitic device extraction. These parts are inter-related since various device extractions can change th" https://en.wikipedia.org/wiki/Sessility%20%28motility%29,"Sessility is the biological property of an organism describing its lack of a means of self-locomotion. Sessile organisms for which natural motility is absent are normally immobile. This is distinct from the botanical concept of sessility, which refers to an organism or biological structure attached directly by its base without a stalk. Sessile organisms can move via external forces (such as water currents), but are usually permanently attached to something. Organisms such as corals lay down their own substrate from which they grow. Other sessile organisms grow from a solid object, such as a rock, a dead tree trunk, or a man-made object such as a buoy or ship's hull. Mobility Sessile animals typically have a motile phase in their development. Sponges have a motile larval stage and become sessile at maturity. Conversely, many jellyfish develop as sessile polyps early in their life cycle. In the case of the cochineal, it is in the nymph stage (also called the crawler stage) that the cochineal disperses. The juveniles move to a feeding spot and produce long wax filaments. Later they move to the edge of the cactus pad where the wind catches the wax filaments and carries the tiny larval cochineals to a new host. Reproduction Many sessile animals, including sponges, corals and hydra, are capable of asexual reproduction in situ by the process of budding. Sessile organisms such as barnacles and tunicates need some mechanism to move their young into new territory. This is why the most widely accepted theory explaining the evolution of a larval stage is the need for long-distance dispersal ability. Biologist Wayne Sousa's 1979 study in intertidal disturbance added support for the theory of nonequilibrium community structure, ""suggesting that open space is necessary for the maintenance of diversity in most communities of sessile organisms"". Clumping Clumping is a behavior in sessile organisms in which individuals of a particular species group closely to one another for ben" https://en.wikipedia.org/wiki/List%20of%20multivariable%20calculus%20topics,"This is a list of multivariable calculus topics. See also multivariable calculus, vector calculus, list of real analysis topics, list of calculus topics. 
Closed and exact differential forms Contact (mathematics) Contour integral Contour line Critical point (mathematics) Curl (mathematics) Current (mathematics) Curvature Curvilinear coordinates Del Differential form Differential operator Directional derivative Divergence Divergence theorem Double integral Equipotential surface Euler's theorem on homogeneous functions Exterior derivative Flux Frenet–Serret formulas Gauss's law Gradient Green's theorem Green's identities Harmonic function Helmholtz decomposition Hessian matrix Hodge star operator Inverse function theorem Irrotational vector field Isoperimetry Jacobian matrix Lagrange multiplier Lamellar vector field Laplacian Laplacian vector field Level set Line integral Matrix calculus Mixed derivatives Monkey saddle Multiple integral Newtonian potential Parametric equation Parametric surface Partial derivative Partial differential equation Potential Real coordinate space Saddle point Scalar field Solenoidal vector field Stokes' theorem Submersion Surface integral Symmetry of second derivatives Taylor's theorem Total derivative Vector field Vector operator Vector potential list Mathematics-related lists Outlines of mathematics and logic Outlines" https://en.wikipedia.org/wiki/Lee%20algorithm,"The Lee algorithm is one possible solution for maze routing problems based on breadth-first search. It always gives an optimal solution, if one exists, but is slow and requires considerable memory. Algorithm 1) Initialization - Select start point, mark with 0 - i := 0 2) Wave expansion - REPEAT - Mark all unlabeled neighbors of points marked with i with i+1 - i := i+1 UNTIL ((target reached) or (no points can be marked)) 3) Backtrace - go to the target point REPEAT - go to next node that has a lower mark than the current node - add this node to path UNTIL (start point reached) 4) Clearance - Block the path for future wirings - Delete all marks Of course the wave expansion marks only points in the routable area of the chip, not in the blocks or already wired parts, and to minimize segmentation you should keep in one direction as long as possible. External links http://www.eecs.northwestern.edu/~haizhou/357/lec6.pdf" https://en.wikipedia.org/wiki/On-Line%20Encyclopedia%20of%20Integer%20Sequences,"The On-Line Encyclopedia of Integer Sequences (OEIS) is an online database of integer sequences. It was created and maintained by Neil Sloane while researching at AT&T Labs. He transferred the intellectual property and hosting of the OEIS to the OEIS Foundation in 2009. Sloane is the chairman of the OEIS Foundation. OEIS records information on integer sequences of interest to both professional and amateur mathematicians, and is widely cited. , it contains over 360,000 sequences, making it the largest database of its kind. Each entry contains the leading terms of the sequence, keywords, mathematical motivations, literature links, and more, including the option to generate a graph or play a musical representation of the sequence. The database is searchable by keyword, by subsequence, or by any of 16 fields. History Neil Sloane started collecting integer sequences as a graduate student in 1964 to support his work in combinatorics. The database was at first stored on punched cards. He published selections from the database in book form twice: A Handbook of Integer Sequences (1973, ), containing 2,372 sequences in lexicographic order and assigned numbers from 1 to 2372. 
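Here is a runnable sketch of the Lee algorithm outlined above: breadth-first wave expansion from the start point, followed by a backtrace from the target along decreasing marks. The grid encoding (0 for routable cells, 1 for blocked cells) is an implementation choice, not something the source specifies.

```python
# Lee algorithm: BFS wave expansion from the start, then backtrace from the target.
from collections import deque

def lee_route(grid, start, target):
    """grid: 2D list, 0 = routable cell, 1 = blocked. Returns a shortest path or None."""
    rows, cols = len(grid), len(grid[0])
    mark = [[None] * cols for _ in range(rows)]      # wave labels
    mark[start[0]][start[1]] = 0
    queue = deque([start])

    # 1) Wave expansion: label unmarked routable neighbours with i + 1.
    while queue and mark[target[0]][target[1]] is None:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and mark[nr][nc] is None:
                mark[nr][nc] = mark[r][c] + 1
                queue.append((nr, nc))

    if mark[target[0]][target[1]] is None:
        return None                                   # no route exists

    # 2) Backtrace: from the target, step to any neighbour with a mark one lower.
    path = [target]
    r, c = target
    while (r, c) != start:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and mark[nr][nc] is not None and mark[nr][nc] == mark[r][c] - 1:
                r, c = nr, nc
                path.append((r, c))
                break
    return list(reversed(path))

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(lee_route(grid, start=(0, 0), target=(3, 3)))
```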
The Encyclopedia of Integer Sequences with Simon Plouffe (1995, ), containing 5,488 sequences and assigned M-numbers from M0000 to M5487. The Encyclopedia includes the references to the corresponding sequences (which may differ in their few initial terms) in A Handbook of Integer Sequences as N-numbers from N0001 to N2372 (instead of 1 to 2372.) The Encyclopedia includes the A-numbers that are used in the OEIS, whereas the Handbook did not. These books were well received and, especially after the second publication, mathematicians supplied Sloane with a steady flow of new sequences. The collection became unmanageable in book form, and when the database had reached 16,000 entries Sloane decided to go online—first as an email service (August 1994), and soon after as a website (1996). As a spin-off fro" https://en.wikipedia.org/wiki/EDA%20database,"An EDA database is a database specialized for the purpose of electronic design automation. These application specific databases are required because general purpose databases have historically not provided enough performance for EDA applications. In examining EDA design databases, it is useful to look at EDA tool architecture, to determine which parts are to be considered part of the design database, and which parts are the application levels. In addition to the database itself, many other components are needed for a useful EDA application. Associated with a database are one or more language systems (which, although not directly part of the database, are used by EDA applications such as parameterized cells and user scripts). On top of the database are built the algorithmic engines within the tool (such as timing, placement, routing, or simulation engines ), and the highest level represents the applications built from these component blocks, such as floorplanning. The scope of the design database includes the actual design, library information, technology information, and the set of translators to and from external formats such as Verilog and GDSII. Mature design databases Many instances of mature design databases exist in the EDA industry, both as a basis for commercial EDA tools as well as proprietary EDA tools developed by the CAD groups of major electronics companies. IBM, Hewlett-Packard, SDA Systems and ECAD (now Cadence Design Systems), High Level Design Systems, and many other companies developed EDA specific databases over the last 20 years, and these continue to be the basis of IC-design systems today. Many of these systems took ideas from university research and successfully productized them. Most of the mature design databases have evolved to the point where they can represent netlist data, layout data, and the ties between the two. They are hierarchical to allow for reuse and smaller designs. They can support styles of layout from digital through pur" https://en.wikipedia.org/wiki/Aliasing,"In signal processing and related disciplines, aliasing is the overlapping of frequency components resulting from a sample rate below the Nyquist rate. This overlap results in distortion or artifacts when the signal is reconstructed from samples which causes the reconstructed signal to differ from the original continuous signal. Aliasing that occurs in signals sampled in time, for instance in digital audio or the stroboscopic effect, is referred to as temporal aliasing. Aliasing in spatially sampled signals (e.g., moiré patterns in digital images) is referred to as spatial aliasing. 
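A small numerical illustration of the aliasing just described: a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency tone folded back below half the sample rate. The 20 kHz tone and 32 kHz sample rate are illustrative choices.

```python
# Aliasing sketch: a 20 kHz sine sampled at 32 kHz yields the same samples
# as a 12 kHz sine (with inverted sign), because 20 kHz folds back to 12 kHz.
import math

FS = 32_000          # sample rate (Hz), illustrative
F_IN = 20_000        # input tone above Nyquist (FS / 2 = 16 kHz)

def alias_frequency(f: float, fs: float) -> float:
    """Frequency at which a tone f appears after sampling at rate fs."""
    return abs(f - fs * round(f / fs))

samples_true = [math.sin(2 * math.pi * F_IN * n / FS) for n in range(8)]
f_alias = alias_frequency(F_IN, FS)                       # 12 kHz
samples_alias = [-math.sin(2 * math.pi * f_alias * n / FS) for n in range(8)]

print("alias frequency:", f_alias)
print("max sample difference:",
      max(abs(a - b) for a, b in zip(samples_true, samples_alias)))
```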
Aliasing is generally avoided by applying low-pass filters or anti-aliasing filters (AAF) to the input signal before sampling and when converting a signal from a higher to a lower sampling rate. Suitable reconstruction filtering should then be used when restoring the sampled signal to the continuous domain or converting a signal from a lower to a higher sampling rate. For spatial anti-aliasing, the types of anti-aliasing include fast approximate anti-aliasing (FXAA), multisample anti-aliasing, and supersampling. Description When a digital image is viewed, a reconstruction is performed by a display or printer device, and by the eyes and the brain. If the image data is processed in some way during sampling or reconstruction, the reconstructed image will differ from the original image, and an alias is seen. An example of spatial aliasing is the moiré pattern observed in a poorly pixelized image of a brick wall. Spatial anti-aliasing techniques avoid such poor pixelizations. Aliasing can be caused either by the sampling stage or the reconstruction stage; these may be distinguished by calling sampling aliasing prealiasing and reconstruction aliasing postaliasing. Temporal aliasing is a major concern in the sampling of video and audio signals. Music, for instance, may contain high-frequency components that are inaudible to humans. If a piece of music is sampled at 32,000 samples per sec" https://en.wikipedia.org/wiki/European%20Society%20for%20Mathematics%20and%20the%20Arts,"European Society for Mathematics and the Arts (ESMA) is a European society dedicated to promoting mathematics and the arts. The first conference of ESMA took place in July 2010 at the Institut Henri Poincaré in Paris." https://en.wikipedia.org/wiki/Output%20compare,"Output compare is the ability to trigger an output based on a timestamp in memory, without interrupting the execution of code by a processor or microcontroller. This is a functionality provided by many embedded systems. The corresponding ability to record a timestamp in memory when an input occurs is called input capture. Embedded systems Microchip Documentation on Output Compare: DS39706A-page 16-1 - Section 16. Output Compare http://ww1.microchip.com/downloads/en/DeviceDoc/39706a.pdf" https://en.wikipedia.org/wiki/Argument%20%28complex%20analysis%29,"In mathematics (particularly in complex analysis), the argument of a complex number z, denoted arg(z), is the angle between the positive real axis and the line joining the origin and z, represented as a point in the complex plane, shown as φ in Figure 1. It is a multivalued function operating on the nonzero complex numbers. To define a single-valued function, the principal value of the argument (sometimes denoted Arg z) is used. It is often chosen to be the unique value of the argument that lies within the interval (−π, π]. Definition An argument of the complex number z = x + iy, denoted arg(z), is defined in two equivalent ways: Geometrically, in the complex plane, as the 2D polar angle φ from the positive real axis to the vector representing z. The numeric value is given by the angle in radians, and is positive if measured counterclockwise. Algebraically, as any real quantity φ such that z = r(cos φ + i sin φ) for some positive real r (see Euler's formula). The quantity r is the modulus (or absolute value) of z, denoted |z|: r = √(x² + y²). The names magnitude, for the modulus, and phase, for the argument, are sometimes used equivalently. 
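A short sketch of the argument as defined above: the principal value Arg z in (−π, π] is what math.atan2 (or cmath.phase) returns, and every other argument differs from it by an integer multiple of 2π. The helper function names are mine, not from the source.

```python
# Principal argument of a complex number and its multivalued nature.
import cmath
import math

def principal_arg(z: complex) -> float:
    """Arg z in the interval (-pi, pi], computed from the real and imaginary parts."""
    return math.atan2(z.imag, z.real)

def arguments(z: complex, k_range=range(-2, 3)):
    """A few members of the full multivalued set arg z = Arg z + 2*pi*k."""
    return [principal_arg(z) + 2 * math.pi * k for k in k_range]

z = -1 + 1j
print("Arg z           :", principal_arg(z))          # 3*pi/4
print("cmath.phase(z)  :", cmath.phase(z))            # same value
print("modulus |z|     :", abs(z))                    # sqrt(2)
print("other arguments :", [round(a, 3) for a in arguments(z)])
```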
Under both definitions, it can be seen that the argument of any non-zero complex number has many possible values: firstly, as a geometrical angle, it is clear that whole circle rotations do not change the point, so angles differing by an integer multiple of 2π radians (a complete circle) are the same, as reflected by figure 2 on the right. Similarly, from the periodicity of sin and cos, the second definition also has this property. The argument of zero is usually left undefined. Alternative definition The complex argument can also be defined algebraically in terms of complex roots as: This definition removes reliance on other difficult-to-compute functions such as arctangent as well as eliminating the need for the piecewise definition. Because it is defined in terms of roots, it also inherits the principal branch of square root as its own principal branch. The normalization of z by dividing by |z| is" https://en.wikipedia.org/wiki/Deconvolution,"In mathematics, deconvolution is the operation inverse to convolution. Both operations are used in signal processing and image processing. For example, it may be possible to recover the original signal after a filter (convolution) by using a deconvolution method with a certain degree of accuracy. Due to the measurement error of the recorded signal or image, it can be demonstrated that the worse the signal-to-noise ratio (SNR), the worse the reversing of a filter will be; hence, inverting a filter is not always a good solution as the error amplifies. Deconvolution offers a solution to this problem. The foundations for deconvolution and time-series analysis were largely laid by Norbert Wiener of the Massachusetts Institute of Technology in his book Extrapolation, Interpolation, and Smoothing of Stationary Time Series (1949). The book was based on work Wiener had done during World War II but that had been classified at the time. Some of the early attempts to apply these theories were in the fields of weather forecasting and economics. Description In general, the objective of deconvolution is to find the solution f of a convolution equation of the form: f ∗ g = h. Usually, h is some recorded signal, and f is some signal that we wish to recover, but has been convolved with a filter or distortion function g before we recorded it. Usually, h is a distorted version of f and the shape of f can't be easily recognized by the eye or simpler time-domain operations. The function g represents the impulse response of an instrument or a driving force that was applied to a physical system. If we know g, or at least know the form of g, then we can perform deterministic deconvolution. However, if we do not know g in advance, then we need to estimate it. This can be done using methods of statistical estimation or building the physical principles of the underlying system, such as the electrical circuit equations or diffusion equations. There are several deconvolution techniques, depend" https://en.wikipedia.org/wiki/Design%20closure,"Design Closure is a part of the digital electronic design automation workflow by which an integrated circuit (i.e. VLSI) design is modified from its initial description to meet a growing list of design constraints and objectives. Every step in the IC design (such as static timing analysis, placement, routing, and so on) is already complex and often forms its own field of study. This article, however, looks at the overall design closure process, which takes a chip from its initial design state to the final form in which all of its design constraints are met. 
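As a concrete illustration of the deterministic case described in the deconvolution passage above, recovering f from h = f ∗ g when g is known, here is a minimal inverse-filtering sketch that divides in the frequency domain; the signals are synthetic and the method is the textbook approach, not one attributed to the source. With noise added to h the same division amplifies the error, which is the SNR caveat the passage raises.

```python
# Naive deterministic deconvolution by division in the frequency domain.
import numpy as np

f = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # signal we wish to recover
g = np.array([0.5, 0.3, 0.2])                        # known impulse response (filter)
h = np.convolve(f, g)                                # recorded signal h = f * g

n = len(h)
F_est = np.fft.fft(h) / np.fft.fft(g, n)             # divide out the filter
f_est = np.real(np.fft.ifft(F_est))[:len(f)]         # trim back to the original length

print("recovered f:", np.round(f_est, 6))
# With measurement noise added to h, the same division amplifies the noise,
# which is why regularised methods (e.g. Wiener deconvolution) are used in practice.
```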
Introduction Every chip starts off as someone’s idea of a good thing: ""If we can make a part that performs function X, we will all be rich!"" Once the concept is established, someone from marketing says ""To make this chip profitably, it must cost $C and run at frequency F."" Someone from manufacturing says ""To meet this chip’s targets, it must have a yield of Y%."" Someone from packaging says “It must fit in the P package and dissipate no more than W watts.” Eventually, the team generates an extensive list of all the constraints and objectives they must meet to manufacture a product that can be sold profitably. The management then forms a design team, which consists of chip architects, logic designers, functional verification engineers, physical designers, and timing engineers, and assigns them to create a chip to the specifications. Constraints vs Objectives The distinction between constraints and objectives is straightforward: a constraint is a design target that must be met for the design to be successful. For example, a chip may be required to run at a specific frequency so it can interface with other components in a system. In contrast, an objective is a design target where more (or less) is better. For example, yield is generally an objective, which is maximized to lower manufacturing cost. For the purposes of design closure, the distinction between constraints and objectives is not important; this artic" https://en.wikipedia.org/wiki/Holozoic%20nutrition,"Holozoic nutrition (Greek: holo-whole ; zoikos-of animals) is a type of heterotrophic nutrition that is characterized by the internalization (ingestion) and internal processing of liquids or solid food particles. Protozoa, such as amoebas, and most free-living animals, such as humans, exhibit this type of nutrition, in which food is taken into the body as a liquid or solid and then further broken down. Most animals exhibit this kind of nutrition. In holozoic nutrition, the energy and organic building blocks are obtained by ingesting and then digesting other organisms or pieces of other organisms, including blood and decaying organic matter. This contrasts with holophytic nutrition, in which energy and organic building blocks are obtained through photosynthesis or chemosynthesis, and with saprozoic nutrition, in which digestive enzymes are released externally and the resulting monomers (small organic molecules) are absorbed directly from the environment. There are several stages of holozoic nutrition, which often occur in separate compartments within an organism (such as the stomach and intestines): Ingestion: In animals, this is merely taking food in through the mouth. In protozoa, this most commonly occurs through phagocytosis. Digestion: The physical breakdown of large and complex organic food particles and the enzymatic breakdown of complex organic compounds into small, simple molecules. Absorption: The active and passive transport of the chemical products of digestion out of the food-containing compartment and into the body. Assimilation: The use of the chemical products for various metabolic processes" https://en.wikipedia.org/wiki/IAR%20Systems,"IAR Systems is a Swedish computer software company that offers development tools for embedded systems. IAR Systems was founded in 1983, and is listed on Nasdaq Nordic in Stockholm. IAR is an abbreviation of Ingenjörsfirma Anders Rundgren, which means Anders Rundgren Engineering Company. 
IAR Systems develops C and C++ language compilers, debuggers, and other tools for developing and debugging firmware for 8-, 16-, and 32-bit processors. The firm began in the 8-bit market, but moved into the expanding 32-bit market, more so for 32-bit microcontrollers. IAR Systems is headquartered in Uppsala, Sweden, and has more than 200 employees globally. The company operates subsidiaries in Germany, France, Japan, South Korea, China, United States, and United Kingdom and reaches the rest of the world through distributors. IAR Systems is a subsidiary of IAR Systems Group. Products IAR Embedded Workbench – a development environment that includes a C/C++ compiler, code analysis tools C-STAT and C-RUN, security tools C-Trust and Embedded Trust, and debugging and trace probes Functional Safety Certification option Visual State – a design tool for developing event-driven programming systems based on the event-driven finite-state machine paradigm. IAR Visual State presents the developer with the finite-state machine subset of Unified Modeling Language (UML) for C/C++ code generation. By restricting the design abilities to state machines, it is possible to employ formal model checking to find and flag unwanted properties like state dead-ends and unreachable parts of the design. It is not a full UML editor. IAR KickStart Kit – a series of software and hardware evaluation environments based on various microcontrollers. IAR Embedded Workbench The toolchain IAR Embedded Workbench, which supports more than 30 different processor families, is a complete integrated development environment (IDE) with compiler, analysis tools, debugger, functional safety, and security. The development too" https://en.wikipedia.org/wiki/List%20of%20Banach%20spaces,"In the mathematical field of functional analysis, Banach spaces are among the most important objects of study. In other areas of mathematical analysis, most spaces which arise in practice turn out to be Banach spaces as well. Classical Banach spaces According to , the classical Banach spaces are those defined by , which is the source for the following table. Banach spaces in other areas of analysis The Asplund spaces The Hardy spaces The space of functions of bounded mean oscillation The space of functions of bounded variation Sobolev spaces The Birnbaum–Orlicz spaces Hölder spaces Lorentz space Banach spaces serving as counterexamples James' space, a Banach space that has a Schauder basis, but has no unconditional Schauder Basis. Also, James' space is isometrically isomorphic to its double dual, but fails to be reflexive. Tsirelson space, a reflexive Banach space in which neither nor can be embedded. W.T. Gowers construction of a space that is isomorphic to but not serves as a counterexample for weakening the premises of the Schroeder–Bernstein theorem See also Notes" https://en.wikipedia.org/wiki/Paradox%20of%20the%20plankton,"In aquatic biology, the paradox of the plankton describes the situation in which a limited range of resources supports an unexpectedly wide range of plankton species, apparently flouting the competitive exclusion principle which holds that when two species compete for the same resource, one will be driven to extinction. 
Ecological paradox The paradox of the plankton results from the clash between the observed diversity of plankton and the competitive exclusion principle, also known as Gause's law, which states that, when two species compete for the same resource, ultimately only one will persist and the other will be driven to extinction. Coexistence between two such species is impossible because the dominant one will inevitably deplete the shared resources, thus decimating the inferior population. Phytoplankton life is diverse at all phylogenetic levels despite the limited range of resources (e.g. light, nitrate, phosphate, silicic acid, iron) for which they compete amongst themselves. The paradox of the plankton was originally described in 1961 by G. Evelyn Hutchinson, who proposed that the paradox could be resolved by factors such as vertical gradients of light or turbulence, symbiosis or commensalism, differential predation, or constantly changing environmental conditions. Later studies found that the paradox can be resolved by factors such as: zooplankton grazing pressure; chaotic fluid motion; size-selective grazing; spatio-temporal heterogeneity; bacterial mediation; or environmental fluctuations. In general, researchers suggest that ecological and environmental factors continually interact such that the planktonic habitat never reaches an equilibrium for which a single species is favoured. While it was long assumed that turbulence disrupts plankton patches at spatial scales less than a few metres, researchers using small-scale analysis of plankton distribution found that these exhibited patches of aggregation — on the order of 10 cm — that had suffic" https://en.wikipedia.org/wiki/Nocturnality,"Nocturnality is a behavior in some non-human animals characterized by being active during the night and sleeping during the day. The common adjective is ""nocturnal"", versus diurnal meaning the opposite. Nocturnal creatures generally have highly developed senses of hearing, smell, and specially adapted eyesight. Some animals, such as cats and ferrets, have eyes that can adapt to both low-level and bright day levels of illumination (see metaturnal). Others, such as bushbabies and (some) bats, can function only at night. Many nocturnal creatures including tarsiers and some owls have large eyes in comparison with their body size to compensate for the lower light levels at night. More specifically, they have been found to have a larger cornea relative to their eye size than diurnal creatures to increase their : in the low-light conditions. Nocturnality helps wasps, such as Apoica flavissima, avoid hunting in intense sunlight. Diurnal animals, including humans (except for night owls), squirrels and songbirds, are active during the daytime. Crepuscular species, such as rabbits, skunks, tigers and hyenas, are often erroneously referred to as nocturnal. Cathemeral species, such as fossas and lions, are active both in the day and at night. Origins While it is difficult to say which came first, nocturnality or diurnality, a hypothesis in evolutionary biology, the nocturnal bottleneck theory, postulates that in the Mesozoic, many ancestors of modern-day mammals evolved nocturnal characteristics in order to avoid contact with the numerous diurnal predators. A recent study attempts to answer the question as to why so many modern day mammals retain these nocturnal characteristics even though they are not active at night. 
The leading answer is that the high visual acuity that comes with diurnal characteristics is not needed anymore due to the evolution of compensatory sensory systems, such as a heightened sense of smell and more astute auditory systems. In a recent study, rece" https://en.wikipedia.org/wiki/Camera%20trap,"A camera trap is a camera that is automatically triggered by a change in some activity in its vicinity, like the presence of an animal or a human being. It is typically equipped with a motion sensor – usually a passive infrared (PIR) sensor or an active infrared (AIR) sensor using an infrared light beam. Camera trapping is a method for capturing wild animals on film when researchers are not present, and has been used in ecological research for decades. In addition to applications in hunting and wildlife viewing, research applications include studies of nest ecology, detection of rare species, estimation of population size and species richness, and research on habitat use and occupation of human-built structures. Camera traps, also known as trail cameras, are used to capture images of wildlife with as little human interference as possible. Since the introduction of commercial infrared-triggered cameras in the early 1990s, their use has increased. With advancements in the quality of camera equipment, this method of field observation has become more popular among researchers. Hunting has played an important role in development of camera traps, since hunters use them to scout for game. These hunters have opened a commercial market for the devices, leading to many improvements over time. Application The great advantage of camera traps is that they can record very accurate data without disturbing the photographed animal. These data are superior to human observations because they can be reviewed by other researchers. They minimally disturb wildlife and can replace the use of more invasive survey and monitoring techniques such as live trap and release. They operate continually and silently, provide proof of species present in an area, can reveal what prints and scats belong to which species, provide evidence for management and policy decisions, and are a cost-effective monitoring tool. Infrared flash cameras have low disturbance and visibility. Besides olfactory and aco" https://en.wikipedia.org/wiki/Palmitate%20mediated%20localization,"Palmitate mediated localization is a biological process that trafficks a palmitoylated protein to ordered lipid domains. Biological function One function is thought to cluster proteins to increase the efficiency of protein-protein interactions and facilitate biological processes. In the opposite scenario palmitate mediated localization sequesters proteins away from a non-localized molecule. In theory, disruption of palmitate mediated localization then allows a transient interaction of two molecules through lipid mixing. In the case of an enzyme, palmitate can sequester an enzyme away from its substrate. Disruption of palmitate mediated localization then activates the enzyme by substrate presentation. Mechanism of sequestration Palmitate mediated localization utilizes lipid partitioning and the formation of lipid rafts. Sequestration of palmitoylated proteins is regulated by cholesterol. Depletion of cholesterol with methyl-beta cyclodextrin disrupts palmitate mediated localization." 
https://en.wikipedia.org/wiki/P2PTV,"P2PTV refers to peer-to-peer (P2P) software applications designed to redistribute video streams in real time on a P2P network; the distributed video streams are typically TV channels from all over the world but may also come from other sources. The draw to these applications is significant because they have the potential to make any TV channel globally available by any individual feeding the stream into the network, where each peer joining to watch the video is a relay to other peer viewers, allowing a scalable distribution among a large audience with no incremental cost for the source. Technology and use In a P2PTV system, each user, while downloading a video stream, is simultaneously also uploading that stream to other users, thus contributing to the overall available bandwidth. The arriving streams are typically a few minutes time-delayed compared to the original sources. The video quality of the channels usually depends on how many users are watching; the video quality is better if there are more users. The architecture of many P2PTV networks can be thought of as real-time versions of BitTorrent: if a user wishes to view a certain channel, the P2PTV software contacts a ""tracker server"" for that channel in order to obtain addresses of peers who distribute that channel; it then contacts these peers to receive the feed. The tracker records the user's address, so that it can be given to other users who wish to view the same channel. In effect, this creates an overlay network on top of the regular internet for the distribution of real-time video content. The need for a tracker can also be eliminated by the use of distributed hash table technology. Some applications allow users to broadcast their own streams, whether self-produced, obtained from a video file, or through a TV tuner card or video capture card. Many of the commercial P2PTV applications were developed in China (TVUPlayer, PPLive, QQLive, PPStream). The majority of available applications broadcast mainly" https://en.wikipedia.org/wiki/Byte%20addressing,"Byte addressing in hardware architectures supports accessing individual bytes. Computers with byte addressing are sometimes called byte machines, in contrast to word-addressable architectures, word machines, that access data by word. Background The basic unit of digital storage is a bit, storing a single 0 or 1. Many common instruction set architectures can address more than 8 bits of data at a time. For example, 32-bit x86 processors have 32-bit general-purpose registers and can handle 32-bit (4-byte) data in single instructions. However, data in memory may be of various lengths. Instruction sets that support byte addressing support accessing data in units that are narrower than the word length. An eight-bit processor like the Intel 8008 addresses eight bits, but as this is the full width of the accumulator and other registers, it could be considered either byte-addressable or word-addressable. 32-bit x86 processors, which address memory in 8-bit units but have 32-bit general-purpose registers and can operate on 32-bit items with a single instruction, are byte-addressable. The advantage of word addressing is that more memory can be addressed in the same number of bits. The IBM 7094 has 15-bit addresses, so it could address 32,768 words of 36 bits. The machines were often built with a full complement of addressable memory. Addressing 32,768 bytes of 6 bits would have been much less useful for scientific and engineering users.
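A quick check of the addressing arithmetic in the byte-addressing passage above: with a fixed number of address bits, word addressing reaches word-size times more memory than byte addressing. The helper function is a trivial illustration, not code from the source.

```python
# Addressable memory for byte- versus word-addressed machines.

def addressable_bytes(address_bits: int, unit_bytes: int) -> int:
    """Total bytes reachable when each address names one unit of unit_bytes bytes."""
    return (2 ** address_bits) * unit_bytes

# IBM 7094: 15-bit addresses name 36-bit words.
print("IBM 7094 words     :", 2 ** 15)                          # 32,768 words
# 32-bit machine, byte addressed (the usual x86 case): 4 GiB.
print("32-bit, byte units :", addressable_bytes(32, 1))         # 4,294,967,296
# 32-bit machine, hypothetical 4-byte-word addressing: 16 GiB.
print("32-bit, word units :", addressable_bytes(32, 4))         # 17,179,869,184
```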
Or consider 32-bit x86 processors. Their 32-bit linear addresses can address 4 billion different items. Using word addressing, a 32-bit processor could address 4 Gigawords; or 16 Gigabytes using the modern 8-bit byte. If the 386 and its successors had used word addressing, scientists, engineers, and gamers could all have run programs that were 4x larger on 32-bit machines. However, word processing, rendering HTML, and all other text applications would have run more slowly. When computers were so costly that they were only or mainly used" https://en.wikipedia.org/wiki/Computer-aided%20maintenance,"Computer-aided maintenance (not to be confused with CAM which usually stands for Computer Aided Manufacturing) refers to systems that utilize software to organize planning, scheduling, and support of maintenance and repair. A common application of such systems is the maintenance of computers, either hardware or software, themselves. It can also apply to the maintenance of other complex systems that require periodic maintenance, such as reminding operators that preventive maintenance is due or even predicting when such maintenance should be performed based on recorded past experience. Computer aided configuration The first computer-aided maintenance software came from DEC in the 1980s to configure VAX computers. The software was built using the techniques of artificial intelligence expert systems, because the problem of configuring a VAX required expert knowledge. During the research, the software was called R1 and was renamed XCON when placed in service. Fundamentally, XCON was a rule-based configuration database written as an expert system using forward chaining rules. As one of the first expert systems to be pressed into commercial service it created high expectations, which did not materialize, as DEC lost commercial pre-eminence. Help Desk software Help desks frequently use help desk software that captures symptoms of a bug and relates them to fixes, in a fix database. One of the problems with this approach is that the understanding of the problem is embodied in a non-human way, so that solutions are not unified. Strategies for finding fixes The bubble-up strategy simply records pairs of symptoms and fixes. The most frequent set of pairs is then presented as a tentative solution, which is then attempted. If the fix works, that fact is further recorded, along with the configuration of the presenting system, into a solutions database. Oddly enough, shutting down and booting up again manages to 'fix,' or at least 'mask,' a bug in many computer-based systems;" https://en.wikipedia.org/wiki/Ayanna%20Williams,"Ayanna Williams is an American who holds the world record for the longest fingernails ever reached on a single hand for a woman, with a combined length of 576.4 centimeters (181.09 inches). She is also ranked second in the list of having longest fingernails in the world considering both genders, just behind India's Shridhar Chillal who had a combined length of 1000.6 centimeters (358.1 inches). Ayanna was awarded the Guinness World Record in 2018 for being the woman with the longest finger nails in the world. Biography Ayanna pursued her interest in growing nails and engaged in nail art during her young age as a kid. She spent over 2 months to grow her nails without cutting them. Although proud of her record-breaking nails, Ayanna has faced increasing difficulties due to the weight of her finger nails. 
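A minimal sketch of the 'bubble-up' strategy described in the computer-aided maintenance passage above: record symptom and fix pairs, and propose the fix most often recorded as working for a given symptom. The class and method names are illustrative assumptions.

```python
# Bubble-up sketch: count (symptom, fix) pairs and surface the most frequent fix.
from collections import Counter

class FixDatabase:
    def __init__(self):
        self.pair_counts = Counter()

    def record(self, symptom: str, fix: str, worked: bool = True):
        """Log an attempted fix; only successful attempts strengthen the pairing."""
        if worked:
            self.pair_counts[(symptom, fix)] += 1

    def suggest(self, symptom: str):
        """Return the fix most often recorded as working for this symptom, if any."""
        candidates = {fix: n for (s, fix), n in self.pair_counts.items() if s == symptom}
        return max(candidates, key=candidates.get) if candidates else None

db = FixDatabase()
db.record("won't boot", "reseat memory")
db.record("won't boot", "reboot")
db.record("won't boot", "reboot")
print(db.suggest("won't boot"))   # 'reboot' bubbles up as the tentative solution
```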
She had difficulty with day-to-day activities such as washing dishes and putting sheets on a bed. In 2021, she decided to cut her nails. On 9 April 2021, she had her fingernails cut by Allison Readinger of Trinity Vista Dermatology using an electronic rotary power tool at the Ripley's Believe It or Not! museum in New York City, where the nails had been put on display for the public. The nails were measured one last time in 2021, reaching 733.55 centimeters (240.7 inches), before they were cut. See also Lee Redmond, who held the record for the longest fingernails on both hands." https://en.wikipedia.org/wiki/Competitive%20exclusion%20principle,"In ecology, the competitive exclusion principle, sometimes referred to as Gause's law, is a proposition that two species which compete for the same limited resource cannot coexist at constant population values. When one species has even the slightest advantage over another, the one with the advantage will dominate in the long term. This leads either to the extinction of the weaker competitor or to an evolutionary or behavioral shift toward a different ecological niche. The principle has been paraphrased in the maxim ""complete competitors can not coexist"". History The competitive exclusion principle is classically attributed to Georgy Gause, although he actually never formulated it. The principle is already present in Darwin's theory of natural selection. Throughout its history, the status of the principle has oscillated between a priori ('two species coexisting must have different niches') and experimental truth ('we find that species coexisting do have different niches'). Experimental basis Based on field observations, Joseph Grinnell formulated the principle of competitive exclusion in 1904: ""Two species of approximately the same food habits are not likely to remain long evenly balanced in numbers in the same region. One will crowd out the other"". Georgy Gause formulated the law of competitive exclusion based on laboratory competition experiments using two species of Paramecium, P. aurelia and P. caudatum. The conditions were to add fresh water every day and input a constant flow of food. Although P. caudatum initially dominated, P. aurelia recovered and subsequently drove P. caudatum extinct via exploitative resource competition. However, Gause was able to let the P. caudatum survive by varying the environmental parameters (food, water). Thus, Gause's law is valid only if the ecological factors are constant. Gause also studied competition between two species of yeast, finding that Saccharomyces cerevisiae consistently outcompeted Schizosaccharomyces kefir " https://en.wikipedia.org/wiki/Windows%20Vista%20networking%20technologies,"In computing, Microsoft's Windows Vista and Windows Server 2008 introduced in 2007/2008 a new networking stack named Next Generation TCP/IP stack, to improve on the previous stack in several ways. The stack includes native implementation of IPv6, as well as a complete overhaul of IPv4. The new TCP/IP stack uses a new method to store configuration settings that enables more dynamic control and does not require a computer restart after a change in settings. The new stack, implemented as a dual-stack model, depends on a strong host-model and features an infrastructure to enable more modular components that one can dynamically insert and remove. Architecture The Next Generation TCP/IP stack connects to NICs via a Network Driver Interface Specification (NDIS) driver. 
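The competitive exclusion passage above describes Gause's experiments qualitatively; a standard way to illustrate the principle numerically is the Lotka–Volterra competition model, which is used in the sketch below and is not a model the passage itself presents. All parameter values are arbitrary.

```python
# Competitive exclusion illustrated with the standard Lotka-Volterra competition model
# (two species sharing one niche: full competition, different carrying capacities).

def simulate(n1=10.0, n2=10.0, r1=0.9, r2=0.8, k1=120.0, k2=100.0,
             a12=1.0, a21=1.0, dt=0.01, steps=20000):
    """Simple forward-Euler integration of the two competition equations."""
    for _ in range(steps):
        dn1 = r1 * n1 * (1 - (n1 + a12 * n2) / k1)
        dn2 = r2 * n2 * (1 - (n2 + a21 * n1) / k2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

n1, n2 = simulate()
print(f"species 1: {n1:.1f}   species 2: {n2:.1f}")
# With identical resource use (a12 = a21 = 1) the species with the higher carrying
# capacity approaches it, while its complete competitor is driven toward zero.
```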
The network stack, implemented in tcpip.sys implements the Transport, Network and Data link layers of the TCP/IP model. The Transport layer includes implementations for TCP, UDP and unformatted RAW protocols. At the Network layer, IPv4 and IPv6 protocols are implemented in a dual-stack architecture. And the Data link layer (also called Framing layer) implements 802.3, 802.1, PPP, Loopback and tunnelling protocols. Each layer can accommodate Windows Filtering Platform (WFP) shims, which allows packets at that layer to be introspected and also host the WFP Callout API. The networking API is exposed via three components: Winsock A user mode API for abstracting network communication using sockets and ports. Datagram sockets are used for UDP, whereas Stream sockets are for TCP. While Winsock is a user mode library, it uses a kernel mode driver, called Ancillary Function Driver (AFD) to implement certain functionality. Winsock Kernel (WSK) A kernel-mode API providing the same socket-and-port abstraction as Winsock, while exposing other features such as Asynchronous I/O using I/O request packets. Transport Driver Interface (TDI) A kernel-mode API which can be used for legacy protocols like NetBIOS. It includes a " https://en.wikipedia.org/wiki/Monolithic%20microwave%20integrated%20circuit,"Monolithic microwave integrated circuit, or MMIC (sometimes pronounced ""mimic""), is a type of integrated circuit (IC) device that operates at microwave frequencies (300 MHz to 300 GHz). These devices typically perform functions such as microwave mixing, power amplification, low-noise amplification, and high-frequency switching. Inputs and outputs on MMIC devices are frequently matched to a characteristic impedance of 50 ohms. This makes them easier to use, as cascading of MMICs does not then require an external matching network. Additionally, most microwave test equipment is designed to operate in a 50-ohm environment. MMICs are dimensionally small (from around 1 mm² to 10 mm²) and can be mass-produced, which has allowed the proliferation of high-frequency devices such as cellular phones. MMICs were originally fabricated using gallium arsenide (GaAs), a III-V compound semiconductor. It has two fundamental advantages over silicon (Si), the traditional material for IC realisation: device (transistor) speed and a semi-insulating substrate. Both factors help with the design of high-frequency circuit functions. However, the speed of Si-based technologies has gradually increased as transistor feature sizes have reduced, and MMICs can now also be fabricated in Si technology. The primary advantage of Si technology is its lower fabrication cost compared with GaAs. Silicon wafer diameters are larger (typically 8"" to 12"" compared with 4"" to 8"" for GaAs) and the wafer costs are lower, contributing to a less expensive IC. Originally, MMICs used metal-semiconductor field-effect transistors (MESFETs) as the active device. More recently high-electron-mobility transistor (HEMTs), pseudomorphic HEMTs and heterojunction bipolar transistors have become common. Other III-V technologies, such as indium phosphide (InP), have been shown to offer superior performance to GaAs in terms of gain, higher cutoff frequency, and low noise. However, they also tend to be more expensive due to smal" https://en.wikipedia.org/wiki/Adam%27s%20apple,"The Adam's apple or laryngeal prominence is the protrusion in the human neck formed by the angle of the thyroid cartilage surrounding the larynx, typically visible in men, less frequently in women. 
The prominence of the Adam's apple increases as a secondary male sex characteristic in puberty. Structure The topographic structure which is externally visible and colloquially called the ""Adam's apple"" is caused by an anatomical structure of the thyroid cartilage called the laryngeal prominence or laryngeal protuberance protruding and forming a ""bump"" under the skin at the front of the throat. All human beings with a normal anatomy have a laryngeal protuberance of the thyroid cartilage. This prominence is typically larger and more externally noticeable in adult males. There are two reasons for this phenomenon. Firstly, the structural size of the thyroid cartilage in males tends to increase during puberty, and the laryngeal protuberance becomes more anteriorly focused. Secondly, the larynx, which the thyroid cartilage partially envelops, increases in size in male subjects during adolescence, moving the thyroid cartilage and its laryngeal protuberance towards the front of the neck. The adolescent development of both the larynx and the thyroid cartilage in males occur as a result of hormonal changes, especially the normal increase in testosterone production in adolescent males. In females, the laryngeal protuberance sits on the upper edge of the thyroid cartilage, and the larynx tends to be smaller in size, and so the ""bump"" caused by protrusion of the laryngeal protuberance is much less visible or not discernible. Even so, many women display an externally visible protrusion of the thyroid cartilage, an ""Adam's apple"", to varying degrees which are usually minor, and this should not normally be viewed as a medical disorder. Function The Adam's apple, in relation with the thyroid cartilage which forms it, helps protect the walls and the frontal part of the larynx, includin" https://en.wikipedia.org/wiki/Language%20of%20mathematics,"The language of mathematics or mathematical language is an extension of the natural language (for example English) that is used in mathematics and in science for expressing results (scientific laws, theorems, proofs, logical deductions, etc) with concision, precision and unambiguity. Features The main features of the mathematical language are the following. Use of common words with a derived meaning, generally more specific and more precise. For example, ""or"" means ""one, the other or both"", while, in common language, ""both"" is sometimes included and sometimes not. Also, a ""line"" is straight and has zero width. Use of common words with a meaning that is completely different from their common meaning. For example, a mathematical ring is not related to any other meaning of ""ring"". Real numbers and imaginary numbers are two sorts of numbers, none being more real or more imaginary than the others. Use of neologisms. For example polynomial, homomorphism. Use of symbols as words or phrases. For example, and are respectively read as "" equals "" and Use of formulas as part of sentences. For example: "" represents quantitatively the mass–energy equivalence."" A formula that is not included in a sentence is generally meaningless, since the meaning of the symbols may depend on the context: in this is the context that specifies that is the energy of a physical body, is its mass, and is the speed of light. Use of mathematical jargon that consists of phrases that are used for informal explanations or shorthands. For example, ""killing"" is often used in place of ""replacing with zero"", and this led to the use of assassinator and annihilator as technical words. 
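Note: the symbol and formula examples in the feature list above lost their notation in extraction. A plausible reconstruction of the formula cited for the mass–energy equivalence, inferred only from the surrounding description (energy of a body, its mass, the speed of light), is:

\[
  E = mc^{2},
\]

read as part of a sentence, for example "the formula $E = mc^{2}$ represents quantitatively the mass–energy equivalence", where the context specifies that $E$ is the energy of a physical body, $m$ is its mass, and $c$ is the speed of light.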
Understanding mathematical text The consequence of these features is that a mathematical text is generally not understandable without some prerequisite knowledge. For example the sentence ""a free module is a module that has a basis"" is perfectly correct, although it appears only as a grammatically correct nonsense, " https://en.wikipedia.org/wiki/Organography,"Organography (from Greek , organo, ""organ""; and , -graphy) is the scientific description of the structure and function of the organs of living things. History Organography as a scientific study starts with Aristotle, who considered the parts of plants as ""organs"" and began to consider the relationship between different organs and different functions. In the 17th century Joachim Jung, clearly articulated that plants are composed of different organ types such as root, stem and leaf, and he went on to define these organ types on the basis of form and position. In the following century Caspar Friedrich Wolff was able to follow the development of organs from the ""growing points"" or apical meristems. He noted the commonality of development between foliage leaves and floral leaves (e.g. petals) and wrote: ""In the whole plant, whose parts we wonder at as being, at the first glance, so extraordinarily diverse, I finally perceive and recognize nothing beyond leaves and stem (for the root may be regarded as a stem). Consequently all parts of the plant, except the stem, are modified leaves."" Similar views were propounded at by Goethe in his well-known treatise. He wrote: ""The underlying relationship between the various external parts of the plant, such as the leaves, the calyx, the corolla, the stamens, which develop one after the other and, as it were, out of one another, has long been generally recognized by investigators, and has in fact been specially studied; and the operation by which one and the same organ presents itself to us in various forms has been termed Metamorphosis of Plants."" See also morphology (biology)" https://en.wikipedia.org/wiki/Single-core,"A single-core processor is a microprocessor with a single core on its die. It performs the fetch-decode-execute cycle once per clock-cycle, as it only runs on one thread. A computer using a single core CPU is generally slower than a multi-core system. Single core processors used to be widespread in desktop computers, but as applications demanded more processing power, the slower speed of single core systems became a detriment to performance. Windows supported single-core processors up until the release of Windows 11, where a dual-core processor is required. Single core processors are still in use in some niche circumstances. Some older legacy systems like those running antiquated operating systems (e.g. Windows 98) cannot gain any benefit from multi-core processors. Single core processors are also used in hobbyist computers like the Raspberry Pi and Single-board microcontrollers. The production of single-core desktop processors ended in 2013 with the Celeron G470. Development The first single core processor was the Intel 4004, which was commercially released on November 15, 1971 by Intel. Since then many improvements have been made to single core processors, going from the 740 KHz of the Intel 4004 to the 2 GHz Celeron G470. Advantages Single core processors draw less power than larger, multi-core processors. Single core processors can be made a lot more cheaply than multi core systems, meaning they can be used in embedded systems. 
Disadvantages Single core processors are generally outperformed by multi-core processors. Single core processors are more likely to bottleneck with faster peripheral components, as these components have to wait for the CPU to finish its cycle. Single core processors lack parallelisation, meaning only one application can run at once. This reduces performance as other processes have to wait for processor time, leading to process starvation. Increasing parallel trend Single-core one processor on a die. Since about 2012, e" https://en.wikipedia.org/wiki/Multi-project%20wafer%20service,"Multi-project chip (MPC), and multi-project wafer (MPW) semiconductor manufacturing arrangements allow customers to share mask and microelectronics wafer fabrication cost between several designs or projects. With the MPC arrangement, one chip is a combination of several designs and this combined chip is then repeated all over the wafer during the manufacturing. MPC arrangement produces typically roughly equal number of chip designs per wafer. With the MPW arrangement, different chip designs are aggregated on a wafer, with perhaps a different number of designs/projects per wafer. This is made possible with novel mask making and exposure systems in photolithography during IC manufacturing. MPW builds upon the older MPC procedures and enables more effective support for different phases and needs of manufacturing volumes of different designs/projects. MPW arrangement support education, research of new circuit architectures and structures, prototyping and even small volume production. Worldwide, several MPW services are available from companies, semiconductor foundries and from government-supported institutions. Originally both MPC and MPW arrangements were introduced for integrated circuit (IC) education and research; some MPC/MPW services/gateways are aimed for non-commercial use only. Currently MPC/MPW services are effectively used for system on a chip integration. Selecting the right service platform at the prototyping phase ensures gradual scaling up production via MPW services taking into account the rules of the selected service. MPC/MPW arrangements have also been applied to microelectromechanical systems (MEMS), integrated photonics like silicon photonics fabrication and microfluidics. A refinement of MPW is multi-layer mask (MLM) arrangement, where a limited number of masks (e.g. 4) are changed during manufacturing at exposure phase. The rest of the masks are the same from the chip to chip on the whole wafer. MLM approach is well suited for several specifi" https://en.wikipedia.org/wiki/Jim%20Williams%20%28analog%20designer%29,"James M. Williams (April 14, 1948 – June 12, 2011) was an analog circuit designer and technical author who worked for the Massachusetts Institute of Technology (1968–1979), Philbrick, National Semiconductor (1979–1982) and Linear Technology Corporation (LTC) (1982–2011). He wrote over 350 publications relating to analog circuit design, including five books, 21 application notes for National Semiconductor, 62 application notes for Linear Technology, and over 125 articles for EDN Magazine. Williams suffered a stroke on June 10 and died on June 12, 2011. Bibliography (partial) For a complete bibliography, see. See also Paul Brokaw Barrie Gilbert Howard Johnson (electrical engineer) Bob Pease — analog electronics engineer, technical author, and colleague. Pease died in an automobile accident after leaving Williams' memorial. 
Bob Widlar — pioneering analog integrated circuit designer, technical author, early consultant to Linear Technology Corporation Building 20 — legendary MIT building where Jim Williams had a design lab early in his career" https://en.wikipedia.org/wiki/Signal%20compression,"Signal compression is the use of various techniques to increase the quality or quantity of signal parameters transmitted through a given telecommunications channel. Types of signal compression include: Bandwidth compression Data compression Dynamic range compression Gain compression Image compression Lossy compression One-way compression function Compression Telecommunications techniques he:דחיסת אותות" https://en.wikipedia.org/wiki/Food%20choice,"Research into food choice investigates how people select the food they eat. An interdisciplinary topic, food choice comprises psychological and sociological aspects (including food politics and phenomena such as vegetarianism or religious dietary laws), economic issues (for instance, how food prices or marketing campaigns influence choice) and sensory aspects (such as the study of the organoleptic qualities of food). Factors that guide food choice include taste preference, sensory attributes, cost, availability, convenience, cognitive restraint, and cultural familiarity. In addition, environmental cues and increased portion sizes play a role in the choice and amount of foods consumed. Food choice is the subject of research in nutrition, food science, food psychology, anthropology, sociology, and other branches of the natural and social sciences. It is of practical interest to the food industry and especially its marketing endeavors. Social scientists have developed different conceptual frameworks of food choice behavior. Theoretical models of behavior incorporate both individual and environmental factors affecting the formation or modification of behaviors. Social cognitive theory examines the interaction of environmental, personal, and behavioral factors. Taste preference Researchers have found that consumers cite taste as the primary determinant of food choice. Genetic differences in the ability to perceive bitter taste are believed to play a role in the willingness to eat bitter-tasting vegetables and in the preferences for sweet taste and fat content of foods. Approximately 25 percent of the US population are supertasters and 50 percent are tasters. Epidemiological studies suggest that nontasters are more likely to eat a wider variety of foods and to have a higher body mass index (BMI), a measure of weight in kilograms divided by height in meters squared. Environmental influences Many environmental cues influence food choice and intake, although consumers m" https://en.wikipedia.org/wiki/List%20of%20conjectures%20by%20Paul%20Erd%C5%91s,"The prolific mathematician Paul Erdős and his various collaborators made many famous mathematical conjectures, over a wide field of subjects, and in many cases Erdős offered monetary rewards for solving them. Unsolved The Erdős–Gyárfás conjecture on cycles with lengths equal to a power of two in graphs with minimum degree 3. The Erdős–Hajnal conjecture that in a family of graphs defined by an excluded induced subgraph, every graph has either a large clique or a large independent set. The Erdős–Mollin–Walsh conjecture on consecutive triples of powerful numbers. The Erdős–Selfridge conjecture that a covering system with distinct moduli contains at least one even modulus. 
The Erdős–Straus conjecture on the Diophantine equation 4/n = 1/x + 1/y + 1/z. The Erdős conjecture on arithmetic progressions in sequences with divergent sums of reciprocals. The Erdős–Szekeres conjecture on the number of points needed to ensure that a point set contains a large convex polygon. The Erdős–Turán conjecture on additive bases of natural numbers. A conjecture on quickly growing integer sequences with rational reciprocal series. A conjecture with Norman Oler on circle packing in an equilateral triangle with a number of circles one less than a triangular number. The minimum overlap problem to estimate the limit of M(n). A conjecture that the ternary expansion of contains at least one digit 2 for every . Solved The Erdős–Faber–Lovász conjecture on coloring unions of cliques, proved (for all large n) by Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus. The Erdős sumset conjecture on sets, proven by Joel Moreira, Florian Karl Richter, Donald Robertson in 2018. The proof has appeared in ""Annals of Mathematics"" in March 2019. The Burr–Erdős conjecture on Ramsey numbers of graphs, proved by Choongbum Lee in 2015. A conjecture on equitable colorings proven in 1970 by András Hajnal and Endre Szemerédi and now known as the Hajnal–Szemerédi theorem. A co" https://en.wikipedia.org/wiki/Mechanism%20%28biology%29,"In the science of biology, a mechanism is a system of causally interacting parts and processes that produce one or more effects. Scientists explain phenomena by describing mechanisms that could produce the phenomena. For example, natural selection is a mechanism of biological evolution; other mechanisms of evolution include genetic drift, mutation, and gene flow. In ecology, mechanisms such as predation and host-parasite interactions produce change in ecological systems. In practice, no description of a mechanism is ever complete because not all details of the parts and processes of a mechanism are fully known. For example, natural selection is a mechanism of evolution that includes countless, inter-individual interactions with other individuals, components, and processes of the environment in which natural selection operates. Characterizations/ definitions Many characterizations/definitions of mechanisms in the philosophy of science/biology have been provided in the past decades. For example, one influential characterization of neuro- and molecular biological mechanisms by Peter K. Machamer, Lindley Darden and Carl Craver is as follows: mechanisms are entities and activities organized such that they are productive of regular changes from start to termination conditions. Other characterizations have been proposed by Stuart Glennan (1996, 2002), who articulates an interactionist account of mechanisms, and William Bechtel (1993, 2006), who emphasizes parts and operations. The characterization by Machemer et al. is as follows: mechanisms are entities and activities organized such that they are productive of changes from start conditions to termination conditions. There are three distinguishable aspects of this characterization: Ontic aspect The ontic constituency of biological mechanisms includes entities and activities. 
Thus, this conception postulates a dualistic ontology of mechanisms, where entities are substantial components, and activities are reified compon" https://en.wikipedia.org/wiki/Shell%20theorem,"In classical mechanics, the shell theorem gives gravitational simplifications that can be applied to objects inside or outside a spherically symmetrical body. This theorem has particular application to astronomy. Isaac Newton proved the shell theorem and stated that: A spherically symmetric body affects external objects gravitationally as though all of its mass were concentrated at a point at its center. If the body is a spherically symmetric shell (i.e., a hollow ball), no net gravitational force is exerted by the shell on any object inside, regardless of the object's location within the shell. A corollary is that inside a solid sphere of constant density, the gravitational force within the object varies linearly with distance from the center, becoming zero by symmetry at the center of mass. This can be seen as follows: take a point within such a sphere, at a distance from the center of the sphere. Then you can ignore all of the shells of greater radius, according to the shell theorem (2). But the point can be considered to be external to the remaining sphere of radius r, and according to (1) all of the mass of this sphere can be considered to be concentrated at its centre. The remaining mass is proportional to (because it is based on volume). The gravitational force exerted on a body at radius r will be proportional to (the inverse square law), so the overall gravitational effect is proportional to so is linear in These results were important to Newton's analysis of planetary motion; they are not immediately obvious, but they can be proven with calculus. (Gauss's law for gravity offers an alternative way to state the theorem.) In addition to gravity, the shell theorem can also be used to describe the electric field generated by a static spherically symmetric charge density, or similarly for any other phenomenon that follows an inverse square law. The derivations below focus on gravity, but the results can easily be generalized to the electrostatic forc" https://en.wikipedia.org/wiki/Mathematical%20methods%20in%20electronics,"Mathematical methods are integral to the study of electronics. Mathematics in electronics Electronics engineering careers usually include courses in calculus (single and multivariable), complex analysis, differential equations (both ordinary and partial), linear algebra and probability. Fourier analysis and Z-transforms are also subjects which are usually included in electrical engineering programs. Laplace transform can simplify computing RLC circuit behaviour. Basic applications A number of electrical laws apply to all electrical networks. These include Faraday's law of induction: Any change in the magnetic environment of a coil of wire will cause a voltage (emf) to be ""induced"" in the coil. Gauss's Law: The total of the electric flux out of a closed surface is equal to the charge enclosed divided by the permittivity. Kirchhoff's current law: the sum of all currents entering a node is equal to the sum of all currents leaving the node or the sum of total current at a junction is zero Kirchhoff's voltage law: the directed sum of the electrical potential differences around a circuit must be zero. Ohm's law: the voltage across a resistor is the product of its resistance and the current flowing through it.at constant temperature. 
Norton's theorem: any two-terminal collection of voltage sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor. Thévenin's theorem: any two-terminal combination of voltage sources and resistors is electrically equivalent to a single voltage source in series with a single resistor. Millman's theorem: the voltage on the ends of branches in parallel is equal to the sum of the currents flowing in every branch divided by the total equivalent conductance. See also Analysis of resistive circuits. Circuit analysis is the study of methods to solve linear systems for an unknown variable. Circuit analysis Components There are many electronic components currently used and they all have thei" https://en.wikipedia.org/wiki/Equalization%20%28communications%29,"In telecommunication, equalization is the reversal of distortion incurred by a signal transmitted through a channel. Equalizers are used to render the frequency response—for instance of a telephone line—flat from end-to-end. When a channel has been equalized the frequency domain attributes of the signal at the input are faithfully reproduced at the output. Telephones, DSL lines and television cables use equalizers to prepare data signals for transmission. Equalizers are critical to the successful operation of electronic systems such as analog broadcast television. In this application the actual waveform of the transmitted signal must be preserved, not just its frequency content. Equalizing filters must cancel out any group delay and phase delay between different frequency components. Analog telecommunications Audio lines Early telephone systems used equalization to correct for the reduced level of high frequencies in long cables, typically using Zobel networks. These kinds of equalizers can also be used to produce a circuit with a wider bandwidth than the standard telephone band of 300 Hz to 3.4 kHz. This was particularly useful for broadcasters who needed ""music"" quality, not ""telephone"" quality on landlines carrying program material. It is necessary to remove or cancel any loading coils in the line before equalization can be successful. Equalization was also applied to correct the response of the transducers, for example, a particular microphone might be more sensitive to low frequency sounds than to high frequency sounds, so an equalizer would be used to increase the volume of the higher frequencies (boost), and reduce the volume of the low frequency sounds (cut). Television lines A similar approach to audio was taken with television landlines with two important additional complications. The first of these is that the television signal is a wide bandwidth covering many more octaves than an audio signal. A television equalizer consequently typically req" https://en.wikipedia.org/wiki/Memory%20management,"Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free it for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time. Several methods have been devised that increase the effectiveness of memory management. 
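As a concrete illustration of the allocate-then-free cycle just described, the following minimal sketch uses the standard C library heap functions malloc and free; it is an illustration of the general idea, not of any particular operating system's allocator.

/* Minimal sketch: request a block of memory at run time and return it
   for reuse when it is no longer needed. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t n = 1000;
    int *values = malloc(n * sizeof *values);   /* allocation request */
    if (values == NULL) {                       /* the request may fail */
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < n; i++)
        values[i] = (int)i;
    printf("last value: %d\n", values[n - 1]);
    free(values);                               /* mark the block free for reuse */
    return 0;
}

Every block obtained this way must eventually be released exactly once; forgetting to do so leaks memory, and releasing twice corrupts the allocator's bookkeeping.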
Virtual memory systems separate the memory addresses used by a process from actual physical addresses, allowing separation of processes and increasing the size of the virtual address space beyond the available amount of RAM using paging or swapping to secondary storage. The quality of the virtual memory manager can have an extensive effect on overall system performance. In some operating systems, e.g. OS/360 and successors, memory is managed by the operating system. In other operating systems, e.g. Unix-like operating systems, memory is managed at the application level. Memory management within an address space is generally categorized as either manual memory management or automatic memory management. Manual memory management The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are ""free"" (unused) and thus available for future allocations. In the C language, the function which allocates memory from the heap is called and the function which takes previously allocated memory and marks it as ""free"" (to be used by future allocations) is called . Several issues complicate the implementation, such as external fragmentation, which arises when there are many small gaps between allocated memory blocks, which invalidates their use for an allocation request. The allocator's metadat" https://en.wikipedia.org/wiki/Synchronous%20detector,"In electronics, a synchronous detector is a device that recovers information from a modulated signal by mixing the signal with a replica of the unmodulated carrier. This can be locally generated at the receiver using a phase-locked loop or other techniques. Synchronous detection preserves any phase information originally present in the modulating signal. With the exception of SECAM receivers, synchronous detection is a necessary component of any analog color television receiver, where it allows recovery of the phase information that conveys hue. Synchronous detectors are also found in some shortwave radio receivers used for audio signals, where they provide better performance on signals that may be affected by fading. See also Lock-in amplifier" https://en.wikipedia.org/wiki/Limiting%20case%20%28mathematics%29,"In mathematics, a limiting case of a mathematical object is a special case that arises when one or more components of the object take on their most extreme possible values. For example: In statistics, the limiting case of the binomial distribution is the Poisson distribution. As the number of events tends to infinity in the binomial distribution, the random variable changes from the binomial to the Poisson distribution. A circle is a limiting case of various other figures, including the Cartesian oval, the ellipse, the superellipse, and the Cassini oval. Each type of figure is a circle for certain values of the defining parameters, and the generic figure appears more like a circle as the limiting values are approached. Archimedes calculated an approximate value of π by treating the circle as the limiting case of a regular polygon with 3 × 2n sides, as n gets large. In electricity and magnetism, the long wavelength limit is the limiting case when the wavelength is much larger than the system size. 
In economics, two limiting cases of a demand curve or supply curve are those in which the elasticity is zero (the totally inelastic case) or infinity (the infinitely elastic case). In finance, continuous compounding is the limiting case of compound interest in which the compounding period becomes infinitesimally small, achieved by taking the limit as the number of compounding periods per year goes to infinity. A limiting case is sometimes a degenerate case in which some qualitative properties differ from the corresponding properties of the generic case. For example: A point is a degenerate circle, namely one with radius 0. A parabola can degenerate into two distinct or coinciding parallel lines. An ellipse can degenerate into a single point or a line segment. A hyperbola can degenerate into two intersecting lines. See also Degeneracy (mathematics) Limit (mathematics)" https://en.wikipedia.org/wiki/Bendix%20Electrojector,"The Bendix Electrojector is an electronically controlled manifold injection (EFI) system developed and made by Bendix Corporation. In 1957, American Motors (AMC) offered the Electrojector as an option in some of their cars; Chrysler followed in 1958. However, it proved to be an unreliable system that was soon replaced by conventional carburetors. The Electrojector patents were then sold to German car component supplier Bosch, who developed the Electrojector into a functioning system, the Bosch D-Jetronic, introduced in 1967. Description The Electrojector is an electronically controlled multi-point injection system that has an analogue engine control unit, the so-called ""modulator"" that uses the intake manifold vacuum and the engine speed for metering the right amount of fuel. The fuel is injected intermittently, and with a constant pressure of . The injectors are spring-loaded active injectors, actuated by a modulator-controlled electromagnet. Pulse-width modulation is used to change the amount of injected fuel: since the injection pressure is constant, the fuel amount can only be changed by increasing or decreasing the injection pulse duration. The modulator receives the injection pulse from an injection pulse generator that rotates in sync with the ignition distributor. The modulator converts the injection pulse into a correct injection signal for each fuel injector primarily by using the intake manifold and crankshaft speed sensor signals. It uses analogue transistor technology (i. e. no microprocessor) to do so. The system also supports setting the correct idle speed, mixture enrichment, and coolant temperature using additional resistors in the modulator. History The Electrojector was first offered by American Motors Corporation (AMC) in 1957. The Rambler Rebel was used to promote AMC's new engine. The Electrojector-injected engine was an option and rated at . It produced peak torque 500 rpm lower than the equivalent carburetor engine The cost of the EFI " https://en.wikipedia.org/wiki/Security%20log,"A security log is used to track security-related information on a computer system. 
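A minimal sketch of how a program might record a security-related event, assuming a POSIX system where the syslog(3) interface feeds the host's security logging facility; the ident string and the message below are hypothetical, and this is not a description of any particular logging product.

/* Illustrative only: report a failed login attempt to the system log. */
#include <syslog.h>

int main(void) {
    openlog("mydaemon", LOG_PID, LOG_AUTH);   /* tag entries with the process ID */
    syslog(LOG_WARNING, "failed login attempt for user %s", "alice");
    closelog();
    return 0;
}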
Examples include: Windows Security Log Internet Connection Firewall security log According to Stefan Axelsson, ""Most UNIX installations do not run any form of security logging software, mainly because the security logging facilities are expensive in terms of disk storage, processing time, and the cost associated with analyzing the audit trail, either manually or by special software."" See also Audit trail Server log Log management and intelligence Web log analysis software Web counter Data logging Common Log Format Syslog" https://en.wikipedia.org/wiki/Mathemalchemy,"Mathemalchemy is a traveling art installation dedicated to a celebration of the intersection of art and mathematics. It is a collaborative work led by Duke University mathematician Ingrid Daubechies and fiber artist Dominique Ehrmann. The cross-disciplinary team of 24 people, who collectively built the installation during the calendar years 2020 and 2021, includes artists, mathematicians, and craftspeople who employed a wide variety of materials to illustrate, amuse, and educate the public on the wonders, mystery, and beauty of mathematics. Including the core team of 24, about 70 people contributed in some way to the realization of Mathemalchemy. Description The art installation occupies a footprint approximately , which extends up to in height (in addition, small custom-fabricated tables are arranged around the periphery to protect the more fragile elements). A map shows the 14 or so different zones or regions within the exhibit, which is filled with hundreds of detailed mathematical artifacts, some smaller than ; the entire exhibit comprises more than 1,000 parts which must be packed for shipment. Versions of some of the complex mathematical objects can be purchased through an associated ""Mathemalchemy Boutique"" website. The art installation contains puns (such as ""Pi"" in a bakery) and Easter eggs, such as a miniature model of the Antikythera mechanism hidden on the bottom of ""Knotilus Bay"". Mathematically sophisticated visitors may enjoy puzzling out and decoding the many mathematical allusions symbolized in the exhibit, while viewers of all levels are invited to enjoy the self-guided tours, detailed explanations, and videos available on the accompanying official website . A downloadable comic book was created to explore some of the themes of the exhibition, using an independent narrative set in the world of Mathemalchemy. Themes The installation features or illustrates mathematical concepts at many different levels. All of the participants regard ""recre" https://en.wikipedia.org/wiki/Action%20at%20a%20distance,"In physics, action at a distance is the concept that an object's motion can be affected by another object without being physically contact (as in mechanical contact) by the other object. That is, it is the non-local interaction of objects that are separated in space. Coulomb's law and Newton's law of universal gravitation are based on action at a distance. Historically, action at a distance was the earliest scientific model for gravity and electricity and it continues to be useful in many practical cases. In the 19th and 20th centuries, field models arose to explain these phenomena with more precision. The discovery of electrons and of special relativity lead to new action at a distance models providing alternative to field theories. Categories of action In the study of mechanics, action at a distance is one of three fundamental actions on matter that cause motion. 
The other two are direct impact (elastic or inelastic collisions) and actions in a continuous medium as in fluid mechanics or solid mechanics. Historically, physical explanations for particular phenomena have moved between these three categories over time as new models were developed. Action at a distance and actions in a continuous medium may be easily distinguished when the medium dynamics are visible, like waves in water or in an elastic solid. In the case of electricity or gravity, there is no medium required. In the nineteenth century, criteria like the effect of actions on intervening matter, the observation of a time delay, the apparent storage of energy, or even the possibility of a plausible mechanical model for action transmission were all accepted as evidence against action at a distance. Aether theories were alternative proposals to replace apparent action-at-a-distance in gravity and electromagnetism, in terms of continuous action inside an (invisible) medium called ""aether"". Roles The concept of action at a distance acts in multiple roles in physics and it can co-exist with other mode" https://en.wikipedia.org/wiki/4000-series%20integrated%20circuits,"The 4000 series is a CMOS logic family of integrated circuits (ICs) first introduced in 1968 by RCA. It was slowly migrated into the 4000B buffered series after about 1975. It had a much wider supply voltage range than any contemporary logic family (3V to 18V recommended range for ""B"" series). Almost all IC manufacturers active during this initial era fabricated models for this series. Its naming convention is still in use today. History The 4000 series was introduced as the CD4000 COS/MOS series in 1968 by RCA as a lower power and more versatile alternative to the 7400 series of transistor-transistor logic (TTL) chips. The logic functions were implemented with the newly introduced Complementary Metal–Oxide–Semiconductor (CMOS) technology. While initially marketed with ""COS/MOS"" labeling by RCA (which stood for Complementary Symmetry Metal-Oxide Semiconductor), the shorter CMOS terminology emerged as the industry preference to refer to the technology. The first chips in the series were designed by a group led by Albert Medwin. Wide adoption was initially hindered by the comparatively lower speeds of the designs compared to TTL based designs. Speed limitations were eventually overcome with newer fabrication methods (such as self aligned gates of polysilicon instead of metal). These CMOS variants performed on par with contemporary TTL. The series was extended in the late 1970s and 1980s with new models that were given 45xx and 45xxx designations, but are usually still regarded by engineers as part of the 4000 series. In the 1990s, some manufacturers (e.g. Texas Instruments) ported the 4000 series to newer HCMOS based designs to provide greater speeds. Design considerations The 4000 series facilitates simpler circuit design through relatively low power consumption, a wide range of supply voltages, and vastly increased load-driving capability (fanout) compared to TTL. This makes the series ideal for use in prototyping LSI designs. While TTL ICs are similarly modular" https://en.wikipedia.org/wiki/Design%20rule%20checking,"In electronic design automation, a design rule is a geometric constraint imposed on circuit board, semiconductor device, and integrated circuit (IC) designers to ensure their designs function properly, reliably, and can be produced with acceptable yield. 
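As a toy illustration of what enforcing such a geometric constraint involves, the sketch below checks one shape's width and its spacing to a neighbour against minimum values; the rule numbers are made up for the example, and production DRC tools operate on complete mask layouts rather than isolated rectangles.

/* Toy design-rule check: flag width and spacing violations. */
#include <stdio.h>

#define MIN_WIDTH_NM   100   /* hypothetical minimum width rule   */
#define MIN_SPACING_NM 120   /* hypothetical minimum spacing rule */

int main(void) {
    int wire_width_nm = 90;    /* width of a drawn shape            */
    int spacing_nm    = 150;   /* edge-to-edge gap to its neighbour */

    if (wire_width_nm < MIN_WIDTH_NM)
        printf("width violation: %d nm < %d nm\n", wire_width_nm, MIN_WIDTH_NM);
    if (spacing_nm < MIN_SPACING_NM)
        printf("spacing violation: %d nm < %d nm\n", spacing_nm, MIN_SPACING_NM);
    return 0;
}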
Design rules for production are developed by process engineers based on the capability of their processes to realize design intent. Electronic design automation is used extensively to ensure that designers do not violate design rules; a process called design rule checking (DRC). DRC is a major step during physical verification signoff on the design, which also involves LVS (layout versus schematic) checks, XOR checks, ERC (electrical rule check), and antenna checks. The importance of design rules and DRC is greatest for ICs, which have micro- or nano-scale geometries; for advanced processes, some fabs also insist upon the use of more restricted rules to improve yield. Design rules Design rules are a series of parameters provided by semiconductor manufacturers that enable the designer to verify the correctness of a mask set. Design rules are specific to a particular semiconductor manufacturing process. A design rule set specifies certain geometric and connectivity restrictions to ensure sufficient margins to account for variability in semiconductor manufacturing processes, so as to ensure that most of the parts work correctly. The most basic design rules are shown in the diagram on the right. The first are single layer rules. A width rule specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance between two adjacent objects. These rules will exist for each layer of semiconductor manufacturing process, with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers having larger rules (perhaps 400 nm as of 2007). A two layer rule specifies a relationship that must exist between two layers. For example, an enclosure rule might s" https://en.wikipedia.org/wiki/Temporal%20resolution,"Temporal resolution (TR) refers to the discrete resolution of a measurement with respect to time. Physics Often there is a trade-off between the temporal resolution of a measurement and its spatial resolution, due to Heisenberg's uncertainty principle. In some contexts, such as particle physics, this trade-off can be attributed to the finite speed of light and the fact that it takes a certain period of time for the photons carrying information to reach the observer. In this time, the system might have undergone changes itself. Thus, the longer the light has to travel, the lower the temporal resolution. Technology Computing In another context, there is often a tradeoff between temporal resolution and computer storage. A transducer may be able to record data every millisecond, but available storage may not allow this, and in the case of 4D PET imaging the resolution may be limited to several minutes. Electronic displays In some applications, temporal resolution may instead be equated to the sampling period, or its inverse, the refresh rate, or update frequency in Hertz, of a TV, for example. The temporal resolution is distinct from temporal uncertainty. This would be analogous to conflating image resolution with optical resolution. One is discrete, the other, continuous. The temporal resolution is a resolution somewhat the 'time' dual to the 'space' resolution of an image. In a similar way, the sample rate is equivalent to the pixel pitch on a display screen, whereas the optical resolution of a display screen is equivalent to temporal uncertainty. Note that both this form of image space and time resolutions are orthogonal to measurement resolution, even though space and time are also orthogonal to each other. 
Both an image or an oscilloscope capture can have a signal-to-noise ratio, since both also have measurement resolution. Oscilloscopy An oscilloscope is the temporal equivalent of a microscope, and it is limited by temporal uncertainty the same way a m" https://en.wikipedia.org/wiki/Crypsis,"In ecology, crypsis is the ability of an animal or a plant to avoid observation or detection by other animals. It may be a predation strategy or an antipredator adaptation. Methods include camouflage, nocturnality, subterranean lifestyle and mimicry. Crypsis can involve visual, olfactory (with pheromones) or auditory concealment. When it is visual, the term cryptic coloration, effectively a synonym for animal camouflage, is sometimes used, but many different methods of camouflage are employed by animals or plants. Overview There is a strong evolutionary pressure for animals to blend into their environment or conceal their shape, for prey animals to avoid predators and for predators to be able to avoid detection by prey. Exceptions include large herbivores without natural enemies, brilliantly colored birds that rely on flight to escape predators, and venomous or otherwise powerfully armed animals with warning coloration. Cryptic animals include the tawny frogmouth (feather patterning resembles bark), the tuatara (hides in burrows all day; nocturnal), some jellyfish (transparent), the leafy sea dragon, and the flounder (covers itself in sediment). Methods Methods of crypsis include (visual) camouflage, nocturnality, and subterranean lifestyle. Camouflage can be achieved by a wide variety of methods, from disruptive coloration to transparency and some forms of mimicry, even in habitats like the open sea where there is no background. As a strategy, crypsis is used by predators against prey and by prey against predators. Crypsis also applies to eggs and pheromone production. Crypsis can in principle involve visual, olfactory, or auditory camouflage. Visual Many animals have evolved so that they visually resemble their surroundings by using any of the many methods of natural camouflage that may match the color and texture of the surroundings (cryptic coloration) and/or break up the visual outline of the animal itself (disruptive coloration). Such animals, like the " https://en.wikipedia.org/wiki/Food%20pairing,"Food pairing (or flavor pairing or food combination) is a method of identifying which foods go well together from a flavor standpoint, often based on individual tastes, popularity, availability of ingredients, and traditional cultural practices. From a food science perspective, foods may be said to combine well with one another when they share key flavor components. One such process was trademarked as ""Foodpairing"" by the company of the same name. 
Examples The two pairings that are globally most commonly (possibly because them being hyperpalatable) used, cited as a response in ""your favorite food"" or ""food that you can eat every day"" surveys and seen in recipe videos, websites or books are: Meat, bread, cheese, tomatoes, onions, and at least one type of green vegetables (including in burgers, sandwiches, shawarmas, tacos and pizzas) Chicken and rice (or more generally: meat and rice or pasta, in addition some combination of tomatoes, onions, and at least one type of green vegetables) Other commonly encountered food pairings include: Bacon and cabbage Duck à l'orange Ham and eggs Hawaiian pizza Liver and onions Peanut butter and jelly Pork chops and applesauce Food science Experimenting with salty ingredients and chocolate around the year 2000, Heston Blumenthal, the chef of The Fat Duck, concluded that caviar and white chocolate were a perfect match. To find out why, he contacted a flavor scientist at Firmenich, the flavor manufacturer. By comparing the flavor analysis of both foods, they found that caviar and white chocolate had major flavor components in common. At that time, they formed the hypothesis that different foods would combine well together when they shared major flavor components, and the trademarked concept of ""Foodpairing"" was created. This Foodpairing method is asserted to aid recipe design, and it has provided new ideas for food combinations which are asserted to be theoretically sound on the basis of their flavor. It provides possib" https://en.wikipedia.org/wiki/Natural%20competence,"In microbiology, genetics, cell biology, and molecular biology, competence is the ability of a cell to alter its genetics by taking up extracellular (""naked"") DNA from its environment in the process called transformation. Competence may be differentiated between natural competence, a genetically specified ability of bacteria which is thought to occur under natural conditions as well as in the laboratory, and induced or artificial competence, which arises when cells in laboratory cultures are treated to make them transiently permeable to DNA. Competence allows for rapid adaptation and DNA repair of the cell. This article primarily deals with natural competence in bacteria, although information about artificial competence is also provided. History Natural competence was discovered by Frederick Griffith in 1928, when he showed that a preparation of killed cells of a pathogenic bacterium contained something that could transform related non-pathogenic cells into the pathogenic type. In 1944 Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrated that this 'transforming factor' was pure DNA . This was the first compelling evidence that DNA carries the genetic information of the cell. Since then, natural competence has been studied in a number of different bacteria, particularly Bacillus subtilis, Streptococcus pneumoniae (Griffith's ""pneumococcus""), Neisseria gonorrhoeae, Haemophilus influenzae and members of the Acinetobacter genus. Areas of active research include the mechanisms of DNA transport, the regulation of competence in different bacteria, and the evolutionary function of competence. Mechanisms of DNA uptake In the laboratory, DNA is provided by the researcher, often as a genetically engineered fragment or plasmid. During uptake, DNA is transported across the cell membrane(s), and the cell wall if one is present. 
Once the DNA is inside the cell it may be degraded to nucleotides, which are reused for DNA replication and other metabolic functions. " https://en.wikipedia.org/wiki/Constant%20fraction%20discriminator,"A constant fraction discriminator (CFD) is an electronic signal processing device, designed to mimic the mathematical operation of finding a maximum of a pulse by finding the zero of its slope. Some signals do not have a sharp maximum, but short rise times . Typical input signals for CFDs are pulses from plastic scintillation counters, such as those used for lifetime measurement in positron annihilation experiments. The scintillator pulses have identical rise times that are much longer than the desired temporal resolution. This forbids simple threshold triggering, which causes a dependence of the trigger time on the signal's peak height, an effect called time walk (see diagram). Identical rise times and peak shapes permit triggering not on a fixed threshold but on a constant fraction of the total peak height, yielding trigger times independent from peak heights. From another point of view A time-to-digital converter assigns timestamps. The time-to-digital converter needs fast rising edges with normed height. The plastic scintillation counter delivers fast rising edge with varying heights. Theoretically, the signal could be split into two parts. One part would be delayed and the other low pass filtered, inverted and then used in a variable-gain amplifier to amplify the original signal to the desired height. Practically, it is difficult to achieve a high dynamic range for the variable-gain amplifier, and analog computers have problems with the inverse value. Principle of operation The incoming signal is split into three components. One component is delayed by a time , with it may be multiplied by a small factor to put emphasis on the leading edge of the pulse and connected to the noninverting input of a comparator. One component is connected to the inverting input of this comparator. One component is connected to the noninverting input of another comparator. A threshold value is connected to the inverting input of the other comparator. The output of both compara" https://en.wikipedia.org/wiki/Staling,"Staling, or ""going stale"", is a chemical and physical process in bread and similar foods that reduces their palatability. Stale bread is dry and hard, making it suitable for different culinary uses than fresh bread. Countermeasures and destaling techniques may reduce staling. Mechanism and effects Staling is a chemical and physical process in bread and similar foods that reduces their palatability. Staling is not simply a drying-out process due to evaporation. One important mechanism is the migration of moisture from the starch granules into the interstitial spaces, degelatinizing the starch; stale bread's leathery, hard texture results from the starch amylose and amylopectin molecules realigning themselves causing recrystallisation. Stale bread Stale bread is dry and hard. Bread will stale even in a moist environment, and stales most rapidly at temperatures just above freezing. While bread that has been frozen when fresh may be thawed acceptably, bread stored in a refrigerator will have increased staling rates. Culinary uses Many classic dishes rely upon otherwise unpalatable stale bread. Examples include bread sauce, bread dumplings, and flummadiddle, an early American savoury pudding. 
There are also many types of bread soups such as wodzionka (in Silesian cuisine) and ribollita (in Italian cuisine). An often-sweet dish is bread pudding. Cubes of stale bread can be dipped in cheese fondue, or seasoned and baked in the oven to become croutons, suitable for scattering in salads or on top of soups. Slices of stale bread soaked in an egg and milk mixture and then fried turn into French toast (known in French as pain perdu - lost bread). In Spanish and Portuguese cuisines migas is a breakfast dish using stale bread, and in Tunisian cuisine leblebi is a soup of chickpeas and stale bread. Stale bread or breadcrumbs made from it can be used to ""stretch"" meat in dishes such as haslet (a type of meatloaf in British cuisine, or meatloaf itself) and garbure (a stew " https://en.wikipedia.org/wiki/Nucleation,"In thermodynamics, nucleation is the first step in the formation of either a new thermodynamic phase or structure via self-assembly or self-organization within a substance or mixture. Nucleation is typically defined to be the process that determines how long an observer has to wait before the new phase or self-organized structure appears. For example, if a volume of water is cooled (at atmospheric pressure) below 0°C, it will tend to freeze into ice, but volumes of water cooled only a few degrees below 0°C often stay completely free of ice for long periods (supercooling). At these conditions, nucleation of ice is either slow or does not occur at all. However, at lower temperatures nucleation is fast, and ice crystals appear after little or no delay. Nucleation is a common mechanism which generates first-order phase transitions, and it is the start of the process of forming a new thermodynamic phase. In contrast, new phases at continuous phase transitions start to form immediately. Nucleation is often very sensitive to impurities in the system. These impurities may be too small to be seen by the naked eye, but still can control the rate of nucleation. Because of this, it is often important to distinguish between heterogeneous nucleation and homogeneous nucleation. Heterogeneous nucleation occurs at nucleation sites on surfaces in the system. Homogeneous nucleation occurs away from a surface. Characteristics Nucleation is usually a stochastic (random) process, so even in two identical systems nucleation will occur at different times. A common mechanism is illustrated in the animation to the right. This shows nucleation of a new phase (shown in red) in an existing phase (white). In the existing phase microscopic fluctuations of the red phase appear and decay continuously, until an unusually large fluctuation of the new red phase is so large it is more favourable for it to grow than to shrink back to nothing. This nucleus of the red phase then grows and converts th" https://en.wikipedia.org/wiki/Tropical%20vegetation,"Tropical vegetation is any vegetation in tropical latitudes. Plant life that occurs in climates that are warm year-round is in general more biologically diverse that in other latitudes. Some tropical areas may receive abundant rain the whole year round, but others have long dry seasons which last several months and may vary in length and intensity with geographic location. These seasonal droughts have great impact on the vegetation, such as in the Madagascar spiny forests. Rainforest vegetation is categorized by five layers. The top layer being the upper tree layer. Here you will find the largest and widest trees in all the forest. 
These trees tend to have very large canopy's so they can be fully exposed to sunlight. A layer below that is the middle tree layer. Here you will find more compact trees and vegetation. These trees tend to be more skinny as they are trying to gain any sunlight they can. The third layer is the lower tree area. These trees tend to be around five to ten meters high and tightly compacted. The trees found in the third layer are young trees trying to grow into the larger canopy trees. The fourth layer is the shrub layer beneath the tree canopy. This layer is mainly populated by sapling trees, shrubs, and seedlings. The fifth and final layer is the herb layer which is the forest floor. The forest floor is mainly bare except for various plants, mosses, and ferns. The forest floor is much more dense than above because of little sunlight and air movement. Plant species native to the tropics found in tropical ecosystems are known as tropical plants. Some examples of tropical ecosystem are the Guinean Forests of West Africa, the Madagascar dry deciduous forests and the broadleaf forests of the Thai highlands and the El Yunque National Forest in the Puerto Rico. Description The term ""tropical vegetation"" is frequently used in the sense of lush and luxuriant, but not all the vegetation of the areas of the Earth in tropical climates can be de" https://en.wikipedia.org/wiki/Frequency%20scaling,"In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004. The effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime: where instructions per program is the total instructions being executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency. An increase in frequency thus decreases runtime. However, power consumption in a chip is given by the equation where P is power consumption, C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. Moore's Law was still in effect when frequency scaling ended. Despite power issues, transistor densities were still doubling every 18 to 24 months. With the end of frequency scaling, new transistors (which are no longer needed to facilitate frequency scaling) are used to add extra hardware, such as additional cores, to facilitate parallel computing - a technique that is being referred to as parallel scaling. The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors. 
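The two equations referred to in the frequency-scaling passage above were lost in extraction; restated from the variable descriptions given there, they take the standard forms:

\[
  \text{runtime} \;=\; \frac{\text{instructions}}{\text{program}} \times \frac{\text{cycles}}{\text{instruction}} \times \frac{\text{time}}{\text{cycle}},
  \qquad \frac{\text{time}}{\text{cycle}} = \frac{1}{F},
\]
\[
  P \;=\; C\,V^{2}F,
\]

where $P$ is the power consumption, $C$ the capacitance switched per clock cycle, $V$ the supply voltage, and $F$ the processor frequency; raising $F$ shortens runtime but increases power, which is the trade-off the passage describes.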
See also Dynamic frequency scaling Overclocking Underclocking Voltage scaling" https://en.wikipedia.org/wiki/Asymptotic%20gain%20model,"The asymptotic gain model (also known as the Rosenstark method) is a representation of the gain of negative feedback amplifiers given by the asymptotic gain relation: where is the return ratio with the input source disabled (equal to the negative of the loop gain in the case of a single-loop system composed of unilateral blocks), G∞ is the asymptotic gain and G0 is the direct transmission term. This form for the gain can provide intuitive insight into the circuit and often is easier to derive than a direct attack on the gain. Figure 1 shows a block diagram that leads to the asymptotic gain expression. The asymptotic gain relation also can be expressed as a signal flow graph. See Figure 2. The asymptotic gain model is a special case of the extra element theorem. As follows directly from limiting cases of the gain expression, the asymptotic gain G∞ is simply the gain of the system when the return ratio approaches infinity: while the direct transmission term G0 is the gain of the system when the return ratio is zero: Advantages This model is useful because it completely characterizes feedback amplifiers, including loading effects and the bilateral properties of amplifiers and feedback networks. Often feedback amplifiers are designed such that the return ratio T is much greater than unity. In this case, and assuming the direct transmission term G0 is small (as it often is), the gain G of the system is approximately equal to the asymptotic gain G∞. The asymptotic gain is (usually) only a function of passive elements in a circuit, and can often be found by inspection. The feedback topology (series-series, series-shunt, etc.) need not be identified beforehand as the analysis is the same in all cases. Implementation Direct application of the model involves these steps: Select a dependent source in the circuit. Find the return ratio for that source. Find the gain G∞ directly from the circuit by replacing the circuit with one corresponding to T = ∞. Find the ga" https://en.wikipedia.org/wiki/Cancer%20selection,"Cancer selection can be viewed through the lens of natural selection. The animal host's body is the environment which applies the selective pressures upon cancer cells. The most fit cancer cells will have traits that will allow them to out compete other cancer cells which they are related to, but are genetically different from. This genetic diversity of cells within a tumor gives cancer an evolutionary advantage over the host's ability to inhibit and destroy tumors. Therefore, other selective pressures such as clinical treatments and pharmaceutical treatments are needed to help destroy the large amount of genetically diverse cancerous cells within a tumor. It is because of the high genetic diversity between cancer cells within a tumor that makes cancer a formidable foe for the survival of animal hosts. It has also been proposed that cancer selection is a selective force that has driven the evolution of animals. Therefore, cancer and animals have been paired as competitors in co-evolution throughout time. Natural selection Evolution, which is driven by natural selection, is the cornerstone for nearly all branches of biology including cancer biology. In 1859, Charles Darwin's book On the Origin of Species was published, in which Darwin proposed his theory of evolution by means of natural selection. 
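The asymptotic gain relation referred to in the asymptotic-gain-model record above is conventionally written G = G∞·T/(1 + T) + G0/(1 + T). The sketch below evaluates it and its two limiting cases; the numeric values of T, G∞ and G0 are illustrative assumptions only:

```python
def feedback_gain(T: float, G_inf: float, G0: float) -> float:
    """Asymptotic gain relation: G = G_inf * T/(1+T) + G0 * 1/(1+T)."""
    return G_inf * T / (1.0 + T) + G0 / (1.0 + T)

G_inf, G0 = 10.0, 0.05          # illustrative values
for T in (0.0, 10.0, 1e6):      # return ratio: zero, moderate, effectively infinite
    print(f"T = {T:g}: G = {feedback_gain(T, G_inf, G0):.4f}")
# T -> 0 recovers the direct transmission term G0;
# T -> infinity approaches the asymptotic gain G_inf, as the record states.
```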
Natural selection is the force that drives changes in the phenotypes observed in populations over time, and is therefore responsible for the diversity amongst all living things. It is through the pressures applied by natural selection upon individuals that leads to evolutionary change over time. Natural selection is simply the selective pressures acting upon individuals within a population due to changes in their environment which picks the traits that are best fit for the selective change. Selection and cancer These same observations that Darwin proposed for the diversity in phenotypes amongst all living things can also be applied to cancer biology to explai" https://en.wikipedia.org/wiki/Food%20technology,"Food technology is a branch of food science that addresses the production, preservation, quality control and research and development of food products. Early scientific research into food technology concentrated on food preservation. Nicolas Appert's development in 1810 of the canning process was a decisive event. The process wasn't called canning then and Appert did not really know the principle on which his process worked, but canning has had a major impact on food preservation techniques. Louis Pasteur's research on the spoilage of wine and his description of how to avoid spoilage in 1864, was an early attempt to apply scientific knowledge to food handling. Besides research into wine spoilage, Pasteur researched the production of alcohol, vinegar, wines and beer, and the souring of milk. He developed pasteurization – the process of heating milk and milk products to destroy food spoilage and disease-producing organisms. In his research into food technology, Pasteur became the pioneer into bacteriology and of modern preventive medicine. Developments Developments in food technology have contributed greatly to the food supply and have changed our world. Some of these developments are: Instantized Milk Powder – Instant milk powder has become the basis for a variety of new products that are rehydratable. This process increases the surface area of the powdered product by partially rehydrating spray-dried milk powder. Freeze-drying – The first application of freeze drying was most likely in the pharmaceutical industry; however, a successful large-scale industrial application of the process was the development of continuous freeze drying of coffee. High-Temperature Short Time Processing – These processes, for the most part, are characterized by rapid heating and cooling, holding for a short time at a relatively high temperature and filling aseptically into sterile containers. Decaffeination of Coffee and Tea – Decaffeinated coffee and tea was first developed on " https://en.wikipedia.org/wiki/Bitmain,"Bitmain Technologies Ltd., is a privately owned company headquartered in Beijing, China, that designs application-specific integrated circuit (ASIC) chips for bitcoin mining. History It was founded by Micree Zhan and Jihan Wu in 2013. Prior to founding Bitmain, Zhan was running DivaIP, a startup that allowed users to stream television to a computer screen via a set-top box, and Wu was a financial analyst and private equity fund manager. By 2018 it had become the world's largest designer of application-specific integrated circuit (ASIC) chips for bitcoin mining. The company also operates BTC.com and Antpool, historically two of the largest mining pools for bitcoin. 
In an effort to boost Bitcoin Cash (BCH) prices, Antpool ""burned"" 12% of the BCH they mined by sending them to irrecoverable addresses. Bitmain was reportedly profitable in early 2018, with a net profit of $742.7 million in the first half of 2018, and negative operating cash flow. TechCrunch reported that unsold inventory ballooned to one billion dollars in the second quarter of 2018. Bitmain's first product was the Antminer S1, an ASIC Bitcoin miner performing 180 gigahashes per second (GH/s) while using 80–200 watts of power. Bitmain as of 2018 had 11 mining farms operating in China. Bitmain was involved in the 2018 Bitcoin Cash split, siding with Bitcoin Cash ABC alongside Roger Ver. In December 2018 the company laid off about half of its 3000 staff. The company has since closed its offices in Israel and the Netherlands, while significantly downsizing its Texas mining operation. In February 2019, it was reported that Bitmain had lost ""about $500 million"" in the third quarter of 2018. Bitmain issued a statement saying ""the rumors are not true and we will make announcements in due course."" In June 2021, Bitmain suspended spot delivery of machines globally, aiming to support local prices following Beijing's crackdown. Bitmain's attempts at initial public offering In June 2018, Wu told Bloomberg that Bitmain was conside" https://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin%20theorem,"In applied mathematics, the Wiener–Khinchin theorem or Wiener–Khintchine theorem, also known as the Wiener–Khinchin–Einstein theorem or the Khinchin–Kolmogorov theorem, states that the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectral density of that process. History Norbert Wiener proved this theorem for the case of a deterministic function in 1930; Aleksandr Khinchin later formulated an analogous result for stationary stochastic processes and published that probabilistic analogue in 1934. Albert Einstein explained, without proofs, the idea in a brief two-page memo in 1914. The case of a continuous-time process For continuous time, the Wiener–Khinchin theorem says that if x(t) is a wide-sense-stationary random process whose autocorrelation function (sometimes called autocovariance) r(τ) = E[x(t) x*(t − τ)], defined in terms of statistical expected value, exists and is finite at every lag τ, then there exists a monotone function F(f) in the frequency domain −∞ < f < ∞, or equivalently a non-negative Radon measure μ on the frequency domain, such that r(τ) = ∫ e^(2πiτf) dF(f), where the integral is a Riemann–Stieltjes integral. The asterisk denotes complex conjugate, and can be omitted if the random process is real-valued. This is a kind of spectral decomposition of the auto-correlation function. F is called the power spectral distribution function and is a statistical distribution function. It is sometimes called the integrated spectrum. The Fourier transform of x(t) does not exist in general, because stochastic random functions are not generally either square-integrable or absolutely integrable. Nor is r(τ) assumed to be absolutely integrable, so it need not have a Fourier transform either. However, if the measure μ is absolutely continuous, for example, if the process is purely indeterministic, then F is differentiable almost everywhere and we can write F(f) = ∫ from −∞ to f of S(f′) df′. In this case, one can determine S(f), the power spectral density of x(t), by taking the averaged derivative of F. 
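A discrete-time illustration of the Wiener–Khinchin relationship described above: for a finite sample, the DFT of the circular (biased) autocorrelation estimate equals the periodogram, the usual estimator of the power spectral density. This is a sketch under those standard discrete-time assumptions, not material from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
# A wide-sense-stationary test signal: white noise through a short moving average.
x = np.convolve(rng.standard_normal(N + 3), np.ones(4) / 4, mode="valid")

# Periodogram: |X(f)|^2 / N.
X = np.fft.fft(x)
periodogram = (np.abs(X) ** 2) / len(x)

# Circular (biased) autocorrelation estimate r[k] = (1/N) * sum_n x[n] x[(n-k) mod N].
r = np.fft.ifft(np.abs(X) ** 2).real / len(x)

# Wiener-Khinchin, discrete form: the DFT of the autocorrelation is the PSD estimate.
psd_from_r = np.fft.fft(r).real

print(np.allclose(psd_from_r, periodogram))   # True
```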
" https://en.wikipedia.org/wiki/Refraction%20networking,"Refraction networking, also known as decoy routing, is a research anti-censorship approach that would allow users to circumvent a censor without using any individual proxy servers. Instead, it implements proxy functionality at the core of partner networks, such as those of Internet service providers, outside the censored country. These networks would discreetly provide censorship circumvention for ""any connection that passes through their networks."" This prevents censors from selectively blocking proxy servers and makes censorship more expensive, in a strategy similar to collateral freedom. The approach was independently invented by teams at the University of Michigan, the University of Illinois, and Raytheon BBN Technologies. There are five existing protocols: Telex, TapDance, Cirripede, Curveball, and Rebound. These teams are now working together to develop and deploy refraction networking with support from the U.S. Department of State. See also Domain fronting" https://en.wikipedia.org/wiki/NinKi%3A%20Urgency%20of%20Proximate%20Drawing%20Photograph,"The NinKi: Urgency of Proximate Drawing Photograph (NinKi:UoPDP) was initiated by Bangladeshi visual artist Firoz Mahmud ( ফিরোজ মাহমুদ, フィロズ・マハムド ). This is a drawing photograph project to rhetorically rescue popular icons with geometric structure drawings or make photo image of the people tactically static. His pigeonhole or kind of compartmental examples of doodling were engaged on found images in various printed media and also were found in his sketchbook, books, notebooks and often in his borrowed books. The word 'Ninki' (人気) is a Japanese word which means be Popular or popularity. The Ninki: UoPDP art Project of drawing on photographs consist of numerous archetypal images of popular celebrities in vague appearance. Their career, character, fame, obscurity, activities and character are insurgent and idiosyncratic. Artist Firoz has started on any image and then specifically on Bengal tiger and more significantly on Japanese Sumo Wrestler as artist based in Japan and fascinated by sports, media and interested on humorous aspect of entertainment industries. About The `Urgency of Proximate Drawing Photograph` (NinKi:UoPDP) is Firoz Mahmud`s one of art projects, started as anonymously. Gradually with the requests of curators and many of his friends, he started to exhibit in public spaces and major art venues. It was initially created for changing the meaning of visual images from the original photo images which Firoz took, collected or found to experiment that how general people react seeing each one's popular icons. History From the inception when Firoz Mahmud exposed these drawing photographs, he focused anonymously without using his name at billboards, undergrounds, signage board or in other exhibition venues in Japan. He created this on-going art project in Tokyo since 2008 as his leisure time drawing doodle on newspapers, magazines, and found images. NinKi: Urgency of Proximate Drawing was first exhibited at the 9th Sharjah Art Biennial in 2009 in Sharjah, UA" https://en.wikipedia.org/wiki/Fluorosilicate%20glass,"Fluorosilicate glass (FSG) is a glass material composed primarily of fluorine, silicon and oxygen. It has a number of uses in industry and manufacturing, especially in semiconductor fabrication where it forms an insulating dielectric. The related fluorosilicate glass-ceramics have good mechanical and chemical properties. 
Semiconductor fabrication FSG has a small relative dielectric constant (low-κ dielectric) and is used in between metal copper interconnect layers during silicon integrated circuit fabrication process. It is widely used by semiconductor fabrication plants on geometries under 0.25 microns (μ). FSG is effectively a fluorine-containing silicon dioxide (κ=3.5, while κ of undoped silicon dioxide is 3.9). FSG is used by IBM. Intel started using Cu metal layers and FSG on its 1.2 GHz Pentium processor at 130 nm complementary metal–oxide–semiconductor (CMOS). Taiwan Semiconductor Manufacturing Company (TSMC) combined FSG and copper in the Altera APEX. Fluorosilicate glass-ceramics Fluorosilicate glass-ceramics are crystalline or semi-crystalline solids formed by careful cooling of molten fluorosilicate glass. They have good mechanical properties. Potassium fluororichterite based materials are composed from tiny interlocked rod-shaped amphibole crystals; they have good resistance to chemicals and can be used in microwave ovens. Richterite glass-ceramics are used for high-performance tableware. Fluorosilicate glass-ceramics with sheet structure, derived from mica, are strong and machinable. They find a number of uses and can be used in high vacuum and as dielectrics and precision ceramic components. A number of mica and mica-fluoroapatite glass-ceramics were studied as biomaterials. See also Fluoride glass Glass Silicate" https://en.wikipedia.org/wiki/Cheating%20%28biology%29,"Cheating is a term used in behavioral ecology and ethology to describe behavior whereby organisms receive a benefit at the cost of other organisms. Cheating is common in many mutualistic and altruistic relationships. A cheater is an individual who does not cooperate (or cooperates less than their fair share) but can potentially gain the benefit from others cooperating. Cheaters are also those who selfishly use common resources to maximize their individual fitness at the expense of a group. Natural selection favors cheating, but there are mechanisms to regulate it. The stress gradient hypothesis states that facilitation, cooperation or mutualism should be more common in stressful environments, while cheating, competition or parasitism are common in benign environments (i.e nutrient excess). Theoretical models Organisms communicate and cooperate to perform a wide range of behaviors. Mutualism, or mutually beneficial interactions between species, is common in ecological systems. These interactions can be thought of ""biological markets"" in which species offer partners goods that are relatively inexpensive for them to produce and receive goods that are more expensive or even impossible for them to produce. However, these systems provide opportunities for exploitation by individuals that can obtain resources while providing nothing in return. Exploiters can take on several forms: individuals outside a mutualistic relationship who obtain a commodity in a way that confers no benefit to either mutualist, individuals who receive benefits from a partner but have lost the ability to give any in return, or individuals who have the option of behaving mutualistically towards their partners but chose not to do so. Cheaters, who do not cooperate but benefit from others who do cooperate gain a competitive edge. In an evolutionary context, this competitive edge refers to a greater ability to survive or to reproduce. 
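The practical significance of the κ values quoted in the fluorosilicate-glass record above (κ ≈ 3.5 for FSG versus 3.9 for undoped silicon dioxide) is that, to first order, interconnect capacitance and hence RC delay scale linearly with κ. A back-of-the-envelope sketch; the linear-scaling assumption is the usual first-order approximation, not a statement from the article:

```python
K_SIO2 = 3.9   # undoped silicon dioxide (value from the text)
K_FSG = 3.5    # fluorosilicate glass (value from the text)

# To first order, line-to-line capacitance C is proportional to kappa,
# so RC delay and switching energy scale by the same ratio.
ratio = K_FSG / K_SIO2
print(f"capacitance / delay ratio: {ratio:.3f} "
      f"(~{(1 - ratio) * 100:.0f}% reduction relative to SiO2)")
```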
If individuals who cheat are able to gain survivorship and reprod" https://en.wikipedia.org/wiki/Highly%20accelerated%20life%20test,"A highly accelerated life test (HALT) is a stress testing methodology for enhancing product reliability in which prototypes are stressed to a much higher degree than expected from actual use in order to identify weaknesses in the design or manufacture of the product. Manufacturing and research and development organizations in the electronics, computer, medical, and military industries use HALT to improve product reliability. HALT can be effectively used multiple times over a product's life time. During product development, it can find design weakness earlier in the product lifecycle when changes are much less costly to make. By finding weaknesses and making changes early, HALT can lower product development costs and compress time to market. When HALT is used at the time a product is being introduced into the market, it can expose problems caused by new manufacturing processes. When used after a product has been introduced into the market, HALT can be used to audit product reliability caused by changes in components, manufacturing processes, suppliers, etc. Overview Highly accelerated life testing (HALT) techniques are important in uncovering many of the weak links of a new product. These discovery tests rapidly find weaknesses using accelerated stress conditions. The goal of HALT is to proactively find weaknesses and fix them, thereby increasing product reliability. Because of its accelerated nature, HALT is typically faster and less expensive than traditional testing techniques. HALT is a test technique called test-to-fail, where a product is tested until failure. HALT does not help to determine or demonstrate the reliability value or failure probability in field. Many accelerated life tests are test-to-pass, meaning they are used to demonstrate the product life or reliability. It is highly recommended to perform HALT in the initial phases of product development to uncover weak links in a product, so that there is better chance and more time to modify and imp" https://en.wikipedia.org/wiki/Facilitation%20cascade,"A facilitation cascade is a sequence of ecological interactions that occur when a species benefits a second species that in turn has a positive effect on a third species. These facilitative interactions can take the form of amelioration of environmental stress and/or provision of refuge from predation. Autogenic ecosystem engineering species, structural species, habitat-forming species, and foundation species are associated with the most commonly recognized examples of facilitation cascades, sometimes referred to as a habitat cascades. Facilitation generally is a much broader concept that includes all forms of positive interactions including pollination, seed dispersal, and co-evolved commensalism and mutualistic relationships, such as between cnidarian hosts and symbiodinium in corals, and between algae and fungi in lichens. As such, facilitation cascades are widespread through all of the earth's major biomes with consistently positive effects on the abundance and biodiversity of associated organisms. Overview Facilitation cascades occur when prevalent foundation species, or less abundant but ecologically important keystone species, are involved in a hierarchy of positive interactions and consist of a primary facilitator which positively affects one or more secondary facilitators which support a suite of beneficiary species. 
Facilitation cascades at a minimum have a primary and secondary facilitator, although tertiary, quaternary, etc. facilitators may be found in some systems. A typical example of facilitation cascades in a tropical coastal ecosystem Origin of concept and related terms The term facilitation cascade was coined by Altieri, Silliman, and Bertness during a study on New England cobblestone beaches to explain the chain of positive interactions that allow a diverse community to exist in a habitat that is otherwise characterized by substrate instability, elevated temperatures, and desiccation stress. Cordgrass is able to establish independently, and t" https://en.wikipedia.org/wiki/List%20of%20mathematical%20series,"This list of mathematical series contains formulae for finite and infinite sums. It can be used in conjunction with other tools for evaluating sums. Here, is taken to have the value denotes the fractional part of is a Bernoulli polynomial. is a Bernoulli number, and here, is an Euler number. is the Riemann zeta function. is the gamma function. is a polygamma function. is a polylogarithm. is binomial coefficient denotes exponential of Sums of powers See Faulhaber's formula. The first few values are: See zeta constants. The first few values are: (the Basel problem) Power series Low-order polylogarithms Finite sums: , (geometric series) Infinite sums, valid for (see polylogarithm): The following is a useful property to calculate low-integer-order polylogarithms recursively in closed form: Exponential function (cf. mean of Poisson distribution) (cf. second moment of Poisson distribution) where is the Touchard polynomials. Trigonometric, inverse trigonometric, hyperbolic, and inverse hyperbolic functions relationship (versine) (haversine) Modified-factorial denominators Binomial coefficients (see ) , generating function of the Catalan numbers , generating function of the Central binomial coefficients Harmonic numbers (See harmonic numbers, themselves defined , and generalized to the real numbers) Binomial coefficients (see Multiset) (see Vandermonde identity) Trigonometric functions Sums of sines and cosines arise in Fourier series. , Rational functions An infinite series of any rational function of can be reduced to a finite series of polygamma functions, by use of partial fraction decomposition, as explained here. This fact can also be applied to finite series of rational functions, allowing the result to be computed in constant time even when the series contains a large number of terms. Exponential function (see the Landsberg–Schaar relation) Numeric series These numeric series can be found by plugging in " https://en.wikipedia.org/wiki/Flex%20links,"Flex links is a network switch feature in Cisco equipment which enables redundancy and load balancing at the layer 2 level. The feature serves as an alternative to Spanning Tree Protocol or link aggregation. A pair of layer 2 interfaces, such as switch ports or port channels, has one interface configured as a backup to the other. If the primary link fails, the backup link takes over traffic forwarding. At any point of time, only one interface will be in linkup state and actively forwarding traffic. If the primary link shuts down, the standby link takes up the duty and starts forwarding traffic and becomes the primary link. When the failing link comes back up active, it goes into standby mode and does not participate in traffic forwarding and becomes the backup link. 
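Several of the families named in the list-of-mathematical-series record above (sums of powers via Faulhaber's formula, the geometric series, and the Basel problem) have compact closed forms. Since the inline formulas did not survive extraction, a few standard representatives are written out below in LaTeX:

```latex
% Standard identities corresponding to entries named in the list article.
\[
  \sum_{k=1}^{n} k = \frac{n(n+1)}{2}, \qquad
  \sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}
  \quad \text{(Faulhaber's formula, low orders)}
\]
\[
  \sum_{k=0}^{n-1} z^k = \frac{1-z^{n}}{1-z}, \qquad
  \sum_{k=0}^{\infty} z^k = \frac{1}{1-z} \quad (|z|<1)
  \quad \text{(geometric series)}
\]
\[
  \sum_{k=1}^{\infty} \frac{1}{k^2} = \zeta(2) = \frac{\pi^2}{6}
  \quad \text{(the Basel problem)}
\]
```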
This behaviour can be changed with pre-emption mode which makes the failed link the primary link when it becomes available again. Load balancing in Flex links work at VLAN level. Both the ports in the Flex link pair can be made to forward traffic simultaneously. One port in the flex links pair can be configured to forward traffic belonging to VLANs 1-50 and the other can forward traffic for VLANs 51-100. Mutually exclusive VLANs are load sharing the traffic between the Flex link pairs. If one of the ports fails, the other active link forwards all the traffic." https://en.wikipedia.org/wiki/Mouthfeel,"Mouthfeel refers to the physical sensations in the mouth caused by food or drink, making it distinct from taste. It is a fundamental sensory attribute which, along with taste and smell, determines the overall flavor of a food item. Mouthfeel is also sometimes referred to as texture. It is used in many areas related to the testing and evaluating of foodstuffs, such as wine-tasting and food rheology. It is evaluated from initial perception on the palate, to first bite, through chewing to swallowing and aftertaste. In wine-tasting, for example, mouthfeel is usually used with a modifier (big, sweet, tannic, chewy, etc.) to the general sensation of the wine in the mouth. Research indicates texture and mouthfeel can also influence satiety with the effect of viscosity most significant. Mouthfeel is often related to a product's water activity—hard or crisp products having lower water activities and soft products having intermediate to high water activities. Qualities perceived Chewiness: The sensation of sustained, elastic resistance from food while it is chewed. Cohesiveness: Degree to which the sample deforms before rupturing when biting with molars. Crunchiness: The audible grinding of a food when it is chewed. Density: Compactness of cross section of the sample after biting completely through with the molars. Dryness: Degree to which the sample feels dry in the mouth. Exquisiteness: Perceived quality of the item in question. Fracturability: Force with which the sample crumbles, cracks or shatters. Fracturability encompasses crumbliness, crispiness, crunchiness and brittleness. Graininess: Degree to which a sample contains small grainy particles. Gumminess: Energy required to disintegrate a semi-solid food to a state ready for swallowing. Hardness: Force required to deform the product to a given distance, i.e., force to compress between molars, bite through with incisors, compress between tongue and palate. Heaviness: Weight of product perceived when fir" https://en.wikipedia.org/wiki/Fleming%20Prize%20Lecture,"The Fleming Prize Lecture was started by the Microbiology Society in 1976 and named after Alexander Fleming, one of the founders of the society. It is for early career researchers, generally within 12 of being awarded their PhD, who have an outstanding independent research record making a distinct contribution to microbiology. Nominations can be made by any member of the society. Nominees do not have to be members. The award is £1,000 and the awardee is expected to give a lecture based on their research at the Microbiology Society's Annual Conference. List The following have been awarded this prize. 
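The failover, pre-emption and per-VLAN load-sharing behaviour described in the Flex links record above can be captured in a small state model. The Python sketch below is a toy model of the decision logic only; the port names, the VLAN split at 50, and the preemption flag are illustrative assumptions rather than Cisco configuration:

```python
class FlexLinkPair:
    """Toy model of a flex-link pair (names and VLAN split are illustrative).

    Port 'a' prefers VLANs 1-50, port 'b' VLANs 51-100; if one port goes
    down the other forwards everything. With preemption enabled, a
    recovered port immediately takes its preferred VLANs back; without
    it, the recovered port stays in standby for that VLAN block.
    """

    def __init__(self, preemption: bool = False):
        self.preemption = preemption
        self.up = {"a": True, "b": True}
        self.active = {"low": "a", "high": "b"}   # current forwarder per VLAN block

    def set_link(self, port: str, is_up: bool) -> None:
        self.up[port] = is_up
        for block, preferred in (("low", "a"), ("high", "b")):
            current = self.active[block]
            other = "b" if current == "a" else "a"
            if not self.up[current] and self.up[other]:
                self.active[block] = other            # failover to the surviving port
            elif self.preemption and self.up[preferred]:
                self.active[block] = preferred        # preferred port reclaims its VLANs

    def forwarder(self, vlan: int):
        port = self.active["low" if vlan <= 50 else "high"]
        return port if self.up[port] else None


pair = FlexLinkPair(preemption=True)
pair.set_link("a", False)
print(pair.forwarder(10))   # 'b' -- backup took over VLAN 10
pair.set_link("a", True)
print(pair.forwarder(10))   # 'a' -- preemption returns VLAN 10 to the primary
```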
1976 Graham Gooday Biosynthesis of the Fungal Wall – Mechanisms and Implications 1977 Peter Newell Cellular Communication During Aggregation of Dictyostelium 1978 George AM Cross Immunochemical Aspects of Antigenic Variation in Trypanosomes 1979 John Beringer The Development of Rhizobium Genetics 1980 Duncan James McGeoch Structural Analysis of Animal Virus Genomes 1981 Dave Sherratt The Maintenance and Propagation of Plasmid Genes in Bacterial Populations 1982 Brian Spratt Penicillin-binding Proteins and the Future of β-Lactam Antibiotics 1983 Ray Dixon The Genetic Complexity of Nitrogen Fixation Herpes Siplex and The Herpes Complex 1984 Paul Nurse Cell Cycle Control in Yeast 1985 Jeffrey Almond Genetic Diversity in Small RNA Viruses 1986 Douglas Kell Forces, Fluxes and Control of Microbial Metabolism 1987 Christopher Higgins Molecular Mechanisms of Membrane Transport: from Microbes to Man 1988 Gordon Dougan An Oral Route to Rational Vaccination 1989 Andrew Davison Varicella-Zoster Virus 1989 Graham J Boulnois Molecular Dissection of the Host-Microbe Interaction in Infection 1990 No award 1991 Lynne Boddy The Ecology of Wood- and Litter-rotting Basidiomycete Fungi 1992 Geoffrey L Smith Vaccinia Virus Glycoproteins and Immune Evasion 1993 Neil Gow Directional Growth and Guidance Systems of Fungal Pathogens 1994 Ian Roberts Bacterial Polysaccharides in Sickness and " https://en.wikipedia.org/wiki/Software%20diversity,"Software diversity is a research field about the comprehension and engineering of diversity in the context of software. Areas The different areas of software diversity are discussed in surveys on diversity for fault-tolerance or for security. The main areas are: design diversity, n-version programming, data diversity for fault tolerance randomization software variability Techniques Code transformations It is possible to amplify software diversity through automated transformation processes that create synthetic diversity. A ""multicompiler"" is compiler embedding a diversification engine. A multi-variant execution environment (MVEE) is responsible for selecting the variant to execute and compare the output. Fred Cohen was among the very early promoters of such an approach. He proposed a series of rewriting and code reordering transformations that aim at producing massive quantities of different versions of operating systems functions. These ideas have been developed over the years and have led to the construction of integrated obfuscation schemes to protect key functions in large software systems. Another approach to increase software diversity of protection consists in adding randomness in certain core processes, such as memory loading. Randomness implies that all versions of the same program run differently from each other, which in turn creates a diversity of program behaviors. This idea was initially proposed and experimented by Stephanie Forrest and her colleagues. Recent work on automatic software diversity explores different forms of program transformations that slightly vary the behavior of programs. The goal is to evolve one program into a population of diverse programs that all provide similar services to users, but with a different code. This diversity of code enhances the protection of users against one single attack that could crash all programs at the same time. 
Transformation operators include: code layout randomization: reorder functions" https://en.wikipedia.org/wiki/Resistor%E2%80%93transistor%20logic,"Resistor–transistor logic (RTL), sometimes also known as transistor–resistor logic (TRL), is a class of digital circuits built using resistors as the input network and bipolar junction transistors (BJTs) as switching devices. RTL is the earliest class of transistorized digital logic circuit; it was succeeded by diode–transistor logic (DTL) and transistor–transistor logic (TTL). RTL circuits were first constructed with discrete components, but in 1961 it became the first digital logic family to be produced as a monolithic integrated circuit. RTL integrated circuits were used in the Apollo Guidance Computer, whose design began in 1961 and which first flew in 1966. Implementation RTL inverter A bipolar transistor switch is the simplest RTL gate (inverter or NOT gate) implementing logical negation. It consists of a common-emitter stage with a base resistor connected between the base and the input voltage source. The role of the base resistor is to expand the very small transistor input voltage range (about 0.7 V) to the logical ""1"" level (about 3.5 V) by converting the input voltage into current. Its resistance is settled by a compromise: it is chosen low enough to saturate the transistor and high enough to obtain high input resistance. The role of the collector resistor is to convert the collector current into voltage; its resistance is chosen high enough to saturate the transistor and low enough to obtain low output resistance (high fan-out). One-transistor RTL NOR gate With two or more base resistors (R3 and R4) instead of one, the inverter becomes a two-input RTL NOR gate (see the figure on the right). The logical operation OR is performed by applying consecutively the two arithmetic operations addition and comparison (the input resistor network acts as a parallel voltage summer with equally weighted inputs and the following common-emitter transistor stage as a voltage comparator with a threshold about 0.7 V). The equivalent resistance of all the resistors " https://en.wikipedia.org/wiki/Branches%20of%20physics,"Physics is a scientific discipline that seeks to construct and experimentally test theories of the physical universe. These theories vary in their scope and can be organized into several distinct branches, which are outlined in this article. Classical mechanics Classical mechanics is a model of the physics of forces acting upon bodies; includes sub-fields to describe the behaviors of solids, gases, and fluids. It is often referred to as ""Newtonian mechanics"" after Isaac Newton and his laws of motion. It also includes the classical approach as given by Hamiltonian and Lagrange methods. It deals with the motion of particles and the general system of particles. There are many branches of classical mechanics, such as: statics, dynamics, kinematics, continuum mechanics (which includes fluid mechanics), statistical mechanics, etc. Mechanics: A branch of physics in which we study the object and properties of an object in form of a motion under the action of the force. Thermodynamics and statistical mechanics The first chapter of The Feynman Lectures on Physics is about the existence of atoms, which Feynman considered to be the most compact statement of physics, from which science could easily result even if all other knowledge was lost. 
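The one-transistor RTL NOR gate in the record above is described as equal base resistors acting as a voltage summer followed by a transistor that behaves as a roughly 0.7 V threshold comparator with an inverted output. Below is a simplified Python model of exactly that description; the logic levels and threshold come from the text, while loading, saturation margins and fan-out are ignored:

```python
V_HIGH, V_LOW = 3.5, 0.0    # nominal logic levels from the text (volts)
V_THRESHOLD = 0.7           # base-emitter threshold from the text (volts)

def rtl_nor(*inputs_high: bool) -> bool:
    """Idealized one-transistor RTL NOR gate.

    Equal base resistors average the input voltages; if the averaged
    base drive exceeds ~0.7 V the transistor turns on and pulls the
    output low, otherwise the collector resistor pulls it high.
    """
    voltages = [V_HIGH if h else V_LOW for h in inputs_high]
    base_drive = sum(voltages) / len(voltages)     # resistive summer, equal weights
    transistor_on = base_drive > V_THRESHOLD
    return not transistor_on                       # output is inverted (NOR)

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", rtl_nor(a, b))
# The output is high only when every input is low, i.e. NOR behaviour.
```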
By modeling matter as collections of hard spheres, it is possible to describe the kinetic theory of gases, upon which classical thermodynamics is based. Thermodynamics studies the effects of changes in temperature, pressure, and volume on physical systems on the macroscopic scale, and the transfer of energy as heat. Historically, thermodynamics developed out of the desire to increase the efficiency of early steam engines. The starting point for most thermodynamic considerations is the laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work. They also postulate the existence of a quantity named entropy, which can be defined for any system. In thermodynamics, interactions between la" https://en.wikipedia.org/wiki/Counting%20board,"The counting board is the precursor of the abacus, and the earliest known form of a counting device (excluding fingers and other very simple methods). Counting boards were made of stone or wood, and the counting was done on the board with beads, pebbles etc. Not many boards survive because of the perishable materials used in their construction, or the impossibility to identify the object as a counting board.The counting board was invented to facilitate and streamline numerical calculations in ancient civilizations. Its inception addressed the need for a practical tool to perform arithmetic operations efficiently. By using counters or tokens on a board with designated sections, people could easily keep track of quantities, trade, and financial transactions. This invention not only enhanced accuracy but also fueled the development of more sophisticated mathematical concepts and systems throughout history. The counting board does not include a zero as we have come to understand it today. It primarily used Roman Numerals to calculate. The system was based on a base ten or base twenty system, where the lines represented the bases of ten or twenty, and the spaces representing base fives. The oldest known counting board, the Salamis Tablet () was discovered on the Greek island of Salamis in 1899. It is thought to have been used as more of a gaming board than a calculating device. It is marble, about 150 x 75 x 4.5 cm, and is in the Epigraphical Museum in Athens. It has carved Greek letters and parallel grooves. The German mathematician Adam Ries described the use of counting boards in . See also Abacus Calculator" https://en.wikipedia.org/wiki/Fermentation%20in%20food%20processing,"In food processing, fermentation is the conversion of carbohydrates to alcohol or organic acids using microorganisms—yeasts or bacteria—under anaerobic (oxygen-free) conditions. Fermentation usually implies that the action of microorganisms is desired. The science of fermentation is known as zymology or zymurgy. The term ""fermentation"" sometimes refers specifically to the chemical conversion of sugars into ethanol, producing alcoholic drinks such as wine, beer, and cider. However, similar processes take place in the leavening of bread (CO2 produced by yeast activity), and in the preservation of sour foods with the production of lactic acid, such as in sauerkraut and yogurt. Other widely consumed fermented foods include vinegar, olives, and cheese. More localised foods prepared by fermentation may also be based on beans, grain, vegetables, fruit, honey, dairy products, and fish. History and prehistory Brewing and winemaking Natural fermentation precedes human history. Since ancient times, humans have exploited the fermentation process. 
The earliest archaeological evidence of fermentation is 13,000-year-old residues of a beer, with the consistency of gruel, found in a cave near Haifa in Israel. Another early alcoholic drink, made from fruit, rice, and honey, dates from 7000 to 6600 BC, in the Neolithic Chinese village of Jiahu, and winemaking dates from ca. 6000 BC, in Georgia, in the Caucasus area. Seven-thousand-year-old jars containing the remains of wine, now on display at the University of Pennsylvania, were excavated in the Zagros Mountains in Iran. There is strong evidence that people were fermenting alcoholic drinks in Babylon ca. 3000 BC, ancient Egypt ca. 3150 BC, pre-Hispanic Mexico ca. 2000 BC, and Sudan ca. 1500 BC. Discovery of the role of yeast The French chemist Louis Pasteur founded zymology, when in 1856 he connected yeast to fermentation. When studying the fermentation of sugar to alcohol by yeast, Pasteur concluded that the fermentation wa" https://en.wikipedia.org/wiki/Photokinesis,"Photokinesis is a change in the velocity of movement of an organism as a result of changes in light intensity. The alteration in speed is independent of the direction from which the light is shining. Photokinesis is described as positive if the velocity of travel is greater with an increase in light intensity and negative if the velocity is slower. If a group of organisms with a positive photokinetic response is swimming in a partially shaded environment, there will be fewer organisms per unit of volume in the sunlit portion than in the shaded parts. This may be beneficial for the organisms if it is unfavourable to their predators, or it may be propitious to them in their quest for prey. In photosynthetic prokaryotes, the mechanism for photokinesis appears to be an energetic process. In cyanobacteria, for example, an increase in illumination results in an increase of photophosphorylation which enables an increase in metabolic activity. However the behaviour is also found among eukaryotic microorganisms, including those like Astasia longa which are not photosynthetic, and in these, the mechanism is not fully understood. In Euglena gracilis, the rate of swimming has been shown to speed up with increased light intensity until the light reaches a certain saturation level, beyond which the swimming rate declines. The sea slug Discodoris boholiensis also displays positive photokinesis; it is nocturnal and moves slowly at night, but much faster when caught in the open during daylight hours. Moving faster in the exposed environment should reduce predation and enable it to conceal itself as soon as possible, but its brain is quite incapable of working this out. Photokinesis is common in tunicate larvae, which accumulate in areas with low light intensity just before settlement, and the behaviour is also present in juvenile fish such as sockeye salmon smolts. See also Kinesis (biology) Phototaxis Phototropism" https://en.wikipedia.org/wiki/Short-time%20Fourier%20transform,"The short-time Fourier transform (STFT), is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. 
One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot, such as commonly used in software defined radio (SDR) based spectrum displays. Full bandwidth displays covering the whole range of an SDR commonly use fast Fourier transforms (FFTs) with 2^24 points on desktop computers. Forward STFT Continuous-time STFT Simply, in the continuous-time case, the function to be transformed is multiplied by a window function which is nonzero for only a short period of time. The Fourier transform (a one-dimensional function) of the resulting signal is taken, then the window is slid along the time axis until the end resulting in a two-dimensional representation of the signal. Mathematically, this is written as: where is the window function, commonly a Hann window or Gaussian window centered around zero, and is the signal to be transformed (note the difference between the window function and the frequency ). is essentially the Fourier transform of , a complex function representing the phase and magnitude of the signal over time and frequency. Often phase unwrapping is employed along either or both the time axis, , and frequency axis, , to suppress any jump discontinuity of the phase result of the STFT. The time index is normally considered to be ""slow"" time and usually not expressed in as high resolution as time . Given that the STFT is essentially a Fourier transform times a window function, the STFT is also called windowed Fourier transform or time-dependent Fourier transform. Disc" https://en.wikipedia.org/wiki/Fabric%20computing,"Fabric computing or unified computing involves constructing a computing fabric consisting of interconnected nodes that look like a weave or a fabric when seen collectively from a distance. Usually the phrase refers to a consolidated high-performance computing system consisting of loosely coupled storage, networking and parallel processing functions linked by high bandwidth interconnects (such as 10 Gigabit Ethernet and InfiniBand) but the term has also been used to describe platforms such as the Azure Services Platform and grid computing in general (where the common theme is interconnected nodes that appear as a single logical unit). The fundamental components of fabrics are ""nodes"" (processor(s), memory, and/or peripherals) and ""links"" (functional connections between nodes). While the term ""fabric"" has also been used in association with storage area networks and with switched fabric networking, the introduction of compute resources provides a complete ""unified"" computing system. Other terms used to describe such fabrics include ""unified fabric"", ""data center fabric"" and ""unified data center fabric"". Ian Foster, director of the Computation Institute at the Argonne National Laboratory and University of Chicago suggested in 2007 that grid computing ""fabrics"" were ""poised to become the underpinning for next-generation enterprise IT architectures and be used by a much greater part of many organizations"". History While the term has been in use since the mid to late 1990s the growth of cloud computing and Cisco's evangelism of unified data center fabrics followed by unified computing (an evolutionary data center architecture whereby blade servers are integrated or unified with supporting network and storage infrastructure) starting March 2009 has renewed interest in the technology. 
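The STFT procedure described above (split the signal into short segments, window each one, and take the Fourier transform of every segment) maps directly onto a few lines of numpy. A minimal discrete-time sketch with a Hann window; the segment length, hop size and test signal are illustrative choices:

```python
import numpy as np

def stft(x: np.ndarray, frame_len: int = 256, hop: int = 128) -> np.ndarray:
    """Short-time Fourier transform: windowed FFT of successive segments.

    Returns an array of shape (num_frames, frame_len) of complex spectra;
    the squared magnitude of each row is one column of a spectrogram.
    """
    window = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.array([np.fft.fft(x[s:s + frame_len] * window) for s in starts])

# A test signal whose frequency changes over time (two concatenated tones).
fs = 8000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.concatenate([np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t)])

S = stft(x)
spectrogram = np.abs(S) ** 2
print(spectrogram.shape)   # (number of frames, frequency bins)
```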
There have been mixed reactions to Cisco's architecture, particularly from rivals who claim that these proprietary systems will lock out other vendors. Analysts claim that this ""ambitiou" https://en.wikipedia.org/wiki/Ace%20Stream,"Ace Stream is a peer-to-peer multimedia streaming protocol, built using BitTorrent technology. Ace Stream has been recognized by sources as a potential method for broadcasting and viewing bootlegged live video streams. The protocol functions as both a client and a server. When users stream a video feed using Ace Stream, they are simultaneously downloading from peers and uploading the same video to other peers. History Ace Stream began under the name TorrentStream as a pilot project to use BitTorrent technology to stream live video. In 2013 TorrentStream, was re-released under the name ACE Stream." https://en.wikipedia.org/wiki/Volyn%20biota,"The Volyn biota are fossilized microorganisms found in rock samples from miarolitic cavities of igneous rocks collected in Zhytomyr Oblast, Ukraine. It is within the historical region of Volyn, hence the name of the find. Exceptionally well-preserved, they were dated to 1.5 Ga, within the ""Boring Billion"" period of the Proterozoic geological eon. History of the discovery The samples of Volyn biota were found in samples from miarolitic pegmatites (""chamber pegmatites"") collected from the of the Ukrainian Shield. They were described as early as in 1987, but interpreted as abiogenic formations. In 2000, these formations were reinterpreted as the fossilized cyanobacteria from geyser-type deposits. Until very recently the origin of the Korosten pegmatites was not fully understood, but they were dated to 1.8-1.7 Ga. Franz et al. (2022, 2023), investigating newly recovered samples they date to 1.5 Ga, described the morphology and the internal structure of Volyn biota and reported the presence of different types of filaments, of varying diameters, shapes and branching in the studied organisms, and provided evidence of the presence of fungi-like organisms and Precambrian continental deep biosphere. Some fossils give evidence of sessility, while others of free-living lifestyle. Usually Precambrian fossils are not well preserved, but the Volyn biota had exceptional conditions for fossilization in cavities with silicon tetrafluoride-rich fluids. The cavities also preserved them from further diagenetic-metamorphic overprint. Volyn biota is an additional support of the claim that filamentous fossils dated to 2.4 Ga from the Ongeluk Formation (Griqualand West, South Africa) were also fungi-like organisms." https://en.wikipedia.org/wiki/Canonical%20map,"In mathematics, a canonical map, also called a natural map, is a map or morphism between objects that arises naturally from the definition or the construction of the objects. Often, it is a map which preserves the widest amount of structure. A choice of a canonical map sometimes depends on a convention (e.g., a sign convention). A closely related notion is a structure map or structure morphism; the map or morphism that comes with the given structure on the object. These are also sometimes called canonical maps. A canonical isomorphism is a canonical map that is also an isomorphism (i.e., invertible). In some contexts, it might be necessary to address an issue of choices of canonical maps or canonical isomorphisms; for a typical example, see prestack. For a discussion of the problem of defining a canonical map see Kevin Buzzard's talk at the 2022 Grothendieck conference. 
Examples If N is a normal subgroup of a group G, then there is a canonical surjective group homomorphism from G to the quotient group G/N, that sends an element g to the coset determined by g. If I is an ideal of a ring R, then there is a canonical surjective ring homomorphism from R onto the quotient ring R/I, that sends an element r to its coset I+r. If V is a vector space, then there is a canonical map from V to the second dual space of V, that sends a vector v to the linear functional fv defined by fv(λ) = λ(v). If is a homomorphism between commutative rings, then S can be viewed as an algebra over R. The ring homomorphism f is then called the structure map (for the algebra structure). The corresponding map on the prime spectra is also called the structure map. If E is a vector bundle over a topological space X, then the projection map from E to X is the structure map. In topology, a canonical map is a function f mapping a set X → X/R (X modulo R), where R is an equivalence relation on X, that takes each x in X to the equivalence class [x] modulo R." https://en.wikipedia.org/wiki/Biot%E2%80%93Tolstoy%E2%80%93Medwin%20diffraction%20model,"In applied mathematics, the Biot–Tolstoy–Medwin (BTM) diffraction model describes edge diffraction. Unlike the uniform theory of diffraction (UTD), BTM does not make the high frequency assumption (in which edge lengths and distances from source and receiver are much larger than the wavelength). BTM sees use in acoustic simulations. Impulse response The impulse response according to BTM is given as follows: The general expression for sound pressure is given by the convolution integral where represents the source signal, and represents the impulse response at the receiver position. The BTM gives the latter in terms of the source position in cylindrical coordinates where the -axis is considered to lie on the edge and is measured from one of the faces of the wedge. the receiver position the (outer) wedge angle and from this the wedge index the speed of sound as an integral over edge positions where the summation is over the four possible choices of the two signs, and are the distances from the point to the source and receiver respectively, and is the Dirac delta function. where See also Uniform theory of diffraction Notes" https://en.wikipedia.org/wiki/List%20of%20conversion%20factors,"This article gives a list of conversion factors for several physical quantities. A number of different units (some only of historical interest) are shown and expressed in terms of the corresponding SI unit. Conversions between units in the metric system are defined by their prefixes (for example, 1 kilogram = 1000 grams, 1 milligram = 0.001 grams) and are thus not listed in this article. Exceptions are made if the unit is commonly known by another name (for example, 1 micron = 10−6 metre). Within each table, the units are listed alphabetically, and the SI units (base or derived) are highlighted. 
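Two of the canonical maps listed in the canonical-map record above can be written out explicitly, using the same notation as the text (N a normal subgroup of G, and V a vector space with dual V*):

```latex
% Quotient (projection) map: each element goes to its coset.
\[
  \pi : G \to G/N, \qquad \pi(g) = gN
\]
% Evaluation map into the double dual: v is sent to "evaluate at v".
\[
  \iota : V \to V^{**}, \qquad \iota(v) = f_v
  \ \text{with}\ f_v(\lambda) = \lambda(v)\ \text{for all}\ \lambda \in V^{*}
\]
```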
The following quantities are considered: length, area, volume, plane angle, solid angle, mass, density, time, frequency, velocity, volumetric flow rate, acceleration, force, pressure (or mechanical stress), torque (or moment of force), energy, power (or heat flow rate), action, dynamic viscosity, kinematic viscosity, electric current, electric charge, electric dipole, electromotive force (or electric potential difference), electrical resistance, capacitance, magnetic flux, magnetic flux density, inductance, temperature, information entropy, luminous intensity, luminance, luminous flux, illuminance, radiation. Length Area Volume Plane angle Solid angle Mass Notes: See Weight for detail of mass/weight distinction and conversion. Avoirdupois is a system of mass based on a pound of 16 ounces, while Troy weight is the system of mass where 12 troy ounces equals one troy pound. The symbol is used to denote standard gravity in order to avoid confusion with the (upright) g symbol for gram. Density Time Frequency Speed or velocity A velocity consists of a speed combined with a direction; the speed part of the velocity takes units of speed. Flow (volume) Acceleration Force Pressure or mechanical stress Torque or moment of force Energy Power or heat flow rate Action Dynamic viscosity Kinematic viscosity Electric current Electric charge Electric dipole Elec" https://en.wikipedia.org/wiki/Thrifty%20phenotype,"Thrifty phenotype refers to the correlation between low birth weight of neonates and the increased risk of developing metabolic syndromes later in life, including type 2 diabetes and cardiovascular diseases. Although early life undernutrition is thought to be the key driving factor to the hypothesis, other environmental factors have been explored for their role in susceptibility, such as physical inactivity. Genes may also play a role in susceptibility of these diseases, as they may make individuals predisposed to factors that lead to increased disease risk. Historical overview The term thrifty phenotype was first coined by Charles Nicholas Hales and David Barker in a study published in 1992. In their study, the authors reviewed the literature up to and addressed five central questions regarding role of different factors in type 2 diabetes on which they based their hypothesis. These questions included the following: The role of beta cell deficiency in type 2 diabetes. The extent to which beta cell deficiency contributes to insulin intolerance. The role of major nutritional elements in fetal growth. The role of abnormal amino acid supply in growth limited neonates. The role of malnutrition in irreversibly defective beta cell growth. From the review of the existing literature, they posited that poor nutritional status in fetal and early neonatal stages could hamper the development and proper functioning of the pancreatic beta cells by impacting structural features of islet anatomy, which could consequently make the individual more susceptible to the development of type 2 diabetes in later life. However, they did not exclude other causal factors such as obesity, ageing and physical inactivity as determining factors of type 2 diabetes. In a later study, Barker et al. analyzed living patient data from Hertfordshire, UK, and found that men in their sixties having low birthweight (2.95 kg or less) were 10 times more likely to develop syndrome X (type 2 diabetes, " https://en.wikipedia.org/wiki/Warazan,"was a system of record-keeping using knotted straw at the time of the Ryūkyū Kingdom. 
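A conversion table of the kind described in the list-of-conversion-factors record above is, in code, just a mapping from each unit to its value in the corresponding SI unit. A tiny Python sketch restricted to factors that are exact by definition:

```python
# Exact conversion factors to SI units (fixed by definition).
TO_SI = {
    "inch": ("metre", 0.0254),
    "foot": ("metre", 0.3048),
    "mile": ("metre", 1609.344),
    "pound (avoirdupois)": ("kilogram", 0.45359237),
    "hour": ("second", 3600.0),
}

def to_si(value: float, unit: str):
    """Convert a value in the named unit to the corresponding SI unit."""
    si_unit, factor = TO_SI[unit]
    return value * factor, si_unit

print(to_si(26.2, "mile"))                  # marathon distance in metres
print(to_si(1.0, "pound (avoirdupois)"))    # (0.45359237, 'kilogram')
```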
In the dialect of the Sakishima Islands it was known as barasan and on Okinawa Island as warazani or warazai. Formerly used in particular in relation to the ""head tax"", it is still to be found in connection with the annual , to record the amount of miki or sacred sake dedicated. See also Kaidā glyphs Naha Tug-of-war Quipu" https://en.wikipedia.org/wiki/Omega%20constant,"The omega constant is a mathematical constant defined as the unique real number that satisfies the equation Ω e^Ω = 1. It is the value of W(1), where W is Lambert's W function. The name is derived from the alternate name for Lambert's W function, the omega function. The numerical value of Ω is given by Ω = 0.5671432904097838729999686622... . Properties Fixed point representation The defining identity can be expressed, for example, as ln(1/Ω) = Ω or −ln(Ω) = Ω, as well as e^(−Ω) = Ω. Computation One can calculate Ω iteratively, by starting with an initial guess Ω_0, and considering the sequence Ω_(n+1) = e^(−Ω_n). This sequence will converge to Ω as n approaches infinity. This is because Ω is an attractive fixed point of the function e^(−x). It is much more efficient to use the iteration Ω_(n+1) = (1 + Ω_n)/(1 + e^(Ω_n)), because the function f(x) = (1 + x)/(1 + e^x), in addition to having the same fixed point, also has a derivative that vanishes there. This guarantees quadratic convergence; that is, the number of correct digits is roughly doubled with each iteration. Using Halley's method, Ω can be approximated with cubic convergence (the number of correct digits is roughly tripled with each iteration): (see also ). Integral representations An identity due to Victor Adamchik is given by the relationship Other relations due to Mező and Kalugin-Jeffrey-Corless are: The latter two identities can be extended to other values of the function (see also ). Transcendence The constant Ω is transcendental. This can be seen as a direct consequence of the Lindemann–Weierstrass theorem. For a contradiction, suppose that Ω is algebraic. By the theorem, e^(−Ω) would then be transcendental, but e^(−Ω) = Ω, which is a contradiction. Therefore, it must be transcendental." https://en.wikipedia.org/wiki/Call%20setup,"In telecommunication, call setup is the process of establishing a virtual circuit across a telecommunications network. Call setup is typically accomplished using a signaling protocol. The term call set-up time has the following meanings: The overall length of time required to establish a circuit-switched call between users. For data communication, the overall length of time required to establish a circuit-switched call between terminals; i.e., the time from the initiation of a call request to the beginning of the call message. Note: Call set-up time is the summation of: (a) call request time—the time from initiation of a calling signal to the delivery to the caller of a proceed-to-select signal; (b) selection time—the time from the delivery of the proceed-to-select signal until all the selection signals have been transmitted; and (c) post selection time—the time from the end of the transmission of the selection signals until the delivery of the call-connected signal to the originating terminal. Success rate In telecommunications, the call setup success rate (CSSR) is the fraction of the attempts to make a call that result in a connection to the dialled number (due to various reasons not all call attempts end with a connection to the dialled number). This fraction is usually measured as a percentage of all call attempts made. In telecommunications a call attempt invokes a call setup procedure, which, if successful, results in a connected call. A call setup procedure may fail due to a number of technical reasons. 
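The two fixed-point iterations for the omega constant described above are easy to compare numerically; the accelerated form converges quadratically because its derivative vanishes at the fixed point. A short sketch (the iteration counts and starting guess are arbitrary choices):

```python
import math

def omega_simple(n_iter: int = 60, x: float = 0.5) -> float:
    """Naive fixed-point iteration x_{k+1} = exp(-x_k), linear convergence."""
    for _ in range(n_iter):
        x = math.exp(-x)
    return x

def omega_fast(n_iter: int = 6, x: float = 0.5) -> float:
    """Accelerated iteration x_{k+1} = (1 + x_k) / (1 + exp(x_k)).

    Its derivative vanishes at the fixed point, giving quadratic convergence.
    """
    for _ in range(n_iter):
        x = (1.0 + x) / (1.0 + math.exp(x))
    return x

for omega in (omega_simple(), omega_fast()):
    # Residual of the defining identity Omega * e^Omega = 1.
    print(omega, abs(omega * math.exp(omega) - 1.0))
```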
Such calls are classified as failed call attempts. In many practical cases, this definition needs to be further expanded with a number of detailed specifications describing which calls exactly are counted as successfully set up and which not. This is determined to a great degree by the stage of the call setup procedure at which a call is counted as connected. In modern communications systems, such as cellular (mobile) networks, the call setup procedu" https://en.wikipedia.org/wiki/List%20of%20prime%20numbers,"This is a list of articles about prime numbers. A prime number (or prime) is a natural number greater than 1 that has no positive divisors other than 1 and itself. By Euclid's theorem, there are an infinite number of prime numbers. Subsets of the prime numbers may be generated with various formulas for primes. The first 1000 primes are listed below, followed by lists of notable types of prime numbers in alphabetical order, giving their respective first terms. 1 is neither prime nor composite. The first 1000 prime numbers The following table lists the first 1000 primes, with 20 columns of consecutive primes in each of the 50 rows. . The Goldbach conjecture verification project reports that it has computed all primes below 4×10. That means 95,676,260,903,887,607 primes (nearly 10), but they were not stored. There are known formulae to evaluate the prime-counting function (the number of primes below a given value) faster than computing the primes. This has been used to compute that there are 1,925,320,391,606,803,968,923 primes (roughly 2) below 10. A different computation found that there are 18,435,599,767,349,200,867,866 primes (roughly 2) below 10, if the Riemann hypothesis is true. Lists of primes by type Below are listed the first prime numbers of many named forms and types. More details are in the article for the name. n is a natural number (including 0) in the definitions. Balanced primes Primes with equal-sized prime gaps above and below them, so that they are equal to the arithmetic mean of the nearest primes above and below. 5, 53, 157, 173, 211, 257, 263, 373, 563, 593, 607, 653, 733, 947, 977, 1103, 1123, 1187, 1223, 1367, 1511, 1747, 1753, 1907, 2287, 2417, 2677, 2903, 2963, 3307, 3313, 3637, 3733, 4013, 4409, 4457, 4597, 4657, 4691, 4993, 5107, 5113, 5303, 5387, 5393 (). Bell primes Primes that are the number of partitions of a set with n members. 2, 5, 877, 27644437, 35742549198872617291353508656626642567, 3593340859686228310419601885980" https://en.wikipedia.org/wiki/%CE%94P,"ΔP (Delta P) is a mathematical term symbolizing a change (Δ) in pressure (P). Uses Young–Laplace equation Darcy–Weisbach equation Given that the head loss hf expresses the pressure loss Δp as the height of a column of fluid, where ρ is the density of the fluid. The Darcy–Weisbach equation can also be written in terms of pressure loss: Lung compliance In general, compliance is defined by the change in volume (ΔV) versus the associated change in pressure (ΔP), or ΔV/ΔP: During mechanical ventilation, compliance is influenced by three main physiologic factors: Lung compliance Chest wall compliance Airway resistance Lung compliance is influenced by a variety of primary abnormalities of lung parenchyma, both chronic and acute. Airway resistance is typically increased by bronchospasm and airway secretions. Chest wall compliance can be decreased by fixed abnormalities (e.g. kyphoscoliosis, morbid obesity) or more variable problems driven by patient agitation while intubated. 
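A balanced prime, as defined in the list-of-primes passage earlier in this section, is simply the middle term of three consecutive primes in arithmetic progression. A minimal sketch (plain trial-division primality test, no external libraries) regenerates the start of the list quoted there:

# Sketch: generating balanced primes, i.e. primes equal to the arithmetic mean
# of the nearest primes directly below and above them.

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def balanced_primes(limit):
    primes = [p for p in range(2, limit) if is_prime(p)]
    out = []
    for prev_p, p, next_p in zip(primes, primes[1:], primes[2:]):
        if 2 * p == prev_p + next_p:
            out.append(p)
    return out

print(balanced_primes(1000))
# [5, 53, 157, 173, 211, 257, 263, 373, 563, 593, 607, 653, 733, 947, 977]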
Calculating compliance on minute volume (VE: ΔV is always defined by tidal volume (VT), but ΔP is different for the measurement of dynamic vs. static compliance. Dynamic compliance (Cdyn) where PIP = peak inspiratory pressure (the maximum pressure during inspiration), and PEEP = positive end expiratory pressure. Alterations in airway resistance, lung compliance and chest wall compliance influence Cdyn. Static compliance (Cstat) where Pplat = plateau pressure. Pplat is measured at the end of inhalation and prior to exhalation using an inspiratory hold maneuver. During this maneuver, airflow is transiently (~0.5 sec) discontinued, which eliminates the effects of airway resistance. Pplat is never > PIP and is typically < 3-5 cmH2O lower than PIP when airway resistance is normal. See also Pressure measurement Pressure drop Head loss" https://en.wikipedia.org/wiki/List%20of%20graph%20theory%20topics,"This is a list of graph theory topics, by Wikipedia page. See glossary of graph theory terms for basic terminology Examples and types of graphs Graph coloring Paths and cycles Trees Terminology Node Child node Parent node Leaf node Root node Root (graph theory) Operations Tree structure Tree data structure Cayley's formula Kőnig's lemma Tree (set theory) (need not be a tree in the graph-theory sense, because there may not be a unique path between two vertices) Tree (descriptive set theory) Euler tour technique Graph limits Graphon Graphs in logic Conceptual graph Entitative graph Existential graph Laws of Form Logical graph Mazes and labyrinths Labyrinth Maze Maze generation algorithm Algorithms Ant colony algorithm Breadth-first search Depth-first search Depth-limited search FKT algorithm Flood fill Graph exploration algorithm Matching (graph theory) Max flow min cut theorem Maximum-cardinality search Shortest path Dijkstra's algorithm Bellman–Ford algorithm A* algorithm Floyd–Warshall algorithm Topological sorting Pre-topological order Other topics Networks, network theory See list of network theory topics Hypergraphs Helly family Intersection (Line) Graphs of hypergraphs Graph theory Graph theory Graph theory" https://en.wikipedia.org/wiki/Mathematical%20maturity,"In mathematics, mathematical maturity is an informal term often used to refer to the quality of having a general understanding and mastery of the way mathematicians operate and communicate. It pertains to a mixture of mathematical experience and insight that cannot be directly taught. Instead, it comes from repeated exposure to mathematical concepts. It is a gauge of mathematics students' erudition in mathematical structures and methods, and can overlap with other related concepts such as mathematical intuition and mathematical competence. The topic is occasionally also addressed in literature in its own right. Definitions Mathematical maturity has been defined in several different ways by various authors, and is often tied to other related concepts such as comfort and competence with mathematics, mathematical intuition and mathematical beliefs. One definition has been given as follows: A broader list of characteristics of mathematical maturity has been given as follows: Finally, mathematical maturity has also been defined as an ability to do the following: It is sometimes said that the development of mathematical maturity requires a deep reflection on the subject matter for a prolonged period of time, along with a guiding spirit which encourages exploration. 
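The dynamic and static compliance calculations described in the ΔP passage above can be sketched as follows. The explicit formulas Cdyn = VT / (PIP − PEEP) and Cstat = VT / (Pplat − PEEP) are assumed from the variable definitions given there (the equations themselves are not reproduced in this extract), and the numbers are purely illustrative.

# Sketch: compliance = delta-V / delta-P, with delta-P chosen differently for
# dynamic vs. static compliance. Formulas assumed from the variable definitions
# in the passage; the ventilator readings below are invented.

def dynamic_compliance(vt_ml, pip_cmh2o, peep_cmh2o):
    """Cdyn = VT / (PIP - PEEP), in mL per cmH2O."""
    return vt_ml / (pip_cmh2o - peep_cmh2o)

def static_compliance(vt_ml, pplat_cmh2o, peep_cmh2o):
    """Cstat = VT / (Pplat - PEEP), in mL per cmH2O."""
    return vt_ml / (pplat_cmh2o - peep_cmh2o)

vt, pip, pplat, peep = 500.0, 30.0, 25.0, 5.0   # hypothetical readings
print(dynamic_compliance(vt, pip, peep))    # 20.0 mL/cmH2O
print(static_compliance(vt, pplat, peep))   # 25.0 mL/cmH2O

Because Pplat is never greater than PIP, static compliance computed this way is never smaller than dynamic compliance, consistent with the description above.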
Progression Mathematician Terence Tao has proposed a three-stage model of mathematics education that can be interpreted as a general framework of mathematical maturity progression. The stages are summarized in the following table: See also Logical intuition Four stages of competence" https://en.wikipedia.org/wiki/Injury,"Injury is physiological damage to the living tissue of any organism, whether in humans, in other animals, or in plants. Injuries can be caused in many ways, such as mechanically with penetration by sharp objects such as teeth or with blunt objects, by heat or cold, or by venoms and biotoxins. Injury prompts an inflammatory response in many taxa of animals; this prompts wound healing. In both plants and animals, substances are often released to help to occlude the wound, limiting loss of fluids and the entry of pathogens such as bacteria. Many organisms secrete antimicrobial chemicals which limit wound infection; in addition, animals have a variety of immune responses for the same purpose. Both plants and animals have regrowth mechanisms which may result in complete or partial healing over the injury. Taxonomic range Animals Injury in animals is sometimes defined as mechanical damage to anatomical structure, but it has a wider connotation of physical damage with any cause, including drowning, burns, and poisoning. Such damage may result from attempted predation, territorial fights, falls, and abiotic factors. Injury prompts an inflammatory response in animals of many different phyla; this prompts coagulation of the blood or body fluid, followed by wound healing, which may be rapid, as in the cnidaria. Arthropods are able to repair injuries to the cuticle that forms their exoskeleton to some extent. Animals in several phyla, including annelids, arthropods, cnidaria, molluscs, nematodes, and vertebrates are able to produce antimicrobial peptides to fight off infection following an injury. Humans Injury in humans has been studied extensively for its importance in medicine. Much of medical practice including emergency medicine and pain management is dedicated to the treatment of injuries. The World Health Organization has developed a classification of injuries in humans by categories including mechanism, objects/substances producing injury, place of occurrence, " https://en.wikipedia.org/wiki/Mining%20software%20repositories,"Within software engineering, the mining software repositories (MSR) field analyzes the rich data available in software repositories, such as version control repositories, mailing list archives, bug tracking systems, issue tracking systems, etc. to uncover interesting and actionable information about software systems, projects and software engineering. Definition Herzig and Zeller define ”mining software archives” as a process to ”obtain lots of initial evidence” by extracting data from software repositories. Further they define ”data sources” as product-based artifacts like source code, requirement artefacts or version archives and claim that these sources are unbiased, but noisy and incomplete. Techniques Coupled Change Analysis The idea in coupled change analysis is that developers change code entities (e.g. files) together frequently for fixing defects or introducing new features. These couplings between the entities are often not made explicit in the code or other documents. Especially developers new on the project do not know which entities need to be changed together. 
Coupled change analysis aims to extract the coupling out of the version control system for a project. By the commits and the timing of changes, we might be able to identify which entities frequently change together. This information could then be presented to developers about to change one of the entities to support them in their further changes. Commit Analysis There are many different kinds of commits in version control systems, e.g. bug fix commits, new feature commits, documentation commits, etc. To take data-driven decisions based on past commits, one needs to select subsets of commits that meet a given criterion. That can be done based on the commit message. Documentation generation It is possible to generate useful documentation from mining software repositories. For instance, Jadeite computes usage statistics and helps newcomers to quickly identify commonly used classes. Data" https://en.wikipedia.org/wiki/Food%20grading,"Food grading involves the inspection, assessment and sorting of various foods regarding quality, freshness, legal conformity and market value. Food grading is often done by hand, in which foods are assessed and sorted. Machinery is also used to grade foods, and may involve sorting products by size, shape and quality. For example, machinery can be used to remove spoiled food from fresh product. By food type Beef Beef grading in the United States is performed by the United States Department of Agriculture's (USDA) Agricultural and Marketing Service. There are eight beef quality grades, with U.S. Prime being the highest grade and U.S. Canner being the lowest grade. Beef grading is a complex process. Beer In beer grading, the letter ""X"" is used on some beers, and was traditionally a mark of beer strength, with the more Xs the greater the strength. Some sources suggest that the origin of the mark was in the breweries of medieval monasteries Another plausible explanation is contained in a treatise entitled ""The Art of Brewing"" published in London in 1829. It says; ""The duties on ale and beer, which were first imposed in 1643... at a certain period, in distinguishing between small beer and strong, all ale or beer, sold at or above ten shillings per barrel, was reckoned to be strong ''and was, therefore, subjected to a higher duty. The cask which contained this strong beer was then first marked with an X signifying ten; and hence the present quack-like denominations of XX (double X) and XXX (treble X) on the casks and accounts of the strong-ale brewers"". In mid-19th century England, the use of ""X"" and other letters had evolved into a standardised grading system for the strength of beer. Today, it is used as a trade mark by a number of brewers in the United Kingdom, the Commonwealth and the United States. European Bitterness Units scale, often abbreviated as EBU, is a scale for measuring the perceived bitterness of beer, with lower values being generally ""less bitter""" https://en.wikipedia.org/wiki/Impulse%20generator,"An impulse generator is an electrical apparatus which produces very short high-voltage or high-current surges. Such devices can be classified into two types: impulse voltage generators and impulse current generators. High impulse voltages are used to test the strength of electric power equipment against lightning and switching surges. Also, steep-front impulse voltages are sometimes used in nuclear physics experiments. 
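The co-change mining idea described in the preceding passage reduces, in its simplest form, to counting how often pairs of files appear in the same commit. Below is a minimal sketch over a hypothetical commit list; the file names and the coupling threshold are made up for illustration, and a real tool would also weight by commit timing and overall change frequency.

from collections import Counter
from itertools import combinations

# Sketch: naive coupled-change analysis. Each commit is represented by the set
# of files it touched; we count how often each pair of files changes together.
# The commit data below is hypothetical.

commits = [
    {"parser.c", "parser.h"},
    {"parser.c", "parser.h", "lexer.c"},
    {"lexer.c", "lexer.h"},
    {"parser.c", "parser.h", "docs/grammar.md"},
]

pair_counts = Counter()
for files in commits:
    for a, b in combinations(sorted(files), 2):
        pair_counts[(a, b)] += 1

# Pairs that changed together at least twice are flagged as coupled.
for (a, b), n in pair_counts.most_common():
    if n >= 2:
        print(f"{a} <-> {b}: changed together in {n} commits")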
High impulse currents are needed not only for tests on equipment such as lightning arresters and fuses but also for many other technical applications such as lasers, thermonuclear fusion, and plasma devices. Jedlik's tubular voltage generator In 1863 Hungarian physicist Ányos Jedlik discovered the possibility of voltage multiplication and in 1868 demonstrated it with a ""tubular voltage generator"", which was successfully displayed at the Vienna World Exposition in 1873. It was an early form of the impulse generators now applied in nuclear research. The jury of the World Exhibition of 1873 in Vienna awarded his voltage multiplying condenser of cascade connection with prize ""For Development"". Through this condenser, Jedlik framed the principle of surge generator of cascaded connection. (The Cascade connection was another important invention of Ányos Jedlik.) Marx generator One form is the Marx generator, named after Erwin Otto Marx, who first proposed it in 1923. This consists of multiple capacitors that are first charged in parallel through charging resistors as by a high-voltage, direct-current source and then connected in series and discharged through a test object by a simultaneous spark-over of the spark gaps. The impulse current generator comprises many capacitors that are also charged in parallel by a high-voltage, low-current, direct-current source, but it is discharged in parallel through resistances, inductances, and a test object by a spark gap. See also Pulsed power Pulse-forming network Marx generator Cockcroft–Walton generator " https://en.wikipedia.org/wiki/Lanstar,"LANStar (Lanstar) was a 2.56 Mbit/s twisted-pair local area network created by Northern Telecom in the mid '80s. Because NT's PBX systems already owned a building's twisted pair plant (for voice), it made sense to use the same wiring for data as well. LANStar was originally to be a component of NT's PTE (Packet Transport Equipment) product, which was a sort of minicomputer arrangement with dumb (VT220) terminals on the desktop and the CPUs in an intelligent rack (the PTE) in the PBX room (alongside the PBX). The PTE was to have several basic office automation apps: word processing, database, etc. Just as NT was doing Beta testing of the PTE, PCs and PC networking took off, effectively killing the PTE before it completed Beta. Given the investment already sunk into the product, NT attempted to repackage the PTE as a small (dorm-room-refrigerator sized) cabinet (the PTE-S, 'S' for 'small') containing only LANStar controllers and supporting up to 112 nodes. LANStar had cards for the PC/XT, PC/AT and MacII and supported NetBIOS, Banyan, Novell, and AppleTalk. LANStar was discontinued in 1990. The name ""LANStar"" was coined by NT Product Marketing manager Paul Masters: he heard of AT&T's proposed StarLAN product and created a similar name in order to piggyback on all the publicity surrounding AT&T's product. See also Meridian Mail - The voicemail system that also used the PTE" https://en.wikipedia.org/wiki/Game%20without%20a%20value,"In the mathematical theory of games, in particular the study of zero-sum continuous games, not every game has a minimax value. This is the expected value to one of the players when both play a perfect strategy (which is to choose from a particular PDF). This article gives an example of a zero-sum game that has no value. It is due to Sion and Wolfe. 
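The Marx arrangement described above charges n capacitors in parallel and discharges them in series, so an idealized n-stage generator multiplies the charging voltage by n while dividing the erected capacitance by n. The sketch below works only with those idealized relations; component values are made up, and a real generator loses some voltage across its spark gaps and resistors.

# Sketch: idealized Marx generator, n capacitors charged in parallel to
# v_charge and then discharged in series. Values below are illustrative.

def ideal_marx(n_stages, v_charge_kv, c_stage_uf):
    v_out_kv = n_stages * v_charge_kv          # series stack of charged capacitors
    c_erected_uf = c_stage_uf / n_stages       # capacitances combine in series
    energy_j = n_stages * 0.5 * (c_stage_uf * 1e-6) * (v_charge_kv * 1e3) ** 2
    return v_out_kv, c_erected_uf, energy_j

v_out, c_erected, energy = ideal_marx(n_stages=10, v_charge_kv=100, c_stage_uf=1.0)
print(v_out)      # 1000 kV ideal impulse voltage
print(c_erected)  # 0.1 uF erected capacitance
print(energy)     # 50000 J stored energy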
Zero-sum games with a finite number of pure strategies are known to have a minimax value (originally proved by John von Neumann) but this is not necessarily the case if the game has an infinite set of strategies. There follows a simple example of a game with no minimax value. The existence of such zero-sum games is interesting because many of the results of game theory become inapplicable if there is no minimax value. The game Players I and II choose numbers and respectively, between 0 and 1. The payoff to player I is That is, after the choices are made, player II pays to player I (so the game is zero-sum). If the pair is interpreted as a point on the unit square, the figure shows the payoff to player I. Player I may adopt a mixed strategy, choosing a number according to a probability density function (pdf) , and similarly player II chooses from a pdf . Player I seeks to maximize the payoff , player II to minimize the payoff, and each player is aware of the other's objective. Game value Sion and Wolfe show that but These are the maximal and minimal expectations of the game's value of player I and II respectively. The and respectively take the supremum and infimum over pdf's on the unit interval (actually Borel probability measures). These represent player I and player II's (mixed) strategies. Thus, player I can assure himself of a payoff of at least 3/7 if he knows player II's strategy, and player II can hold the payoff down to 1/3 if he knows player I's strategy. There is no epsilon equilibrium for sufficiently small , specifically, if . Dasgupta and Maskin assert that the game values are " https://en.wikipedia.org/wiki/Packaging%20gas,"A packaging gas is used to pack sensitive materials such as food into a modified atmosphere environment. The gas used is usually inert, or of a nature that protects the integrity of the packaged goods, inhibiting unwanted chemical reactions such as food spoilage or oxidation. Some may also serve as a propellant for aerosol sprays like cans of whipped cream. For packaging food, the use of various gases is approved by regulatory organisations. Their E numbers are included in the following lists in parentheses. Inert gases These gas types do not cause a chemical change to the substance that they protect. argon (E938), used for canned products helium (E939), used for canned products nitrogen (E941), also propellant carbon dioxide (E290), also propellant Propellant gases Specific kinds of packaging gases are aerosol propellants. These process and assist the ejection of the product from its container. chlorofluorocarbons known as CFC (E940 and E945), now rarely used because of the damage that they do to the ozone layer: dichlorodifluoromethane (E940) chloropentafluoroethane (E945) nitrous oxide (E942), used for aerosol whipped cream canisters (see Nitrous oxide: Aerosol propellant) octafluorocyclobutane (E946) Reactive gases These must be used with caution as they may have adverse effects when exposed to certain chemicals. They will cause oxidisation or contamination to certain types of materials. oxygen (E948), used e.g. for packaging of vegetables hydrogen (E949) Volatile gases Hydrocarbon gases approved for use with food need to be used with extreme caution as they are highly combustible, when combined with oxygen they burn very rapidly and may cause explosions in confined spaces. Special precautions must be taken when transporting these gases. 
butane (E943a) isobutane (E943b) propane (E944) See also Shielding gas" https://en.wikipedia.org/wiki/Power%2C%20root-power%2C%20and%20field%20quantities,"A power quantity is a power or a quantity directly proportional to power, e.g., energy density, acoustic intensity, and luminous intensity. Energy quantities may also be labelled as power quantities in this context. A root-power quantity is a quantity such as voltage, current, sound pressure, electric field strength, speed, or charge density, the square of which, in linear systems, is proportional to power. The term root-power quantity refers to the square root that relates these quantities to power. The term was introduced in ; it replaces and deprecates the term field quantity. Implications It is essential to know which category a measurement belongs to when using decibels (dB) for comparing the levels of such quantities. A change of one bel in the level corresponds to a 10× change in power, so when comparing power quantities x and y, the difference is defined to be 10×log10(y/x) decibel. With root-power quantities, however the difference is defined as 20×log10(y/x) dB. In the analysis of signals and systems using sinusoids, field quantities and root-power quantities may be complex-valued, as in the propagation constant. ""Root-power quantity"" vs. ""field quantity"" In justifying the deprecation of the term ""field quantity"" and instead using ""root-power quantity"" in the context of levels, ISO 80000 draws attention to the conflicting use of the former term to mean a quantity that depends on the position, which in physics is called a field. Such a field is often called a field quantity in the literature, but is called a field here for clarity. Several types of field (such as the electromagnetic field) meet the definition of a root-power quantity, whereas others (such as the Poynting vector and temperature) do not. Conversely, not every root-power quantity is a field (such as the voltage on a loudspeaker). See also Level (logarithmic quantity) Fresnel reflection field and power equations Sound level, defined for each of several quantities associated with " https://en.wikipedia.org/wiki/Thermal%20conductance%20and%20resistance,"In heat transfer, thermal engineering, and thermodynamics, thermal conductance and thermal resistance are fundamental concepts that describe the ability of materials or systems to conduct heat and the opposition they offer to the heat current. The ability to manipulate these properties allows engineers to control temperature gradient, prevent thermal shock, and maximize the efficiency of thermal systems. Furthermore, these principles find applications in a multitude of fields, including materials science, mechanical engineering, electronics, and energy management. Knowledge of these principles is crucial in various scientific, engineering, and everyday applications, from designing efficient temperature control, thermal insulation, and thermal management in industrial processes to optimizing the performance of electronic devices. Thermal conductance (C) measures the ability of a material or system to conduct heat. It provides insights into the ease with which heat can pass through a particular system. It is measured in units of watts per kelvin (W/K). It is essential in the design of heat exchangers, thermally efficient materials, and various engineering systems where the controlled movement of heat is vital. Conversely, thermal resistance (R) measures the opposition to the heat current in a material or system. 
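The factor-of-two difference between power and root-power (field) quantities when expressing level differences in decibels, noted in the power and root-power passage above, is easy to demonstrate numerically:

import math

# Sketch: level differences in decibels for power vs. root-power quantities.

def db_power(x, y):
    """Difference in dB between two power quantities."""
    return 10.0 * math.log10(y / x)

def db_root_power(x, y):
    """Difference in dB between two root-power (field) quantities, e.g. voltages."""
    return 20.0 * math.log10(y / x)

print(db_power(1.0, 10.0))        # 10 dB: a 10x power ratio is one bel
print(db_root_power(1.0, 10.0))   # 20 dB: a 10x voltage ratio implies a 100x power ratio
print(db_power(1.0, 100.0))       # 20 dB, consistent with the line above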
It is measured in units of kelvins per watt (K/W) and indicates how much temperature difference (in kelvins) is required to transfer a unit of heat current (in watts) through the material or object. It is essential to optimize the building insulation, evaluate the efficiency of electronic devices, and enhance the performance of heat sinks in various applications. Objects made of insulators like rubber tend to have very high resistance and low conductance, while objects made of conductors like metals tend to have very low resistance and high conductance. This relationship is quantified by resistivity or conductivity. However, the nature of a material is no" https://en.wikipedia.org/wiki/Apotome%20%28mathematics%29,"In the historical study of mathematics, an apotome is a line segment formed from a longer line segment by breaking it into two parts, one of which is commensurable only in power to the whole; the other part is the apotome. In this definition, two line segments are said to be ""commensurable only in power"" when the ratio of their lengths is an irrational number but the ratio of their squared lengths is rational. Translated into modern algebraic language, an apotome can be interpreted as a quadratic irrational number formed by subtracting one square root of a rational number from another. This concept of the apotome appears in Euclid's Elements beginning in book X, where Euclid defines two special kinds of apotomes. In an apotome of the first kind, the whole is rational, while in an apotome of the second kind, the part subtracted from it is rational; both kinds of apotomes also satisfy an additional condition. Euclid Proposition XIII.6 states that, if a rational line segment is split into two pieces in the golden ratio, then both pieces may be represented as apotomes." https://en.wikipedia.org/wiki/STREAMS,"In computer networking, STREAMS is the native framework in Unix System V for implementing character device drivers, network protocols, and inter-process communication. In this framework, a stream is a chain of coroutines that pass messages between a program and a device driver (or between a pair of programs). STREAMS originated in Version 8 Research Unix, as Streams (not capitalized). STREAMS's design is a modular architecture for implementing full-duplex I/O between kernel and device drivers. Its most frequent uses have been in developing terminal I/O (line discipline) and networking subsystems. In System V Release 4, the entire terminal interface was reimplemented using STREAMS. An important concept in STREAMS is the ability to push drivers custom code modules which can modify the functionality of a network interface or other device together to form a stack. Several of these drivers can be chained together in order. History STREAMS was based on the Streams I/O subsystem introduced in the Eighth Edition Research Unix (V8) by Dennis Ritchie, where it was used for the terminal I/O subsystem and the Internet protocol suite. This version, not yet called STREAMS in capitals, fit the new functionality under the existing device I/O system calls (open, close, read, write, and ioctl), and its application was limited to terminal I/O and protocols providing pipe-like I/O semantics. This I/O system was ported to System V Release 3 by Robert Israel, Gil McGrath, Dave Olander, Her-Daw Che, and Maury Bach as part of a wider framework intended to support a variety of transport protocols, including TCP, ISO Class 4 transport, SNA LU 6.2, and the AT&T NPACK protocol (used in RFS). 
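A small sketch of the thermal conductance and resistance definitions given above (C in W/K, R in K/W). The reciprocal relationship C = 1/R is implied by those definitions rather than stated outright, so treat it as an assumption here; the numbers are illustrative only.

# Sketch: thermal resistance R (K/W) tells how many kelvins of temperature
# difference are needed to drive one watt of heat current; conductance C = 1/R.

def heat_current(delta_t_kelvin, r_thermal_k_per_w):
    """Heat current (W) through an object of thermal resistance R for a given dT."""
    return delta_t_kelvin / r_thermal_k_per_w

r_insulator = 20.0    # K/W, e.g. a rubber-like insulating part (high resistance)
r_conductor = 0.05    # K/W, e.g. a metal part (low resistance)

for r in (r_insulator, r_conductor):
    c = 1.0 / r       # thermal conductance in W/K
    q = heat_current(delta_t_kelvin=40.0, r_thermal_k_per_w=r)
    print(f"R = {r} K/W  ->  C = {c} W/K, heat current at dT = 40 K: {q} W")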
It was first released with the Network Support Utilities (NSU) package of UNIX System V Release 3. This port added the putmsg, getmsg, and poll system calls, which are nearly equivalent in purpose to the send, recv, and select calls from Berkeley sockets. The putmsg and getmsg system calls were orig" https://en.wikipedia.org/wiki/Ruler,"A ruler, sometimes called a rule, scale or a line gauge, is an instrument used to make length measurements, whereby a user estimates a length by reading from a series of markings called ""rules"" along an edge of the device. Commonly the instrument is rigid and the edge itself is a straightedge (""ruled straightedge""), which additionally allows one to draw straight lines. Some rulers, such as cloth or paper tape measures, are non-rigid. Specialty rulers exist that have flexible edges that retain a chosen shape; these find use in sewing, arts, and crafts. Rulers have been used since ancient times. They are commonly made from metal, wood, fabric, paper, and plastic. They are important tools in the design and construction of buildings. Their ability to quickly and easily measure lengths makes them important in the textile industry and in the retail trade, where lengths of string, fabric, and paper goods can be cut to size. Children learn the basic use of rulers at the elementary school level, and they are often part of a student's school supplies. At the high school level rulers are often used as straightedges for geometric constructions in Euclidean geometry. Rulers are ubiquitous in the engineering and construction industries, often in the form of a tape measure, and are used for making and reading technical drawings. Since much technical work is now done on computer, many software programs implement virtual rulers to help the user estimate virtual distances. Variants Rulers have long been made from different materials and in multiple sizes. Historically they were mainly wooden; but plastics have also been used since they were invented; they can be molded with length markings instead of being scribed. Metal is used for more durable rulers for use in the workshop; sometimes a metal edge is embedded into a wooden desk ruler to preserve the edge when used for straight-line cutting. in length is useful for a ruler to be kept on a desk to help in drawing. Shorter rulers " https://en.wikipedia.org/wiki/Particular%20values%20of%20the%20Riemann%20zeta%20function,"In mathematics, the Riemann zeta function is a function in complex analysis, which is also important in number theory. It is often denoted and is named after the mathematician Bernhard Riemann. When the argument is a real number greater than one, the zeta function satisfies the equation It can therefore provide the sum of various convergent infinite series, such as Explicit or numerically efficient formulae exist for at integer arguments, all of which have real values, including this example. This article lists these formulae, together with tables of values. It also includes derivatives and some series composed of the zeta function at integer arguments. The same equation in above also holds when is a complex number whose real part is greater than one, ensuring that the infinite sum still converges. The zeta function can then be extended to the whole of the complex plane by analytic continuation, except for a simple pole at . The complex derivative exists in this more general region, making the zeta function a meromorphic function. 
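For real arguments greater than one, the zeta function described above is given by a convergent series over n^(−s); that Dirichlet-series form is standard and is stated here as an assumption, since the passage's own equation is not reproduced in this extract. A quick sketch compares slow partial sums against the known closed-form value ζ(2) = π²/6:

import math

# Sketch: partial sums of zeta(s) = sum over n >= 1 of n**(-s), valid for
# real s > 1, compared against the known value zeta(2) = pi**2 / 6.

def zeta_partial(s, terms):
    return sum(n ** (-s) for n in range(1, terms + 1))

exact = math.pi ** 2 / 6          # approximately 1.6449340668...
for terms in (10, 1000, 100000):
    approx = zeta_partial(2.0, terms)
    print(f"{terms:>6} terms: {approx:.10f}   error {exact - approx:.2e}")

# The error shrinks roughly like 1/terms, so the plain series converges slowly,
# and it diverges altogether once the real part of s drops to 1 or below,
# which is why the analytic continuation discussed above is needed there.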
The above equation no longer applies for these extended values of , for which the corresponding summation would diverge. For example, the full zeta function exists at (and is therefore finite there), but the corresponding series would be whose partial sums would grow indefinitely large. The zeta function values listed below include function values at the negative even numbers (, ), for which and which make up the so-called trivial zeros. The Riemann zeta function article includes a colour plot illustrating how the function varies over a continuous rectangular region of the complex plane. The successful characterisation of its non-trivial zeros in the wider plane is important in number theory, because of the Riemann hypothesis. The Riemann zeta function at 0 and 1 At zero, one has At 1 there is a pole, so ζ(1) is not finite but the left and right limits are: Since it is a pole of first order, it has a complex residue Positiv" https://en.wikipedia.org/wiki/Routing%20domain,"In computer networking, a routing domain is a collection of networked systems that operate common routing protocols and are under the control of a single administration. For example, this might be a set of routers under the control of a single organization, some of them operating a corporate network, some others a branch office network, and the rest the data center network. A given autonomous system can contain multiple routing domains, or a set of routing domains can be coordinated without being an Internet-participating autonomous system." https://en.wikipedia.org/wiki/Mathematics%20and%20fiber%20arts,"Ideas from mathematics have been used as inspiration for fiber arts including quilt making, knitting, cross-stitch, crochet, embroidery and weaving. A wide range of mathematical concepts have been used as inspiration including topology, graph theory, number theory and algebra. Some techniques such as counted-thread embroidery are naturally geometrical; other kinds of textile provide a ready means for the colorful physical expression of mathematical concepts. Quilting The IEEE Spectrum has organized a number of competitions on quilt block design, and several books have been published on the subject. Notable quiltmakers include Diana Venters and Elaine Ellison, who have written a book on the subject Mathematical Quilts: No Sewing Required. Examples of mathematical ideas used in the book as the basis of a quilt include the golden rectangle, conic sections, Leonardo da Vinci's Claw, the Koch curve, the Clifford torus, San Gaku, Mascheroni's cardioid, Pythagorean triples, spidrons, and the six trigonometric functions. Knitting and crochet Knitted mathematical objects include the Platonic solids, Klein bottles and Boy's surface. The Lorenz manifold and the hyperbolic plane have been crafted using crochet. Knitted and crocheted tori have also been constructed depicting toroidal embeddings of the complete graph K7 and of the Heawood graph. The crocheting of hyperbolic planes has been popularized by the Institute For Figuring; a book by Daina Taimina on the subject, Crocheting Adventures with Hyperbolic Planes, won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year. Embroidery Embroidery techniques such as counted-thread embroidery including cross-stitch and some canvas work methods such as Bargello make use of the natural pixels of the weave, lending themselves to geometric designs. 
Weaving Ada Dietz (1882 – 1950) was an American weaver best known for her 1949 monograph Algebraic Expressions in Handwoven Textiles, which defines weaving patterns based on " https://en.wikipedia.org/wiki/Efficiency%20of%20food%20conversion,"The efficiency of conversion of ingested food to unit of body substance (ECI, also termed ""growth efficiency"") is an index measure of food fuel efficiency in animals. The ECI is a rough scale of how much of the food ingested is converted into growth in the animal's mass. It can be used to compare the growth efficiency as measured by the weight gain of different animals from consuming a given quantity of food relative to its size. The ECI effectively represents efficiencies of both digestion (approximate digestibility or AD) and metabolic efficiency, or how well digested food is converted to mass (efficiency of conversion of digested food or ECD). The formula for the efficiency of food fuel is thus: These concepts are also very closely related to the feed conversion ratio (FCR) and feed efficiency." https://en.wikipedia.org/wiki/Reconstruction%20from%20zero%20crossings,"The problem of reconstruction from zero crossings can be stated as: given the zero crossings of a continuous signal, is it possible to reconstruct the signal (to within a constant factor)? Worded differently, what are the conditions under which a signal can be reconstructed from its zero crossings? This problem has two parts. Firstly, proving that there is a unique reconstruction of the signal from the zero crossings, and secondly, how to actually go about reconstructing the signal. Though there have been quite a few attempts, no conclusive solution has yet been found. Ben Logan from Bell Labs wrote an article in 1977 in the Bell System Technical Journal giving some criteria under which unique reconstruction is possible. Though this has been a major step towards the solution, many people are dissatisfied with the type of condition that results from his article. According to Logan, a signal is uniquely reconstructible from its zero crossings if: The signal x(t) and its Hilbert transform xt have no zeros in common with each other. The frequency-domain representation of the signal is at most 1 octave long, in other words, it is bandpass-limited between some frequencies B and 2B. Further reading External links Signal processing" https://en.wikipedia.org/wiki/Radio%20spectrum%20scope,"The radio spectrum scope (also radio panoramic receiver, panoramic adapter, pan receiver, pan adapter, panadapter, panoramic radio spectroscope, panoramoscope, panalyzor and band scope) was invented by Marcel Wallace - and measures and shows the magnitude of an input signal versus frequency within one or more radio bands - e.g. shortwave bands. A spectrum scope is normally a lot cheaper than a spectrum analyzer, because the aim is not high quality frequency resolution - nor high quality signal strength measurements. The spectrum scope use can be to: find radio channels quickly of known and unknown signals when receiving. find radio amateurs activity quickly e.g. with the intent of communicating with them. Modern spectrum scopes, like the Elecraft P3, also plot signal frequencies and amplitudes over time, in a rolling format called a waterfall plot." https://en.wikipedia.org/wiki/Enterprise%20test%20software,"Enterprise test software (ETS) is a type of software that electronics and other manufacturers use to standardize product testing enterprise-wide, rather than simply in the test engineering department. 
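Locating the zero crossings that the reconstruction problem above starts from is itself straightforward; the hard part is the reconstruction, which is not attempted here. A minimal NumPy sketch that finds sign changes of a sampled signal and refines them by linear interpolation:

import numpy as np

# Sketch: find the zero crossings of a sampled signal by looking for sign
# changes between consecutive samples.

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)

sign = np.sign(x)
crossing_idx = np.where(np.diff(sign) != 0)[0]   # sample just before each crossing

# Linear interpolation between the two bracketing samples for a better estimate.
t0, t1 = t[crossing_idx], t[crossing_idx + 1]
x0, x1 = x[crossing_idx], x[crossing_idx + 1]
crossing_times = t0 - x0 * (t1 - t0) / (x1 - x0)

print(len(crossing_times), "zero crossings found")
print(np.round(crossing_times, 4))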
It is designed to integrate and synchronize test systems to other enterprise functions such as research and development (R&D), new product introduction (NPI), manufacturing, and supply chain, overseeing the collaborative test processes between engineers and managers in their respective departments. Details Like most enterprise software subcategories, ETS represents an evolution away from custom-made, in-house software development by original equipment manufacturers (OEM). It typically replaces a cumbersome, unsophisticated, test management infrastructure that manufacturers have to redesign for every new product launch. Some large companies, such as Alcatel, Cisco, and Nortel, develop ETS systems internally to standardize and accelerate their test engineering activities, while others such as Harris Corporation and Freescale Semiconductor choose commercial off-the-shelf ETS options for advantages that include test data management and report generation. This need results from the extensive characterization efforts associated with IC design, characterization, validation, and verification. ETS accelerates design improvements through test system management and version control. ETS supports test system development and can be interconnected with manufacturing execution systems (MES), enterprise resource planning (ERP), and product lifecycle management (PLM) software packages to eliminate double-data entry and enable real-time information sharing throughout all company departments. Enterprise-wide test applications ETS covers five major enterprise-wide test applications. Test and automation—By using ETS in conjunction with virtual instrumentation programming tools, design and test engineers avoid custom software programming unrelated to device characterization, and can ther" https://en.wikipedia.org/wiki/ZX8301,"The ZX8301 is an Uncommitted Logic Array (ULA) integrated circuit designed for the Sinclair QL microcomputer. Also known as the ""Master Chip"", it provides a Video Display Generator, the division of a 15 MHz crystal to provide the 7.5 MHz system clock, ZX8302 register address decoder, DRAM refresh and bus controller. The ZX8301 is IC22 on the QL motherboard. The Sinclair Research business model had always been to work toward a maximum performance to price ratio (as was evidenced by the keyboard mechanisms in the QL and earlier Sinclair models). Unfortunately, this focus on price and performance often resulted in cost cutting in the design and build of Sinclair's machines. One such cost driven decision (failing to use a hardware buffer integrated circuit (IC) between the IC pins and the external RGB monitor connection) caused the ZX8301 to quickly develop a reputation for being fragile and easy to damage, particularly if the monitor plug was inserted or removed while the QL was powered up. Such action resulted in damage to the video circuitry and almost always required replacement of the ZX8301. The ZX8301, when subsequently used in the International Computers Limited (ICL) One Per Desk featured hardware buffering, and the chip proved to be much more reliable in this configuration. See also Sinclair QL One Per Desk List of Sinclair QL clones" https://en.wikipedia.org/wiki/FeaturePak,"The FeaturePak standard defines a small form factor card for I/O expansion of embedded systems and other space-constrained computing applications. 
The cards are intended to be used for adding a wide range of capabilities, such as A/D, D/A, digital I/O, counter/timers, serial I/O, wired or wireless networking, image processing, GPS, etc. to their host systems. FeaturePak cards plug into edgecard sockets, parallel to the mainboard, similarly to how SO-DIMM memory modules install in laptop or desktop PCs. Socket Interface The FeaturePak socket consists of a 230-pin ""MXM"" connector, which provides all connections to the FeaturePak card, including the host interface, external I/O signals, and power. (Note, however, that the FeaturePak specification's use of the MXM connector differs from that of Nvidia's MXM specification.) Host interface connections include: PCI Express -- up to two PCI Express x1 lanes USB -- up to two USB 1.1 or 2.0 channels Serial—one logic-level UART interface SMBus JTAG PCI Express Reset Several auxiliary signals 3V and 5V power and ground Reserved lines (for future enhancements) The balance of the 230-pin FeaturePak socket is allocated to I/O, in two groups: Primary I/O—50 general purpose I/O lines, of which 34 pairs have enhanced isolation Secondary I/O—50 general purpose I/O lines The FeaturePak socket's MXM connector is claimed capable of 2.5 Gbit/s bandwidth on each pin, thereby supporting high-speed interfaces such as PCI Express, gigabit Ethernet, USB 2.0, among others. Enhanced I/O signal isolation within the Primary I/O group is accomplished by leaving alternate pins on the MXM connector interface unused. FeaturePak cards are powered by 3.3V and use standard 3.3V logic levels. The socket also provides a 5V input option, for cards that require the additional voltage to power auxiliary functions. Other than the provision of extra isolation for 34 signal pairs, there is no defined allocation of the signals within the Primary I/O and " https://en.wikipedia.org/wiki/Die%20shot,"A die shot or die photography is a photo or recording of the layout of an integrated circuit, showings its design with any packaging removed. A die shot can be compared with the cross-section of an (almost) two-dimensional computer chip, on which the design and construction of various tracks and components can be clearly seen. Due to the high complexity of modern computer chips, die-shots are often displayed colourfully, with various parts coloured using special lighting or even manually. Methods A die shot is a picture of a computer chip without its housing. There are two ways to capture such a chip ""naked"" on a photo; by either taking the photo before a chip is packaged or by removing its package. Avoiding the package Taking a photo before the chip ends up in a housing is typically preserved to the chip manufacturer, because the chip is packed fairly quickly in the production process to protect the sensitive very small parts against external influences. However, manufacturers may be reluctant to share die shots to prevent competitors from easily gaining insight into the technological progress and complexity of a chip. Removing the package Removing the housing from a chip is typically a chemical process - a chip is so small and the parts are so microscopic that opening a housing (also named delidding) with tools such as saws, sanders or dremels could damage the chip in such a way that a die shot is no longer or less useful. For example, sulphuric acid can be used to dissolve the plastic housing of a chip. This is not a harmless process - sulphuric acid can cause a lot of health damage to people, animals and the environment. 
Chips are immersed in a glass jar with sulphuric acid, after which the sulphuric acid is boiled for up to 45 minutes at a temperature of 337 degrees Celsius. Once the plastic housing has decayed, there may be other processes to remove leftover carbon, such as with a hot bath of concentrated nitric acid. After this, the contents of a chip a" https://en.wikipedia.org/wiki/Microbiology,"Microbiology () is the scientific study of microorganisms, those being of unicellular (single-celled), multicellular (consisting of complex cells), or acellular (lacking cells). Microbiology encompasses numerous sub-disciplines including virology, bacteriology, protistology, mycology, immunology, and parasitology. Eukaryotic microorganisms possess membrane-bound organelles and include fungi and protists, whereas prokaryotic organisms—all of which are microorganisms—are conventionally classified as lacking membrane-bound organelles and include Bacteria and Archaea. Microbiologists traditionally relied on culture, staining, and microscopy for the isolation and identification of microorganisms. However, less than 1% of the microorganisms present in common environments can be cultured in isolation using current means. With the emergence of biotechnology, Microbiologists currently rely on molecular biology tools such as DNA sequence-based identification, for example, the 16S rRNA gene sequence used for bacterial identification. Viruses have been variably classified as organisms, as they have been considered either as very simple microorganisms or very complex molecules. Prions, never considered as microorganisms, have been investigated by virologists, however, as the clinical effects traced to them were originally presumed due to chronic viral infections, virologists took a search—discovering ""infectious proteins"". The existence of microorganisms was predicted many centuries before they were first observed, for example by the Jains in India and by Marcus Terentius Varro in ancient Rome. The first recorded microscope observation was of the fruiting bodies of moulds, by Robert Hooke in 1666, but the Jesuit priest Athanasius Kircher was likely the first to see microbes, which he mentioned observing in milk and putrid material in 1658. Antonie van Leeuwenhoek is considered a father of microbiology as he observed and experimented with microscopic organisms in the 1670s, us" https://en.wikipedia.org/wiki/Pulse-density%20modulation,"Pulse-density modulation, or PDM, is a form of modulation used to represent an analog signal with a binary signal. In a PDM signal, specific amplitude values are not encoded into codewords of pulses of different weight as they would be in pulse-code modulation (PCM); rather, the relative density of the pulses corresponds to the analog signal's amplitude. The output of a 1-bit DAC is the same as the PDM encoding of the signal. Description In a pulse-density modulation bitstream, a 1 corresponds to a pulse of positive polarity (+A), and a 0 corresponds to a pulse of negative polarity (−A). Mathematically, this can be represented as where x[n] is the bipolar bitstream (either −A or +A), and a[n] is the corresponding binary bitstream (either 0 or 1). A run consisting of all 1s would correspond to the maximum (positive) amplitude value, all 0s would correspond to the minimum (negative) amplitude value, and alternating 1s and 0s would correspond to a zero amplitude value. The continuous amplitude waveform is recovered by low-pass filtering the bipolar PDM bitstream. 
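The mapping between the binary stream a[n] and the bipolar stream x[n] described in the pulse-density modulation passage above is simply x[n] = A·(2·a[n] − 1), and the analog waveform is approximated by low-pass filtering the result. A minimal NumPy sketch, with a crude moving-average filter standing in for a proper low-pass design; the short bitstream is made up, and the 100-bit sine example quoted below could be pasted in instead.

import numpy as np

# Sketch: PDM bookkeeping. a[n] is the binary bitstream (0 or 1) and x[n] the
# bipolar pulse stream (-A or +A); a moving-average filter then approximates
# the underlying analog amplitude.

A = 1.0
a = np.array([0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1], dtype=float)

x = A * (2.0 * a - 1.0)             # 1 -> +A, 0 -> -A

window = 4                           # stand-in for a proper low-pass filter
recovered = np.convolve(x, np.ones(window) / window, mode="same")

print(x)           # bipolar pulses
print(recovered)   # rises as the density of 1s increases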
Examples A single period of the trigonometric sine function, sampled 100 times and represented as a PDM bitstream, is: 0101011011110111111111111111111111011111101101101010100100100000010000000000000000000001000010010101 Two periods of a higher frequency sine wave would appear as: 0101101111111111111101101010010000000000000100010011011101111111111111011010100100000000000000100101 In pulse-density modulation, a high density of 1s occurs at the peaks of the sine wave, while a low density of 1s occurs at the troughs of the sine wave. Analog-to-digital conversion A PDM bitstream is encoded from an analog signal through the process of a 1-bit delta-sigma modulation. This process uses a one-bit quantizer that produces either a 1 or 0 depending on the amplitude of the analog signal. A 1 or 0 corresponds to a signal that is all the way up or all the way down, respectively. Because in the real world, ana" https://en.wikipedia.org/wiki/Tomographic%20reconstruction,"Tomographic reconstruction is a type of multidimensional inverse problem where the challenge is to yield an estimate of a specific system from a finite number of projections. The mathematical basis for tomographic imaging was laid down by Johann Radon. A notable example of applications is the reconstruction of computed tomography (CT) where cross-sectional images of patients are obtained in non-invasive manner. Recent developments have seen the Radon transform and its inverse used for tasks related to realistic object insertion required for testing and evaluating computed tomography use in airport security. This article applies in general to reconstruction methods for all kinds of tomography, but some of the terms and physical descriptions refer directly to the reconstruction of X-ray computed tomography. Introducing formula The projection of an object, resulting from the tomographic measurement process at a given angle , is made up of a set of line integrals (see Fig. 1). A set of many such projections under different angles organized in 2D is called sinogram (see Fig. 3). In X-ray CT, the line integral represents the total attenuation of the beam of x-rays as it travels in a straight line through the object. As mentioned above, the resulting image is a 2D (or 3D) model of the attenuation coefficient. That is, we wish to find the image . The simplest and easiest way to visualise the method of scanning is the system of parallel projection, as used in the first scanners. For this discussion we consider the data to be collected as a series of parallel rays, at position , across a projection at angle . This is repeated for various angles. Attenuation occurs exponentially in tissue: where is the attenuation coefficient as a function of position. Therefore, generally the total attenuation of a ray at position , on the projection at angle , is given by the line integral: Using the coordinate system of Figure 1, the value of onto which the point will be projected" https://en.wikipedia.org/wiki/ADvantage%20Framework,"ADvantage Framework is a model-based systems engineering software platform used for a range of activities including building and operating real-time simulation-based lab test facilities for hardware-in-the-loop simulation purposes. ADvantage includes several desktop applications and run-time services software. The ADvantage run-time services combine a Real-Time Operating System (RTOS) layered on top of commercial computer equipment such as single board computers or standard PCs. 
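The 1-bit delta-sigma encoding described above can be sketched with a first-order error-feedback loop. This is a generic first-order modulator written for illustration, not necessarily the exact structure the article has in mind, but it produces the qualitative behaviour described: dense 1s near the peaks of a sine input and dense 0s near the troughs.

import numpy as np

# Sketch: first-order delta-sigma modulation of a sampled signal into a PDM
# bitstream. The quantizer outputs +1 or -1, and the quantization error is fed
# back so that, on average, the pulse density tracks the input amplitude.

def pdm_encode(signal):
    bits = np.zeros(len(signal), dtype=int)
    integrator = 0.0
    feedback = 0.0
    for n, sample in enumerate(signal):
        integrator += sample - feedback
        out = 1.0 if integrator >= 0.0 else -1.0
        bits[n] = 1 if out > 0 else 0
        feedback = out
    return bits

t = np.arange(100)
signal = np.sin(2 * np.pi * t / 100)      # one period of a sine, sampled 100 times
bits = pdm_encode(signal)
print("".join(str(b) for b in bits))      # dense 1s near the peak, dense 0s near the trough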
The ADvantage tools include a development environment, a run-time environment, a plotting and analysis tool set, a fault insertion control application, and a vehicle network configuration and management tool that runs on a Windows or Linux desktop or laptop PC. The ADvantage user base is composed mainly of aerospace, defense, and naval/marine companies and academic researchers. Recent ADvantage real-time applications involved research and development of power systems applications including microgrid/smartgrid control and All-Electric Ship applications. History With roots in analog computer systems used for real-time applications where digital computers could not meet low-latency computational requirements, Applied Dynamics International moved from proprietary hardware architectures to commercial computing equipment over several decades. The Real-Time Station (RTS) was Applied Dynamics first entry into using Commercial Off The Shelf (COTS) computer hardware. Included with the sale of the RTS was the Applied Dynamics software package called ""SIMsystem"". In 2001, version 7.0 of SIMsystem was released. From 2001 to 2006 Applied Dynamics reworked their software and hardware products to make better use of COTS processors, computer boards, open source software technology and to better abstract software components from the hardware equipment. In 2006, Applied Dynamics announced a beta release of the ""ADvantage Framework"". The ADvantage brand provided an umbrella for the disparate software co" https://en.wikipedia.org/wiki/BioBlitz,"A BioBlitz, also written without capitals as bioblitz, is an intense period of biological surveying in an attempt to record all the living species within a designated area. Groups of scientists, naturalists, and volunteers conduct an intensive field study over a continuous time period (e.g., usually 24 hours). There is a public component to many BioBlitzes, with the goal of getting the public interested in biodiversity. To encourage more public participation, these BioBlitzes are often held in urban parks or nature reserves close to cities. Research into the best practices for a successful BioBlitz has found that collaboration with local natural history museums can improve public participation. As well, BioBlitzes have been shown to be a successful tool in teaching post-secondary students about biodiversity. Features A BioBlitz has different opportunities and benefits than a traditional, scientific field study. Some of these potential benefits include: Enjoyment – Instead of a highly structured and measured field survey, this sort of event has the atmosphere of a festival. The short time frame makes the search more exciting. Local – The concept of biodiversity tends to be associated with coral reefs or tropical rainforests. A BioBlitz offers the chance for people to visit a nearby setting and see that local parks have biodiversity and are important to conserve. Science – These one-day events gather basic taxonomic information on some groups of species. Meet the Scientists – A BioBlitz encourages people to meet working scientists and ask them questions. Identifying rare and unique species/groups – When volunteers and scientists work together, they are able to identify uncommon or special habitats for protection and management and, in some cases, rare species may be uncovered. 
Documenting species occurrence – BioBlitzes do not provide a complete species inventory for a site, but they provide a species list which makes a basis for a more complete inventory and will of" https://en.wikipedia.org/wiki/Proxy%20server,"In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It improves privacy, security, and performance in the process. Instead of connecting directly to a server that can fulfill a request for a resource, such as a file or web page, the client directs the request to the proxy server, which evaluates the request and performs the required network transactions. This serves as a method to simplify or control the complexity of the request, or provide additional benefits such as load balancing, privacy, or security. Proxies were devised to add structure and encapsulation to distributed systems. A proxy server thus functions on behalf of the client when requesting service, potentially masking the true origin of the request to the resource server. Types A proxy server may reside on the user's local computer, or at any point between the user's computer and destination servers on the Internet. A proxy server that passes unmodified requests and responses is usually called a gateway or sometimes a tunneling proxy. A forward proxy is an Internet-facing proxy used to retrieve data from a wide range of sources (in most cases, anywhere on the Internet). A reverse proxy is usually an internal-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption and caching. Open proxies An open proxy is a forwarding proxy server that is accessible by any Internet user. In 2008, network security expert Gordon Lyon estimated that ""hundreds of thousands"" of open proxies are operated on the Internet. Anonymous proxy: This server reveals its identity as a proxy server but does not disclose the originating IP address of the client. Although this type of server can be discovered easily, it can be beneficial for some users as it hides the originating" https://en.wikipedia.org/wiki/Food%20and%20biological%20process%20engineering,"Food and biological process engineering is a discipline concerned with applying principles of engineering to the fields of food production and distribution and biology. It is a broad field, with workers fulfilling a variety of roles ranging from design of food processing equipment to genetic modification of organisms. In some respects it is a combined field, drawing from the disciplines of food science and biological engineering to improve the earth's food supply. Creating, processing, and storing food to support the world's population requires extensive interdisciplinary knowledge. Notably, there are many biological engineering processes within food engineering to manipulate the multitude of organisms involved in our complex food chain. Food safety in particular requires biological study to understand the microorganisms involved and how they affect humans. However, other aspects of food engineering, such as food storage and processing, also require extensive biological knowledge of both the food and the microorganisms that inhabit it. 
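On the client side, directing requests through the kind of forward proxy described in the proxy server passage above is usually just configuration. A minimal sketch using Python's standard urllib; the proxy address is a placeholder, not a real open proxy, so the request is expected to fail unless you substitute a proxy you actually operate.

import urllib.request

# Sketch: sending an HTTP request through a forward proxy from the client side.
# The proxy address below is a placeholder.

proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.internal:3128",
    "https": "http://proxy.example.internal:3128",
})
opener = urllib.request.build_opener(proxy)

# The proxy, not the client, now performs the network transaction with the
# destination server and may apply caching, filtering, or access control.
try:
    with opener.open("http://example.com/", timeout=10) as resp:
        print(resp.status, resp.getheader("Content-Type"))
except OSError as exc:
    print("request failed (expected unless the placeholder proxy exists):", exc)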
This food microbiology and biology knowledge becomes biological engineering when systems and processes are created to maintain desirable food properties and microorganisms while providing mechanisms for eliminating the unfavorable or dangerous ones. Concepts Many different concepts are involved in the field of food and biological process engineering. Below are listed several major ones. Food science The science behind food and food production involves studying how food behaves and how it can be improved. Researchers analyze longevity and composition (i.e., ingredients, vitamins, minerals, etc.) of foods, as well as how to ensure food safety. Genetic engineering Modern food and biological process engineering relies heavily on applications of genetic manipulation. By understanding plants and animals on the molecular level, scientists are able to engineer them with specific goals in mind. Among the most notable applications of " https://en.wikipedia.org/wiki/Physical%20media,"Physical media refers to the physical materials that are used to store or transmit information in data communications. These physical media are generally physical objects made of materials such as copper or glass. They can be touched and felt, and have physical properties such as weight and color. For a number of years, copper and glass were the only media used in computer networking. The term physical media can also be used to describe data storage media like records, cassettes, VHS, LaserDiscs, CDs, DVDs, and Blu-rays, especially when compared with modern streaming media or content that has been downloaded from the Internet onto a hard drive or other storage device as files. Types of physical media Copper wire Copper wire is currently the most commonly used type of physical media due to the abundance of copper in the world, as well as its ability to conduct electrical power. Copper is also one of the cheaper metals which makes it more feasible to use. Most copper wires used in data communications today have eight strands of copper, organized in unshielded twisted pairs, or UTP. The wires are twisted around one another because it reduces electrical interference from outside sources. In addition to UTP, some wires use shielded twisted pairs (STP), which reduce electrical interference even further. The way copper wires are twisted around one another also has an effect on data rates. Category 3 cable (Cat3), has three to four twists per foot and can support speeds of 10 Mbit/s. Category 5 cable (Cat5) is newer and has three to four twists per inch, which results in a maximum data rate of 100 Mbit/s. In addition, there are category 5e (Cat5e) cables which can support speeds of up to 1,000 Mbit/s, and more recently, category 6 cables (Cat6), which support data rates of up to 10,000 Mbit/s (i.e., 10 Gbit/s). On average, copper wire costs around $1 per foot. Optical fiber Optical fiber is a thin and flexible piece of fiber made of glass or plastic. Unlike copper w" https://en.wikipedia.org/wiki/PI%20curve,"The PI (or photosynthesis-irradiance) curve is a graphical representation of the empirical relationship between solar irradiance and photosynthesis. A derivation of the Michaelis–Menten curve, it shows the generally positive correlation between light intensity and photosynthetic rate. It is a plot of photosynthetic rate as a function of light intensity (irradiance). 
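The PI curve entry above, and the history that follows it, point to Jassby and Platt's conclusion that the curve is best approximated by a hyperbolic tangent up to photoinhibition. Since the inline equations did not survive extraction, a commonly used form of that approximation is sketched below; the symbols follow the usual convention and are assumptions here rather than the entry's own notation.

```latex
% Jassby-Platt style hyperbolic-tangent PI curve (conventional symbols,
% assumed here): P^B is the chlorophyll-normalised photosynthetic rate,
% I the irradiance, \alpha the initial slope, P^B_max the assimilation maximum.
P^{B}(I) = P^{B}_{\max}\,\tanh\!\left(\frac{\alpha I}{P^{B}_{\max}}\right),
\qquad \text{valid up to the onset of photoinhibition.}
```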
Introduction The PI curve can be applied to terrestrial and marine reactions but is most commonly used to explain ocean-dwelling phytoplankton's photosynthetic response to changes in light intensity. Using this tool to approximate biological productivity is important because phytoplankton contribute ~50% of total global carbon fixation and are important suppliers to the marine food web. Within the scientific community, the curve is referred to variously as the PI, PE or light response curve; while individual researchers may have their own preferences, all of these names are accepted in the literature. Regardless of nomenclature, the photosynthetic rate in question can be described in terms of carbon (C) fixed per unit per time. Since individuals vary in size, it is also useful to normalise C concentration to chlorophyll a (an important photosynthetic pigment) to account for specific biomass. History As far back as 1905, marine researchers attempted to develop an equation to serve as the standard for establishing the relationship between solar irradiance and photosynthetic production. Several groups had relative success, but in 1976 a comparison study conducted by Alan Jassby and Trevor Platt, researchers at the Bedford Institute of Oceanography in Dartmouth, Nova Scotia, reached a conclusion that solidified the way in which a PI curve is developed. After evaluating the eight most-used equations, Jassby and Platt argued that the PI curve is best approximated by a hyperbolic tangent function, at least until photoinhibition is reached. Equations There are two simple derivations of the equatio" https://en.wikipedia.org/wiki/Spectral%20correlation%20density,"The spectral correlation density (SCD), sometimes also called the cyclic spectral density or spectral correlation function, is a function that describes the cross-spectral density of all pairs of frequency-shifted versions of a time series. The spectral correlation density applies only to cyclostationary processes, because stationary processes do not exhibit spectral correlation. Spectral correlation has been used in both signal detection and signal classification. The spectral correlation density is closely related to each of the bilinear time-frequency distributions, but is not considered one of Cohen's class of distributions. Definition The cyclic auto-correlation function of a time series is calculated by averaging the lag product x(t + τ/2) x*(t − τ/2) weighted by the complex exponential e^(−i2παt) over time, where (*) denotes complex conjugation and α is the cycle frequency. By the Wiener–Khinchin theorem, the spectral correlation density is then the Fourier transform of the cyclic auto-correlation function over the lag variable. Estimation methods The SCD is estimated in the digital domain with an arbitrary resolution in frequency and time. Because computing the spectral correlation is computationally expensive, several estimation methods are used in practice to estimate it efficiently for real-time analysis of signals. Some of the more popular ones are the FFT Accumulation Method (FAM) and the Strip-Spectral Correlation Algorithm. A fast-spectral-correlation (FSC) algorithm has recently been introduced. FFT accumulation method (FAM) This section describes the steps used to compute the SCD on a computer. With MATLAB or the NumPy library in Python, the steps are straightforward to implement. The FFT accumulation method (FAM) is a digital approach to calculating the SCD. Its input is a large block of IQ samples, and the output is a complex-valued image, the SCD. Let the signal, or block of IQ samples, be a complex-valued tensor, or multidimensional array, of a given shape, where each element is an IQ sample.
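The step-by-step FAM description that follows is truncated in this extract, so the snippet below is only an illustrative sketch of the usual channelize-then-correlate structure: segment the IQ block into overlapping frames, window and FFT each frame, correlate pairs of frequency channels, and take a second FFT across frame time. Frame length, overlap, window and normalisation are assumed conventional choices, not parameters taken from the entry.

```python
# Illustrative sketch of the FFT Accumulation Method (FAM) for estimating the
# spectral correlation density (SCD) of a block of IQ samples. Frame length,
# overlap, window and normalisation are assumptions, not the entry's values.
import numpy as np

def fam_scd(iq, n_channels=64, overlap=0.75):
    """Crude FAM estimate, indexed by (cycle-frequency offset, channel i, channel j)."""
    iq = np.asarray(iq, dtype=complex)
    hop = max(1, int(n_channels * (1.0 - overlap)))

    # Step 1: channelize - window and FFT each overlapping frame.
    window = np.hanning(n_channels)
    starts = range(0, len(iq) - n_channels + 1, hop)
    frames = np.array([np.fft.fftshift(np.fft.fft(window * iq[s:s + n_channels]))
                       for s in starts])               # shape: (n_frames, n_channels)

    # Step 2: correlate every pair of frequency channels across frame time.
    products = frames[:, :, None] * np.conj(frames[:, None, :])

    # Step 3: the "accumulation" - an FFT over frame time resolves the fine
    # cycle-frequency offset around the coarse offset f_i - f_j.
    return np.fft.fft(products, axis=0) / frames.shape[0]

# Toy example: a repeated-symbol (BPSK-like) signal shows cyclostationary structure.
rng = np.random.default_rng(0)
signal = rng.choice([-1.0, 1.0], size=1024).repeat(8) + 0.1 * rng.standard_normal(8192)
scd = fam_scd(signal)
print(scd.shape, np.abs(scd).max())
```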
The first step of the FAM is to break into a matrix of frames of size with overlap. where is the separation betwee" https://en.wikipedia.org/wiki/Equidimensionality,"In mathematics, especially in topology, equidimensionality is a property of a space that the local dimension is the same everywhere. Definition (topology) A topological space X is said to be equidimensional if for all points p in X, the dimension at p, that is dim p(X), is constant. The Euclidean space is an example of an equidimensional space. The disjoint union of two spaces X and Y (as topological spaces) of different dimension is an example of a non-equidimensional space. Definition (algebraic geometry) A scheme S is said to be equidimensional if every irreducible component has the same Krull dimension. For example, the affine scheme Spec k[x,y,z]/(xy,xz), which intuitively looks like a line intersecting a plane, is not equidimensional. Cohen–Macaulay ring An affine algebraic variety whose coordinate ring is a Cohen–Macaulay ring is equidimensional." https://en.wikipedia.org/wiki/List%20of%20search%20appliance%20vendors,"A search appliance is a type of computer which is attached to a corporate network for the purpose of indexing the content shared across that network in a way that is similar to a web search engine. It may be made accessible through a public web interface or restricted to users of that network. A search appliance is usually made up of: a gathering component, a standardizing component, a data storage area, a search component, a user interface component, and a management interface component. Vendors of search appliances Fabasoft Google InfoLibrarian Search Appliance™ Maxxcat Searchdaimon Thunderstone Former/defunct vendors of search appliances Black Tulip Systems Google Search Appliance Index Engines Munax Perfect Search Appliance" https://en.wikipedia.org/wiki/Eightfold%20way%20%28physics%29,"In physics, the eightfold way is an organizational scheme for a class of subatomic particles known as hadrons that led to the development of the quark model. Working alone, both the American physicist Murray Gell-Mann and the Israeli physicist Yuval Ne'eman proposed the idea in 1961. The name comes from Gell-Mann's (1961) paper and is an allusion to the Noble Eightfold Path of Buddhism. Background By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as atoms and light) along with a handful of unstable (i.e., they undergo radioactive decay) exotic particles needed to explain cosmic rays observations such as pions, muons and hypothesized neutrino. In addition, the discovery of the positron suggested there could be anti-particles for each of them. It was known a ""strong interaction"" must exist to overcome electrostatic repulsion in atomic nuclei. Not all particles are influenced by this strong force but those that are, are dubbed ""hadrons"", which are now further classified as mesons (middle mass) and baryons (heavy weight). But the discovery of the (neutral) kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way and in 1950 the lambda particle did the same thing for the baryon family. These particles decay much more slowly than they are produced, a hint that there are two different physical processes involved. This was first suggested by Abraham Pais in 1952. In 1953, M. 
Gell Mann and a collaboration in Japan, Tadao Nakano with Kazuhiko Nishijima, independently suggested a new conserved value now known as ""strangeness"" during their attempts to understand the growing collection of known particles. The trend of discovering new mesons and baryons would continue through the 1950s as the number of known ""elementary"" particl" https://en.wikipedia.org/wiki/Biological%20interaction,"In ecology, a biological interaction is the effect that a pair of organisms living together in a community have on each other. They can be either of the same species (intraspecific interactions), or of different species (interspecific interactions). These effects may be short-term, or long-term, both often strongly influence the adaptation and evolution of the species involved. Biological interactions range from mutualism, beneficial to both partners, to competition, harmful to both partners. Interactions can be direct when physical contact is established or indirect, through intermediaries such as shared resources, territories, ecological services, metabolic waste, toxins or growth inhibitors. This type of relationship can be shown by net effect based on individual effects on both organisms arising out of relationship. Several recent studies have suggested non-trophic species interactions such as habitat modification and mutualisms can be important determinants of food web structures. However, it remains unclear whether these findings generalize across ecosystems, and whether non-trophic interactions affect food webs randomly, or affect specific trophic levels or functional groups. History Although biological interactions, more or less individually, were studied earlier, Edward Haskell (1949) gave an integrative approach to the thematic, proposing a classification of ""co-actions"", later adopted by biologists as ""interactions"". Close and long-term interactions are described as symbiosis; symbioses that are mutually beneficial are called mutualistic. The term symbiosis was subject to a century-long debate about whether it should specifically denote mutualism, as in lichens or in parasites that benefit themselves. This debate created two different classifications for biotic interactions, one based on the time (long-term and short-term interactions), and other based on the magnitud of interaction force (competition/mutualism) or effect of individual fitness, accordi" https://en.wikipedia.org/wiki/Software%20development%20process,"In software engineering, a software development process is a process of planning and managing software development. It typically involves dividing software development work into smaller, parallel, or sequential steps or sub-processes to improve design and/or product management. It is also known as a software development life cycle (SDLC). The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application. Most modern development processes can be vaguely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming. A life-cycle ""model"" is sometimes considered a more general term for a category of methodologies and a software development ""process"" is a more specific term to refer to a specific process chosen by a specific organization. 
For example, there are many specific software development processes that fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle. History The software development methodology (also known as SDM) framework didn't emerge until the 1960s. According to Elliott (2004), the systems development life cycle (SDLC) can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the SDLC has been ""to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle––from the inception of the idea to delivery of the final system––to be carried out rigidly and sequentially"" within the context of the framework being applied. The main target of this methodology framework in the 1960s was ""to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy " https://en.wikipedia.org/wiki/List%20of%20synchrotron%20radiation%20facilities,"This is a table of synchrotrons and storage rings used as synchrotron radiation sources, and free electron lasers." https://en.wikipedia.org/wiki/Alzheimer%27s%20Disease%20Neuroimaging%20Initiative,"Alzheimer's Disease Neuroimaging Initiative (ADNI) is a multisite study that aims to improve clinical trials for the prevention and treatment of Alzheimer's disease (AD). This cooperative study combines expertise and funding from the private and public sector to study subjects with AD, as well as those who may develop AD and controls with no signs of cognitive impairment. Researchers at 63 sites in the US and Canada track the progression of AD in the human brain with neuroimaging, biochemical, and genetic biological markers. This knowledge helps to find better clinical trials for the prevention and treatment of AD. ADNI has made a global impact, firstly by developing a set of standardized protocols to allow the comparison of results from multiple centers, and secondly by its data-sharing policy which makes available all at the data without embargo to qualified researchers worldwide. To date, over 1000 scientific publications have used ADNI data. A number of other initiatives related to AD and other diseases have been designed and implemented using ADNI as a model. ADNI has been running since 2004 and is currently funded until 2021. Primary goals Detect the earliest signs of AD and to track the disease using biomarkers. validate, standardize, and optimize biomarkers for clinical AD trials. to make all data and samples available for sharing with clinical trial designers and scientists worldwide. History and funding The idea of a collaboration between public institutions and private pharmaceutical companies to fund a large biomarker project to study AD and to speed up progress toward effective treatments for the disease was conceived at the beginning of the millennium by Neil S. Buckholz at the National Institute on Aging (NIA) and Dr. William Potter, at Eli Lilly and Company. The Alzheimer's Disease Neuroimaging Initiative (ADNI) began in 2004 under the leadership of Dr. Michael W. 
Weiner, funded as a private – public partnership with $27 million contributed b" https://en.wikipedia.org/wiki/Classical%20probability%20density,"The classical probability density is the probability density function that represents the likelihood of finding a particle in the vicinity of a certain location subject to a potential energy in a classical mechanical system. These probability densities are helpful in gaining insight into the correspondence principle and making connections between the quantum system under study and the classical limit. Mathematical background Consider the example of a simple harmonic oscillator initially at rest with amplitude . Suppose that this system was placed inside a light-tight container such that one could only view it using a camera which can only take a snapshot of what's happening inside. Each snapshot has some probability of seeing the oscillator at any possible position along its trajectory. The classical probability density encapsulates which positions are more likely, which are less likely, the average position of the system, and so on. To derive this function, consider the fact that the positions where the oscillator is most likely to be found are those positions at which the oscillator spends most of its time. Indeed, the probability of being at a given -value is proportional to the time spent in the vicinity of that -value. If the oscillator spends an infinitesimal amount of time in the vicinity of a given -value, then the probability of being in that vicinity will be Since the force acting on the oscillator is conservative and the motion occurs over a finite domain, the motion will be cyclic with some period which will be denoted . Since the probability of the oscillator being at any possible position between the minimum possible -value and the maximum possible -value must sum to 1, the normalization is used, where is the normalization constant. Since the oscillating mass covers this range of positions in half its period (a full period goes from to then back to ) the integral over is equal to , which sets to be . Using the chain rule, can be put in te" https://en.wikipedia.org/wiki/Fast%20Ethernet,"In computer networking, Fast Ethernet physical layers carry traffic at the nominal rate of 100 Mbit/s. The prior Ethernet speed was 10 Mbit/s. Of the Fast Ethernet physical layers, 100BASE-TX is by far the most common. Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet. The acronym GE/FE is sometimes used for devices supporting both standards. Nomenclature The 100 in the media type designation refers to the transmission speed of 100 Mbit/s, while the BASE refers to baseband signaling. The letter following the dash (T or F) refers to the physical medium that carries the signal (twisted pair or fiber, respectively), while the last character (X, 4, etc.) refers to the line code method used. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants. General design Fast Ethernet is an extension of the 10-megabit Ethernet standard. It runs on twisted pair or optical fiber cable in a star wired bus topology, similar to the IEEE standard 802.3i called 10BASE-T, itself an evolution of 10BASE5 (802.3) and 10BASE2 (802.3a). Fast Ethernet devices are generally backward compatible with existing 10BASE-T systems, enabling plug-and-play upgrades from 10BASE-T. 
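The classical probability density entry above loses its inline equations in this extract. For the simple harmonic oscillator it discusses, the standard result that its derivation leads to is reproduced below in conventional notation (A is the amplitude, T the period, ω the angular frequency); these symbols are assumptions rather than the entry's own.

```latex
% Classical probability density of a simple harmonic oscillator of amplitude A,
% period T and angular frequency \omega (conventional symbols, assumed here).
% The time spent near x maps to probability over half a period:
P(x)\,dx = \frac{dt}{T/2} = \frac{2}{T}\,\frac{dx}{|v(x)|},
\qquad v(x) = \omega\sqrt{A^{2}-x^{2}},\qquad T = \frac{2\pi}{\omega},
% which, after normalisation, gives
P(x) = \frac{1}{\pi\sqrt{A^{2}-x^{2}}},\qquad |x|<A .
```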
Most switches and other networking devices with ports capable of Fast Ethernet can perform autonegotiation, sensing a piece of 10BASE-T equipment and setting the port to 10BASE-T half duplex if the 10BASE-T equipment cannot perform auto negotiation itself. The standard specifies the use of CSMA/CD for media access control. A full-duplex mode is also specified and in practice, all modern networks use Ethernet switches and operate in full-duplex mode, even as legacy devices that use half duplex still exist. A Fast Ethernet adapter can be logically divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, a" https://en.wikipedia.org/wiki/Classification%20of%20manifolds,"In mathematics, specifically geometry and topology, the classification of manifolds is a basic question, about which much is known, and many open questions remain. Main themes Overview Low-dimensional manifolds are classified by geometric structure; high-dimensional manifolds are classified algebraically, by surgery theory. ""Low dimensions"" means dimensions up to 4; ""high dimensions"" means 5 or more dimensions. The case of dimension 4 is somehow a boundary case, as it manifests ""low dimensional"" behaviour smoothly (but not topologically); see discussion of ""low"" versus ""high"" dimension. Different categories of manifolds yield different classifications; these are related by the notion of ""structure"", and more general categories have neater theories. Positive curvature is constrained, negative curvature is generic. The abstract classification of high-dimensional manifolds is ineffective: given two manifolds (presented as CW complexes, for instance), there is no algorithm to determine if they are isomorphic. Different categories and additional structure Formally, classifying manifolds is classifying objects up to isomorphism. There are many different notions of ""manifold"", and corresponding notions of ""map between manifolds"", each of which yields a different category and a different classification question. These categories are related by forgetful functors: for instance, a differentiable manifold is also a topological manifold, and a differentiable map is also continuous, so there is a functor . These functors are in general neither one-to-one nor onto; these failures are generally referred to in terms of ""structure"", as follows. A topological manifold that is in the image of is said to ""admit a differentiable structure"", and the fiber over a given topological manifold is ""the different differentiable structures on the given topological manifold"". Thus given two categories, the two natural questions are: Which manifolds of a given type admit an additiona" https://en.wikipedia.org/wiki/Fault%20tolerance,"Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of one or more faults within some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability of maintaining functionality when portions of a system break down is referred to as graceful degradation. 
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level, rather than failing completely, when some part of the system fails. The term is most commonly used to describe computer systems designed to continue more or less fully operational with, perhaps, a reduction in throughput or an increase in response time in the event of some partial failure. That is, the system as a whole is not stopped due to problems either in the hardware or the software. An example in another field is a motor vehicle designed so it will continue to be drivable if one of the tires is punctured, or a structure that is able to retain its integrity in the presence of damage due to causes such as fatigue, corrosion, manufacturing flaws, or impact. Within the scope of an individual system, fault tolerance can be achieved by anticipating exceptional conditions and building the system to cope with them, and, in general, aiming for self-stabilization so that the system converges towards an error-free state. However, if the consequences of a system failure are catastrophic, or the cost of making it sufficiently reliable is very high, a better solution may be to use some form of duplication. In any case, if the consequence of a system failure is so catastrophic, the system must be able to use reversion to fall back to a safe mode. This is similar to roll-back r" https://en.wikipedia.org/wiki/Heat%20generation%20in%20integrated%20circuits,"The heat dissipation in integrated circuits problem has gained an increasing interest in recent years due to the miniaturization of semiconductor devices. The temperature increase becomes relevant for cases of relatively small-cross-sections wires, because such temperature increase may affect the normal behavior of semiconductor devices. Joule heating Joule heating is a predominant heat mechanism for heat generation in integrated circuits and is an undesired effect. Propagation The governing equation of the physics of the problem to be analyzed is the heat diffusion equation. It relates the flux of heat in space, its variation in time and the generation of power. Where is the thermal conductivity, is the density of the medium, is the specific heat the thermal diffusivity and is the rate of heat generation per unit volume. Heat diffuses from the source following equation ([eq:diffusion]) and solution in a homogeneous medium of ([eq:diffusion]) has a Gaussian distribution. See also Thermal simulations for integrated circuits Thermal design power Thermal management in electronics" https://en.wikipedia.org/wiki/Developmental%20bioelectricity,"Developmental bioelectricity is the regulation of cell, tissue, and organ-level patterning and behavior by electrical signals during the development of embryonic animals and plants. The charge carrier in developmental bioelectricity is the ion (a charged atom) rather than the electron, and an electric current and field is generated whenever a net ion flux occurs. Cells and tissues of all types use flows of ions to communicate electrically. Endogenous electric currents and fields, ion fluxes, and differences in resting potential across tissues comprise a signalling system. It functions along with biochemical factors, transcriptional networks, and other physical forces to regulate cell behaviour and large-scale patterning in processes such as embryogenesis, regeneration, and cancer suppression. 
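The heat-generation entry above refers to the heat diffusion equation, but its symbols were stripped in this extract. The standard form it describes is reproduced below with conventional symbols, which are assumptions here rather than the entry's own notation.

```latex
% Heat diffusion equation with volumetric generation (conventional symbols,
% assumed here): k thermal conductivity, \rho density, c_p specific heat,
% \alpha = k/(\rho c_p) thermal diffusivity, \dot{q} heat generated per unit volume.
\rho\,c_p\,\frac{\partial T}{\partial t} = \nabla\cdot\left(k\,\nabla T\right) + \dot{q},
\qquad\text{or, for constant }k,\qquad
\frac{\partial T}{\partial t} = \alpha\,\nabla^{2}T + \frac{\dot{q}}{\rho\,c_p}.
```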
Overview Developmental bioelectricity is a sub-discipline of biology, related to, but distinct from, neurophysiology and bioelectromagnetics. Developmental bioelectricity refers to the endogenous ion fluxes, transmembrane and transepithelial voltage gradients, and electric currents and fields produced and sustained in living cells and tissues. This electrical activity is often used during embryogenesis, regeneration, and cancer suppression—it is one layer of the complex field of signals that impinge upon all cells in vivo and regulate their interactions during pattern formation and maintenance. This is distinct from neural bioelectricity (classically termed electrophysiology), which refers to the rapid and transient spiking in well-recognized excitable cells like neurons and myocytes (muscle cells); and from bioelectromagnetics, which refers to the effects of applied electromagnetic radiation, and endogenous electromagnetics such as biophoton emission and magnetite. The inside/outside discontinuity at the cell surface enabled by a lipid bilayer membrane (capacitor) is at the core of bioelectricity. The plasma membrane was an indispensable structure for the origin and evolut" https://en.wikipedia.org/wiki/List%20of%20physics%20mnemonics,"This is a categorized list of physics mnemonics. Mechanics Work: formula ""Lots of Work makes me Mad!"": Work = Mad: M=Mass a=acceleration d=distance Thermodynamics Ideal gas law ""Pure Virgins Never Really Tire"": PV=nRT Gibbs's free energy formula ""Good Honey Tastes Sweet"": (delta)G = H - T(delta)S. Electrodynamics Ohm's Law ""Virgins Are Rare"": Volts = Amps x Resistance Relation between Resistance and Resistivity REPLAY Resistance = ρ (Length/Area) Inductive and Capacitive circuits Once upon a time, the symbol E (for electromotive force) was used to designate voltages. Then, every student learned the phrase ELI the ICE man as a reminder that: For an inductive (L) circuit, the EMF (E) is ahead of the current (I) While for a capactive circuit (C), the current (I) is ahead of the EMF (E). And then they all lived happily ever after. Open and Short circuits ""There are zero COVS grazing in the field!"" This is a mnemonic to remember the useful fact that: The Current through an Open circuit is always zero The Voltage across a Short circuit is always zero Order of rainbow colors ROYGBIV (in reverse VIBGYOR) is commonly used to remember the order of colors in the visible light spectrum, as seen in a rainbow. Richard of York gave battle in vain"" (red, orange, yellow, green, blue, indigo, violet). Additionally, the fictitious name Roy G. Biv can be used as well. (red, orange, yellow, green, blue, indigo, violet). Speed of light The phrase ""We guarantee certainty, clearly referring to this light mnemonic."" represents the speed of light in meters per second through the number of letters in each word: 299,792,458. Electromagnetic spectrum In the order of increasing frequency or decreasing wavelength of electromagnetic waves; Road Men Invented Very Unique Xtra Gums Ronald McDonald Invented Very Unusual & eXcellent Gherkins. 
Remember My Instructions Visible Under X-Ray Glasses Raging (or Red) Martians Invaded Venus Using X-ray Guns.Rahul's Mother " https://en.wikipedia.org/wiki/Equivalent%20rectangular%20bandwidth,"The equivalent rectangular bandwidth or ERB is a measure used in psychoacoustics, which gives an approximation to the bandwidths of the filters in human hearing, using the unrealistic but convenient simplification of modeling the filters as rectangular band-pass filters, or band-stop filters, like in tailor-made notched music training (TMNMT). Approximations For moderate sound levels and young listeners, the bandwidth of human auditory filters can be approximated by the polynomial equation: where f is the center frequency of the filter in kHz and ERB(f) is the bandwidth of the filter in Hz. The approximation is based on the results of a number of published simultaneous masking experiments and is valid from 0.1 to 6.5 kHz. The above approximation was given in 1983 by Moore and Glasberg, who in 1990 published another (linear) approximation: where f is in kHz and ERB(f) is in Hz. The approximation is applicable at moderate sound levels and for values of f between 0.1 and 10 kHz. ERB-rate scale The ERB-rate scale, or ERB-number scale, can be defined as a function ERBS(f) which returns the number of equivalent rectangular bandwidths below the given frequency f. The units of the ERB-number scale are known ERBs, or as Cams, following a suggestion by Hartmann. The scale can be constructed by solving the following differential system of equations: The solution for ERBS(f) is the integral of the reciprocal of ERB(f) with the constant of integration set in such a way that ERBS(0) = 0. Using the second order polynomial approximation () for ERB(f) yields: where f is in kHz. The VOICEBOX speech processing toolbox for MATLAB implements the conversion and its inverse as: where f is in Hz. Using the linear approximation () for ERB(f) yields: where f is in Hz. See also Critical bands Bark scale" https://en.wikipedia.org/wiki/List%20of%20incomplete%20proofs,"This page lists notable examples of incomplete published mathematical proofs. Most of these were accepted as correct for several years but later discovered to contain gaps. There are both examples where a complete proof was later found and where the alleged result turned out to be false. Results later proved rigorously Euclid's Elements. Euclid's proofs are essentially correct, but strictly speaking sometimes contain gaps because he tacitly uses some unstated assumptions, such as the existence of intersection points. In 1899 David Hilbert gave a complete set of (second order) axioms for Euclidean geometry, called Hilbert's axioms, and between 1926 and 1959 Tarski gave some complete sets of first order axioms, called Tarski's axioms. Isoperimetric inequality. For three dimensions it states that the shape enclosing the maximum volume for its surface area is the sphere. It was formulated by Archimedes but not proved rigorously until the 19th century, by Hermann Schwarz. Infinitesimals. In the 18th century there was widespread use of infinitesimals in calculus, though these were not really well defined. Calculus was put on firm foundations in the 19th century, and Robinson put infinitesimals in a rigorous basis with the introduction of nonstandard analysis in the 20th century. Fundamental theorem of algebra (see History). 
Many incomplete or incorrect attempts were made at proving this theorem in the 18th century, including by d'Alembert (1746), Euler (1749), de Foncenex (1759), Lagrange (1772), Laplace (1795), Wood (1798), and Gauss (1799). The first rigorous proof was published by Argand in 1806. Dirichlet's theorem on arithmetic progressions. In 1808 Legendre published an attempt at a proof of Dirichlet's theorem, but as Dupré pointed out in 1859 one of the lemmas used by Legendre is false. Dirichlet gave a complete proof in 1837. The proofs of the Kronecker–Weber theorem by Kronecker (1853) and Weber (1886) both had gaps. The first complete proof was given " https://en.wikipedia.org/wiki/Exploitative%20interactions,"Exploitative interactions, also known as enemy–victim interactions, is a part of consumer–resource interactions where one organism (the enemy) is the consumer of another organism (the victim), typically in a harmful manner. Some examples of this include predator–prey interactions, host–pathogen interactions, and brood parasitism. In exploitative interactions, the enemy and the victim may often coevolve with each other. How exactly they coevolve depends on many factors, such as population density. One evolutionary consequence of exploitative interactions is antagonistic coevolution. This can occur because of resistance, where the victim attempts to decrease the number of successful attacks by the enemy, which encourages the enemy to evolve in response, thus resulting in a coevolutionary arms race. On the other hand, toleration, where the victim attempts to decrease the effect on fitness that successful enemy attacks have, may also evolve. Exploitative interactions can have significant biological effects. For example, exploitative interactions between a predator and prey can result in the extinction of the victim (the prey, in this case), as the predator, by definition, kills the prey, and thus reduces its population. Another effect of these interactions is in the coevolutionary ""hot"" and ""cold spots"" put forth by geographic mosaic theory. In this case, coevolution caused by resistance would create ""hot spots"" of coevolutionary activity in an otherwise uniform environment, whereas ""cold spots"" would be created by the evolution of tolerance, which generally does not create a coevolutionary arms race. See also Biological interactions Coevolution Consumer–resource interactions Host-pathogen interaction Parasitism Predation" https://en.wikipedia.org/wiki/Ultra-large-scale%20systems,"Ultra-large-scale system (ULSS) is a term used in fields including Computer Science, Software Engineering and Systems Engineering to refer to software intensive systems with unprecedented amounts of hardware, lines of source code, numbers of users, and volumes of data. The scale of these systems gives rise to many problems: they will be developed and used by many stakeholders across multiple organizations, often with conflicting purposes and needs; they will be constructed from heterogeneous parts with complex dependencies and emergent properties; they will be continuously evolving; and software, hardware and human failures will be the norm, not the exception. The term 'ultra-large-scale system' was introduced by Northrop and others to describe challenges facing the United States Department of Defense. The term has subsequently been used to discuss challenges in many areas, including the computerization of financial markets. 
The term ""ultra-large-scale system"" (ULSS) is sometimes used interchangeably with the term ""large-scale complex IT system"" (LSCITS). These two terms were introduced at similar times to describe similar problems, the former being coined in the United States and the latter in the United Kingdom. Background The term ultra-large-scale system was introduced in a 2006 report from the Software Engineering Institute at Carnegie Mellon University authored by Linda Northrop and colleagues. The report explained that software intensive systems are reaching unprecedented scales (by measures including lines of code; numbers of users and stakeholders; purposes the system is put to; amounts of data stored, accessed, manipulated, and refined; numbers of connections and interdependencies among components; and numbers of hardware elements). When systems become ultra-large-scale, traditional approaches to engineering and management will no longer be adequate. The report argues that the problem is no longer of engineering systems or system of systems, but of engine" https://en.wikipedia.org/wiki/List%20of%20unsolved%20problems%20in%20physics,"The following is a list of notable unsolved problems grouped into broad areas of physics. Some of the major unsolved problems in physics are theoretical, meaning that existing theories seem incapable of explaining a certain observed phenomenon or experimental result. The others are experimental, meaning that there is a difficulty in creating an experiment to test a proposed theory or investigate a phenomenon in greater detail. There are still some questions beyond the Standard Model of physics, such as the strong CP problem, neutrino mass, matter–antimatter asymmetry, and the nature of dark matter and dark energy. Another problem lies within the mathematical framework of the Standard Model itself—the Standard Model is inconsistent with that of general relativity, to the point that one or both theories break down under certain conditions (for example within known spacetime singularities like the Big Bang and the centres of black holes beyond the event horizon). General physics Theory of everything: Is there a singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all physical aspects of the universe? Dimensionless physical constants: At the present time, the values of various dimensionless physical constants cannot be calculated; they can be determined only by physical measurement. What is the minimum number of dimensionless physical constants from which all other dimensionless physical constants can be derived? Are dimensional physical constants necessary at all? Quantum gravity Quantum gravity: Can quantum mechanics and general relativity be realized as a fully consistent theory (perhaps as a quantum field theory)? Is spacetime fundamentally continuous or discrete? Would a consistent theory involve a force mediated by a hypothetical graviton, or be a product of a discrete structure of spacetime itself (as in loop quantum gravity)? Are there deviations from the predictions of general relativity at very s" https://en.wikipedia.org/wiki/RSCS,"Remote Spooling Communications Subsystem or RSCS is a subsystem (""virtual machine"" in VM terminology) of IBM's VM/370 operating system which accepts files transmitted to it from local or remote system and users and transmits them to destination local or remote users and systems. RSCS also transmits commands and messages among users and systems. 
RSCS is the software that powered the world’s largest network (or network of networks) prior to the Internet and directly influenced both internet development and user acceptance of networking between independently managed organizations. RSCS was developed by Edson Hendricks and T.C. Hartmann. Both as an IBM product and as an IBM internal network, it later became known as VNET. The network interfaces continued to be called the RSCS compatible protocols and were used to interconnect with IBM systems other than VM systems (typically MVS) and non-IBM computers. The history of this program, and its influence on IBM and the IBM user community, is described in contemporaneous accounts and interviews by Melinda Varian. Technical goals and innovations are described by Creasy and by Hendricks and Hartmann in seminal papers. Among academic users, the same software was employed by BITNET and related networks worldwide. Background RSCS arose because people throughout IBM recognized a need to exchange files. Hendricks’s solution was CPREMOTE, which he completed by mid-1969. CPREMOTE was the first example of a “service virtual machine” and was motivated partly by the desire to prove the usefulness of that concept. In 1971, Norman L. Rasmussen, Manager of IBM’s Cambridge Scientific Center (CSC), asked Hendricks to find a way for the CSC machine to communicate with machines at IBM’s other Scientific Centers. CPREMOTE had taught Hendricks so much about how a communications facility would be used and what function was needed in such a facility, that he decided to discard it and begin again with a new design. After additional iterat" https://en.wikipedia.org/wiki/Multi-core%20processor,"A multi-core processor is a microprocessor on a single integrated circuit with two or more separate processing units, called cores (for example, dual-core or quad-core), each of which reads and executes program instructions. The instructions are ordinary CPU instructions (such as add, move data, and branch) but the single processor can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP) or onto multiple dies in a single chip package. The microprocessors currently used in almost all personal computers are multi-core. A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical (e.g. big.LITTLE have heterogeneous cores that share the same instruction set, while AMD Accelerated Processing Units have cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading. Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). 
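As a small illustration of the parallelism described in the multi-core entry above, the sketch below spreads a CPU-bound workload across the available cores using Python's standard multiprocessing pool; the workload function is a made-up example, not something from the entry.

```python
# Spreads a CPU-bound workload over the cores of a multi-core processor using
# worker processes. The workload function is a made-up example.
import multiprocessing as mp

def cpu_bound(n: int) -> int:
    # Deliberately loop-heavy so each task keeps one core busy for a while.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    tasks = [200_000] * 8
    # The OS schedules each worker process onto an available core, so the
    # tasks can run in parallel rather than one after another.
    with mp.Pool(processes=mp.cpu_count()) as pool:
        results = pool.map(cpu_bound, tasks)
    print(mp.cpu_count(), "cores detected;", len(results), "tasks completed")
```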
Core count goes up to even dozens, and for specialized chips over 10,000, and in supercomputers (i.e. clusters of chips) the count can go over 10 million (and in one case up to 20 million processing elements total in addition to h" https://en.wikipedia.org/wiki/System%20in%20a%20package,"A system in a package (SiP) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package, placed side by side, and/or embedded in the substrate. The SiP performs all or most of the functions of an electronic system, and is typically used when designing components for mobile phones, digital music players, etc. Dies containing integrated circuits may be stacked vertically on a substrate. They are internally connected by fine wires that are bonded to the package. Alternatively, with a flip chip technology, solder bumps are used to join stacked chips together. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die. Technology SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging, unlike less dense multi-chip modules, which place dies horizontally on a carrier. SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die. Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area. SiPs can contain several chips—such as a specialized processor, DRAM, flash memory—combined with passive components—resistors and capacitors—all mounted on the same substrate. This means that a complete functional unit can be built in a multi-chip package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any d" https://en.wikipedia.org/wiki/Grey%20box%20model,"In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models as opposed to black box where no model form is assumed or white box models that are purely theoretical. Some models assume a special form such as a linear regression or neural network. These have special analysis methods. In particular linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use. Model form The general case is a non-linear model with a partial theoretical structure and some unknown parts derived from data. Models with unlike theoretical structures need to be evaluated individually, possibly using simulated annealing or genetic algorithms. Within a particular model structure, parameters or variable parameter relations may need to be found. 
For a particular structure it is arbitrarily assumed that the data consists of sets of feed vectors f, product vectors p, and operating condition vectors c. Typically c will contain values extracted from f, as well as other values. In many cases a model can be converted to a function of the form: m(f,p,q) where the vector function m gives the errors between the data p, and the model predictions. The vector q gives some variable parameters that are the model's unknown parts. The parameters q vary with the operating conditions c in a manner to be determined. This relation can be specified as q = Ac where A is a matrix of unknown coefficients, and c as in linear regression includes a constant term and possibly transformed values of the original operating conditions to obtain non-linear relations between the original operating condition" https://en.wikipedia.org/wiki/Penrose%20graphical%20notation,"In mathematics and physics, Penrose graphical notation or tensor diagram notation is a (usually handwritten) visual depiction of multilinear functions or tensors proposed by Roger Penrose in 1971. A diagram in the notation consists of several shapes linked together by lines. The notation widely appears in modern quantum theory, particularly in matrix product states and quantum circuits. In particular, Categorical quantum mechanics which includes ZX-calculus is a fully comprehensive reformulation of quantum theory in terms of Penrose diagrams, and is now widely used in quantum industry. The notation has been studied extensively by Predrag Cvitanović, who used it, along with Feynman's diagrams and other related notations in developing ""birdtracks"", a group-theoretical diagram to classify the classical Lie groups. Penrose's notation has also been generalized using representation theory to spin networks in physics, and with the presence of matrix groups to trace diagrams in linear algebra. Interpretations Multilinear algebra In the language of multilinear algebra, each shape represents a multilinear function. The lines attached to shapes represent the inputs or outputs of a function, and attaching shapes together in some way is essentially the composition of functions. Tensors In the language of tensor algebra, a particular tensor is associated with a particular shape with many lines projecting upwards and downwards, corresponding to abstract upper and lower indices of tensors respectively. Connecting lines between two shapes corresponds to contraction of indices. One advantage of this notation is that one does not have to invent new letters for new indices. This notation is also explicitly basis-independent. Matrices Each shape represents a matrix, and tensor multiplication is done horizontally, and matrix multiplication is done vertically. Representation of special tensors Metric tensor The metric tensor is represented by a U-shaped loop or an upside-" https://en.wikipedia.org/wiki/5%20nm%20process,"In semiconductor manufacturing, the International Roadmap for Devices and Systems defines the 5 nm process as the MOSFET technology node following the 7 nm node. In 2020, Samsung and TSMC entered volume production of 5 nm chips, manufactured for companies including Apple, Marvell, Huawei and Qualcomm. The term ""5 nm"" has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors being 5 nanometers in size. 
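The grey box model entry above describes fitting the variable parameters q to the operating conditions c through a linear relation q = Ac, as in linear regression. The sketch below fits A by ordinary least squares on synthetic data; the data, dimensions and variable names are assumptions for illustration only.

```python
# Minimal sketch of the grey-box parameter relation q = A c: the unknown
# coefficient matrix A is fitted by ordinary least squares from observed
# operating-condition vectors c and parameter vectors q. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)

# Operating conditions: a constant term plus two measured conditions per run.
runs = 50
c = np.column_stack([np.ones(runs), rng.uniform(0, 1, runs), rng.uniform(10, 20, runs)])

# "True" relation, used only to generate synthetic parameter observations.
A_true = np.array([[0.5, 2.0, -0.1],
                   [1.0, 0.0,  0.3]])
q_obs = c @ A_true.T + 0.01 * rng.standard_normal((runs, 2))

# Fit A by least squares: solve c @ A.T ~ q_obs for A.T.
A_fit, *_ = np.linalg.lstsq(c, q_obs, rcond=None)
print("estimated A:\n", A_fit.T.round(3))

# The fitted relation can now supply q for new operating conditions.
c_new = np.array([1.0, 0.4, 15.0])
print("predicted q:", (A_fit.T @ c_new).round(3))
```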
According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a ""5 nm node is expected to have a contacted gate pitch of 51 nanometers and a tightest metal pitch of 30 nanometers"". However, in real world commercial practice, ""5 nm"" is used primarily as a marketing term by individual microchip manufacturers to refer to a new, improved generation of silicon semiconductor chips in terms of increased transistor density (i.e. a higher degree of miniaturization), increased speed and reduced power consumption compared to the previous 7 nm process. History Background Quantum tunnelling effects through the gate oxide layer on 7 nm and 5 nm transistors became increasingly difficult to manage using existing semiconductor processes. Single-transistor devices below 7 nm were first demonstrated by researchers in the early 2000s. In 2002, an IBM research team including Bruce Doris, Omer Dokumaci, Meikei Ieong and Anda Mocuta fabricated a 6-nanometre silicon-on-insulator (SOI) MOSFET. In 2003, a Japanese research team at NEC, led by Hitoshi Wakabayashi and Shigeharu Yamagami, fabricated the first 5 nm MOSFET. In 2015, IMEC and Cadence had fabricated 5 nm test chips. The fabricated test chips are not fully functional devices but rather are to evaluate patterning of interconnect layers. In 2015, Intel described a lateral nanowire (or gate-all-around) FET concept for the 5 nm node. In 2017, IBM revealed that it had " https://en.wikipedia.org/wiki/Somos%27%20quadratic%20recurrence%20constant,"In mathematics, Somos' quadratic recurrence constant, named after Michael Somos, is the number This can be easily re-written into the far more quickly converging product representation which can then be compactly represented in infinite product form by: The constant σ arises when studying the asymptotic behaviour of the sequence with first few terms 1, 1, 2, 12, 576, 1658880, ... . This sequence can be shown to have asymptotic behaviour as follows: Guillera and Sondow give a representation in terms of the derivative of the Lerch transcendent: where ln is the natural logarithm and (z, s, q) is the Lerch transcendent. Finally, . Notes" https://en.wikipedia.org/wiki/In-circuit%20emulation,"In-circuit emulation (ICE) is the use of a hardware device or in-circuit emulator used to debug the software of an embedded system. It operates by using a processor with the additional ability to support debugging operations, as well as to carry out the main function of the system. Particularly for older systems, with limited processors, this usually involved replacing the processor temporarily with a hardware emulator: a more powerful although more expensive version. It was historically in the form of bond-out processor which has many internal signals brought out for the purpose of debugging. These signals provide information about the state of the processor. More recently the term also covers JTAG-based hardware debuggers which provide equivalent access using on-chip debugging hardware with standard production chips. Using standard chips instead of custom bond-out versions makes the technology ubiquitous and low cost, and eliminates most differences between the development and runtime environments. In this common case, the in-circuit emulator term is a misnomer, sometimes confusingly so, because emulation is no longer involved. 
Embedded systems present special problems for programmers because they usually lack keyboards, monitors, disk drives and other user interfaces that are present on computers. These shortcomings make in-circuit software debugging tools essential for many common development tasks. Function An in-circuit emulator (ICE) provides a window into the embedded system. The programmer uses the emulator to load programs into the embedded system, run them, step through them slowly, and view and change data used by the system's software. An emulator gets its name because it emulates (imitates) the central processing unit (CPU) of the embedded system's computer. Traditionally it had a plug that inserts into the socket where the CPU integrated circuit chip would normally be placed. Most modern systems use the target system's CPU directly, with special " https://en.wikipedia.org/wiki/Order%20tracking%20%28signal%20processing%29,"In rotordynamics, order tracking is a family of signal processing tools aimed at transforming a measured signal from time domain to angular (or order) domain. These techniques are applied to asynchronously sampled signals (i.e. with a constant sample rate in Hertz) to obtain the same signal sampled at constant angular increments of a reference shaft. In some cases the outcome of the Order Tracking is directly the Fourier transform of such angular domain signal, whose frequency counterpart is defined as ""order"". Each order represents a fraction of the angular velocity of the reference shaft. Order tracking is based on a velocity measurement, generally obtained by means of a tachometer or encoder, needed to estimate the instantaneous velocity and/or the angular position of the shaft. Three main families of computed order tracking techniques have been developed in the past: Computed Order Tracking (COT), Vold-Kalman Filter (VKF) and Order Tracking Transforms. Order tracking refers to a signal processing technique used to extract the periodic content of a signal and track its frequency variations over time. This technique is often used in vibration analysis and monitoring of rotating machinery, such as engines, turbines, and pumps. In order to track the order of a signal, the signal is first transformed into the frequency domain using techniques such as the Fourier transform. The resulting frequency spectrum shows the frequency content of the signal. From the frequency spectrum, it is possible to identify the dominant frequency components, which correspond to the various orders of the rotating machinery. Once the orders are identified, a tracking algorithm is used to track the frequency variations of each order over time. This is done by comparing the frequency content of the signal at different time instants and identifying the shifts in the frequency components. Computed order tracking Computed order tracking is a resampling technique based on interpolation. T" https://en.wikipedia.org/wiki/Network%20allocation%20vector,"The network allocation vector (NAV) is a virtual carrier-sensing mechanism used with wireless network protocols such as IEEE 802.11 (Wi-Fi) and IEEE 802.16 (WiMax). The virtual carrier-sensing is a logical abstraction which limits the need for physical carrier-sensing at the air interface in order to save power. The MAC layer frame headers contain a duration field that specifies the transmission time required for the frame, in which time the medium will be busy. 
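The order tracking entry above describes computed order tracking as resampling a time-sampled signal at constant angular increments of a reference shaft by interpolation. The sketch below does this for a synthetic run-up signal; the speed profile, interpolation choice and windowing are simplifying assumptions, not the entry's prescription.

```python
# Minimal sketch of computed order tracking: resample a time-sampled vibration
# signal at constant shaft-angle increments, so that an FFT of the angle-domain
# signal yields orders (cycles per revolution) instead of fixed frequencies.
import numpy as np

fs = 5000.0                      # sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)     # 10 s record

# Synthetic shaft speed ramping 10 -> 50 Hz, and its angle in revolutions.
shaft_hz = 10 + 4 * t
angle = np.cumsum(shaft_hz) / fs          # integrated revolutions

# Vibration dominated by the 2nd order of shaft rotation, plus noise.
signal = np.sin(2 * np.pi * 2 * angle) + 0.1 * np.random.default_rng(0).standard_normal(t.size)

# Resample onto a uniform angle grid (constant increments of shaft angle).
samples_per_rev = 64
uniform_angle = np.arange(0, angle[-1], 1 / samples_per_rev)
angle_domain = np.interp(uniform_angle, angle, signal)

# In the angle domain the FFT axis is "order" = cycles per revolution.
spectrum = np.abs(np.fft.rfft(angle_domain * np.hanning(angle_domain.size)))
orders = np.fft.rfftfreq(angle_domain.size, d=1 / samples_per_rev)
print("dominant order ~", orders[spectrum.argmax()])
```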
The stations listening on the wireless medium read the Duration field and set their NAV, which is an indicator for a station on how long it must defer from accessing the medium. The NAV may be thought of as a counter, which counts down to zero at a uniform rate. When the counter is zero, the virtual carrier-sensing indication is that the medium is idle; when nonzero, the indication is busy. The medium shall be determined to be busy when the station (STA) is transmitting. In IEEE 802.11, the NAV represents the number of microseconds the sending STA intends to hold the medium busy (maximum of 32,767 microseconds). When the sender sends a Request to Send the receiver waits one SIFS before sending Clear to Send. Then the sender will wait again one SIFS before sending all the data. Again the receiver will wait a SIFS before sending ACK. So NAV is the duration from the first SIFS to the ending of ACK. During this time the medium is considered busy. Wireless stations are often battery-powered, so to conserve power the stations may enter a power-saving mode. A station decrements its NAV counter until it becomes zero, at which time it is awakened to sense the medium again. The NAV virtual carrier sensing mechanism is a prominent part of the CSMA/CA MAC protocol used with IEEE 802.11 WLANs. NAV is used in DCF, PCF and HCF. Media access control Computer networking" https://en.wikipedia.org/wiki/Rimose,"Rimose is an adjective used to describe a surface that is cracked or fissured. The term is often used in describing crustose lichens. A rimose surface of a lichen is sometimes contrasted to the surface being areolate. Areolate is an extreme form of being rimose, where the cracks or fissures are so deep that they create island-like pieces called areoles, which look the ""islands"" of mud on the surface of a dry lake bed. Rimose and areolate are contrasted with being verrucose, or ""warty"". Verrucose surfaces have warty bumps which are distinct, but not separated by cracks. In mycology the term describes mushrooms whose caps crack in a radial pattern, as commonly found in the genera Inocybe and Inosperma." https://en.wikipedia.org/wiki/Cutler%27s%20bar%20notation,"In mathematics, Cutler's bar notation is a notation system for large numbers, introduced by Mark Cutler in 2004. The idea is based on iterated exponentiation in much the same way that exponentiation is iterated multiplication. Introduction A regular exponential can be expressed as such: However, these expressions become arbitrarily large when dealing with systems such as Knuth's up-arrow notation. Take the following: Cutler's bar notation shifts these exponentials counterclockwise, forming . A bar is placed above the variable to denote this change. As such: This system becomes effective with multiple exponents, when regular denotation becomes too cumbersome. At any time, this can be further shortened by rotating the exponential counterclockwise once more. The same pattern could be iterated a fourth time, becoming . For this reason, it is sometimes referred to as Cutler's circular notation. Advantages and drawbacks The Cutler bar notation can be used to easily express other notation systems in exponent form. It also allows for a flexible summarization of multiple copies of the same exponents, where any number of stacked exponents can be shifted counterclockwise and shortened to a single variable. The bar notation also allows for fairly rapid composure of very large numbers. 
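Returning to the network allocation vector described above: a minimal Python sketch of virtual carrier sensing is given below. A station copies the Duration field of an overheard frame into its NAV and treats the medium as busy until the counter reaches zero; the frame handling and timing here are simplified assumptions, not the 802.11 specification.

```python
# Minimal sketch of NAV-based virtual carrier sensing (illustrative only).

class Station:
    def __init__(self, name):
        self.name = name
        self.nav = 0  # remaining deferral time in microseconds (802.11 caps it at 32767)

    def overhear(self, duration_us):
        """Update the NAV from the Duration field of an overheard frame.
        802.11 keeps the larger of the current NAV and the new duration."""
        self.nav = max(self.nav, min(duration_us, 32767))

    def tick(self, elapsed_us=1):
        """Count the NAV down as time passes."""
        self.nav = max(0, self.nav - elapsed_us)

    def medium_idle(self):
        """Virtual carrier sense: idle only when the NAV has reached zero."""
        return self.nav == 0


# Example: a third station overhears an RTS whose Duration field covers
# SIFS + CTS + SIFS + DATA + SIFS + ACK, and defers for that long.
sta = Station("observer")
sta.overhear(duration_us=300)   # value taken from the overheard Duration field
print(sta.medium_idle())        # False -> must defer
sta.tick(300)
print(sta.medium_idle())        # True  -> may contend for the medium again
```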
For instance, the number would contain more than a googolplex digits, while remaining fairly simple to write with and remember. However, the system reaches a problem when dealing with different exponents in a single expression. For instance, the expression could not be summarized in bar notation. Additionally, the exponent can only be shifted thrice before it returns to its original position, making a five degree shift indistinguishable from a one degree shift. Some have suggested using a double and triple bar in subsequent rotations, though this presents problems when dealing with ten- and twenty-degree shifts. Other equivalent notations for the same operations already exis" https://en.wikipedia.org/wiki/List%20of%20Foucault%20pendulums,"This is a list of Foucault pendulums in the world: Europe Austria Technisches Museum Wien, Vienna St. Ruprecht an der Raab, Styria, erected in 2001 in a slim stainless steel pyramid, partially with glass windows; it is worldwide the first to exist outside a closed building: on the street. - Length: 6.5 m, weight: 32 kg Belarus Belarus State Pedagogic University, Minsk Belgium Volkssterrenwacht Mira, Grimbergen Technopolis, Mechelen Festraetsstudio, Sint-Truiden UGent-volkssterrenwacht Armand Pien Ghent Bulgaria Public Astronomical Observatory and Planetarium ""Nicolaus Copernicus"", Varna - Length: 14.4 m Czech Republic Observatory and Planetarium Hradec Králové, Hradec Králové - Length: 10 m, weight: 8.5 kg Czech Technical University, Prague - Length: 21 m, weight: 34 kg Rotunda in Castle Flower Garden, Kroměříž - Length: 25 m, weight: 30 kg Denmark Steno Museet, Aarhus Odense Technical College, Odense Geocenter, Faculty of Science, University of Copenhagen - Length 25 m, weight: 145 kg Estonia Department of Physics, University of Tartu Finland Department of Physics, University of Turku, Turku Eurajoki - Length: 40 m, weight: 110 kg Finnish Science Centre Heureka, Vantaa The watertower of Kuusamo France Germany Jahrtausendturm, Magdeburg Gymnasium Lünen-Altlünen, Lünen Gymnasium Verl, Verl German Museum of Technology, Berlin University of Bremen University of Heidelberg Helmholtz-Gymnasium Heidelberg Hochschule für Angewandte Wissenschaften Hamburg, Hamburg School for Business and Technique, Mainz Deutsches Museum, Munich - Length: 30 m, weight: 30 kg University of Munich, Geophysics – Department of Earth and Environmental Sciences, 20 m, 12 kg, live webcam, description Münster, 48 kg, 29 m, with mirrors, Zwei Graue Doppelspiegel für ein Pendel by artist Gerhard Richter in a former church, opened 17 June 2018 University of Osnabrück, Osnabrück, Lower Saxony - Length: 19.5 m, weight: 70 kg Gymnasium of the city Lennestadt, N" https://en.wikipedia.org/wiki/Pure%20mathematics,"Pure mathematics is the study of mathematical concepts independently of any application outside mathematics. These concepts may originate in real-world concerns, and the results obtained may later turn out to be useful for practical applications, but pure mathematicians are not primarily motivated by such applications. Instead, the appeal is attributed to the intellectual challenge and aesthetic beauty of working out the logical consequences of basic principles. 
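For the Foucault pendulum entries listed above, two standard formulas relate the quoted cable lengths to what is observed (added for illustration): the swing period depends only on the length, while the precession rate of the swing plane depends only on the latitude φ.

\[
T = 2\pi\sqrt{\frac{L}{g}} \approx 2\pi\sqrt{\frac{30\ \mathrm{m}}{9.81\ \mathrm{m/s^2}}} \approx 11\ \mathrm{s}
\quad\text{(for a 30 m pendulum such as the Deutsches Museum's)},
\qquad
\omega_{\mathrm{precession}} = 360^{\circ}\sin\varphi\ \text{per sidereal day}.
\]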
While pure mathematics has existed as an activity since at least ancient Greece, the concept was elaborated upon around the year 1900, after the introduction of theories with counter-intuitive properties (such as non-Euclidean geometries and Cantor's theory of infinite sets), and the discovery of apparent paradoxes (such as continuous functions that are nowhere differentiable, and Russell's paradox). This introduced the need to renew the concept of mathematical rigor and rewrite all mathematics accordingly, with a systematic use of axiomatic methods. This led many mathematicians to focus on mathematics for its own sake, that is, pure mathematics. Nevertheless, almost all mathematical theories remained motivated by problems coming from the real world or from less abstract mathematical theories. Also, many mathematical theories, which had seemed to be totally pure mathematics, were eventually used in applied areas, mainly physics and computer science. A famous early example is Isaac Newton's demonstration that his law of universal gravitation implied that planets move in orbits that are conic sections, geometrical curves that had been studied in antiquity by Apollonius. Another example is the problem of factoring large integers, which is the basis of the RSA cryptosystem, widely used to secure internet communications. It follows that, presently, the distinction between pure and applied mathematics is more a philosophical point of view or a mathematician's preference rather than a rigid subdivision of mathem" https://en.wikipedia.org/wiki/Classification%20of%20low-dimensional%20real%20Lie%20algebras,"This mathematics-related list provides Mubarakzyanov's classification of low-dimensional real Lie algebras, published in Russian in 1963. It complements the article on Lie algebra in the area of abstract algebra. An English version and review of this classification was published by Popovych et al. in 2003. Mubarakzyanov's Classification Let be -dimensional Lie algebra over the field of real numbers with generators , . For each algebra we adduce only non-zero commutators between basis elements. One-dimensional , abelian. Two-dimensional , abelian ; , solvable , Three-dimensional , abelian, Bianchi I; , decomposable solvable, Bianchi III; , Heisenberg–Weyl algebra, nilpotent, Bianchi II, , solvable, Bianchi IV, , solvable, Bianchi V, , solvable, Bianchi VI, Poincaré algebra when , , solvable, Bianchi VII, , simple, Bianchi VIII, , simple, Bianchi IX, Algebra can be considered as an extreme case of , when , forming contraction of Lie algebra. Over the field algebras , are isomorphic to and , respectively. Four-dimensional , abelian; , decomposable solvable, , decomposable solvable, , decomposable nilpotent, , decomposable solvable, , decomposable solvable, , decomposable solvable, , decomposable solvable, , unsolvable, , unsolvable, , indecomposable nilpotent, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, , indecomposable solvable, Algebra can be considered as an extreme case of , when , forming contraction of Lie algebra. Over the field algebras , , , , are isomorphic to , , , , , respectively. See also Table of Lie groups Simple Lie group#Full classification Notes" https://en.wikipedia.org/wiki/Almost%20surely,"In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) 
if it happens with probability 1 (or Lebesgue measure 1). In other words, the set of possible exceptions may be non-empty, but it has probability 0. The concept is analogous to the concept of ""almost everywhere"" in measure theory. In probability experiments on a finite sample space with a non-zero probability for each outcome, there is no difference between almost surely and surely (since having a probability of 1 entails including all the sample points); however, this distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability 0. Some examples of the use of this concept include the strong and uniform versions of the law of large numbers, the continuity of the paths of Brownian motion, and the infinite monkey theorem. The terms almost certainly (a.c.) and almost always (a.a.) are also used. Almost never describes the opposite of almost surely: an event that happens with probability zero happens almost never. Formal definition Let (Ω, F, P) be a probability space. An event E happens almost surely if P(E) = 1. Equivalently, E happens almost surely if the probability of E not occurring is zero: P(E^c) = 0. More generally, any event E (not necessarily in F) happens almost surely if its complement E^c is contained in a null set: a subset N in F such that P(N) = 0. The notion of almost sureness depends on the probability measure P. If it is necessary to emphasize this dependence, it is customary to say that the event E occurs P-almost surely, or almost surely (P). Illustrative examples In general, an event can happen ""almost surely"", even if the probability space in question includes outcomes which do not belong to the event—as the following examples illustrate. Throwing a dart Imagine throwing a dart at a unit square (a square with an area of 1) so that the dart always hits an exact point in the square, in such a way that each point in the square is equally lik" https://en.wikipedia.org/wiki/Embedded%20hypervisor,"An embedded hypervisor is a hypervisor that supports the requirements of embedded systems. The requirements for an embedded hypervisor are distinct from hypervisors targeting server and desktop applications. An embedded hypervisor is designed into the embedded device from the outset, rather than loaded subsequent to device deployment. While desktop and enterprise environments use hypervisors to consolidate hardware and isolate computing environments from one another, in an embedded system, the various components typically function collectively to provide the device's functionality. Mobile virtualization overlaps with embedded system virtualization, and shares some use cases. Typical attributes of embedded virtualization include efficiency, security, communication, isolation and real-time capabilities. Background Software virtualization has been a major topic in the enterprise space since the late 1960s, but only since the early 2000s has its use appeared in embedded systems. The use of virtualization and its implementation in the form of a hypervisor in embedded systems are very different from enterprise applications. An effective implementation of an embedded hypervisor must deal with a number of issues specific to such applications. These issues include the highly integrated nature of embedded systems, the requirement for isolated functional blocks within the system to communicate rapidly, the need for real-time/deterministic performance, the resource-constrained target environment and the wide range of security and reliability requirements. 
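To complete the intuition of the dart example above in standard terms (a routine fact, stated here for illustration rather than taken from the excerpt):

\[
P(\text{the dart lands exactly on a fixed point } x) = \lambda(\{x\}) = 0,
\qquad
P(\text{the dart misses } x) = 1,
\]

where \(\lambda\) is Lebesgue measure (area) on the unit square; the miss event is almost sure but not certain, since \(x\) remains a possible outcome.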
Hypervisor A hypervisor provides one or more software virtualization environments in which other software, including operating systems, can run with the appearance of full access to the underlying system hardware, where in fact such access is under the complete control of the hypervisor. These virtual environments are called virtual machines (VM)s, and a hypervisor will typically support multiple VMs managed simultane" https://en.wikipedia.org/wiki/Covariance%20group,"In physics, a covariance group is a group of coordinate transformations between frames of reference (see for example Ryckman (2005)). A frame of reference provides a set of coordinates for an observer moving with that frame to make measurements and define physical quantities. The covariance principle states the laws of physics should transform from one frame to another covariantly, that is, according to a representation of the covariance group. Special relativity considers observers in inertial frames, and the covariance group consists of rotations, velocity boosts, and the parity transformation. It is denoted as O(1,3) and is often referred to as Lorentz group. For example, the Maxwell equation with sources, transforms as a four-vector, that is, under the (1/2,1/2) representation of the O(1,3) group. The Dirac equation, transforms as a bispinor, that is, under the (1/2,0)⊕(0,1/2) representation of the O(1,3) group. The covariance principle, unlike the relativity principle, does not imply that the equations are invariant under transformations from the covariance group. In practice the equations for electromagnetic and strong interactions are invariant, while the weak interaction is not invariant under the parity transformation. For example, the Maxwell equation is invariant, while the corresponding equation for the weak field explicitly contains left currents and thus is not invariant under the parity transformation. In general relativity the covariance group consists of all arbitrary (invertible and differentiable) coordinate transformations. See also Manifestly covariant Relativistic wave equations Representation theory of the Lorentz group Notes" https://en.wikipedia.org/wiki/Numbers%20%28TV%20series%29,"Numbers (stylized as NUMB3RS) is an American crime drama television series that was broadcast on CBS from January 23, 2005, to March 12, 2010, for six seasons and 118 episodes. The series was created by Nicolas Falacci and Cheryl Heuton, and follows FBI Special Agent Don Eppes (Rob Morrow) and his brother Charlie Eppes (David Krumholtz), a college mathematics professor and prodigy, who helps Don solve crimes for the FBI. Brothers Ridley and Tony Scott produced Numbers; its production companies are the Scott brothers' Scott Free Productions and CBS Television Studios (originally Paramount Network Television, and later CBS Paramount Network Television). The show focuses equally on the relationships among Don Eppes, his brother Charlie Eppes, and their father, Alan Eppes (Judd Hirsch), and on the brothers' efforts to fight crime, usually in Los Angeles. A typical episode begins with a crime, which is subsequently investigated by a team of FBI agents led by Don and mathematically modeled by Charlie, with the help of Larry Fleinhardt (Peter MacNicol) and Amita Ramanujan (Navi Rawat). The insights provided by Charlie's mathematics were always in some way crucial to solving the crime. On May 18, 2010, CBS canceled the series after six seasons. 
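For the covariance-group discussion above, the phrase "transforms as a four-vector" has a concrete form; in standard index notation (chosen here for illustration, with units in which the coupling constants are 1), a Lorentz transformation Λ ∈ O(1,3) acts as

\[
x'^{\mu} = \Lambda^{\mu}{}_{\nu}\,x^{\nu},
\qquad
A'^{\mu}(x') = \Lambda^{\mu}{}_{\nu}\,A^{\nu}(x),
\qquad
\partial_{\mu}F^{\mu\nu} = J^{\nu},
\]

and covariance of the Maxwell equation with sources means that the last equation takes the same form in every inertial frame.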
Cast and characters The show revolved around three intersecting groups of characters: the FBI, scientists at the fictitious California Institute of Science (CalSci), and the Eppes family. Don Eppes (Rob Morrow), Charlie's older brother, is the lead FBI agent at the Los Angeles Violent Crimes Squad. Professor Charlie Eppes (David Krumholtz) is a mathematical genius, who in addition to teaching at CalSci, consults for the FBI and NSA. Alan Eppes (Judd Hirsch) is a former L.A. city planner, a widower, and the father of both Charlie and Don Eppes. Alan lives in a historic two-story California bungalow furnished with period Arts and Crafts furniture. David Sinclair (Alimi Ballard) is an FBI field agent and was later made Don's se" https://en.wikipedia.org/wiki/National%20Association%20of%20Biology%20Teachers,"The National Association of Biology Teachers (NABT) is an incorporated association of biology educators in the United States. It was initially founded in response to the poor understanding of biology and the decline in the teaching of the subject in the 1930s. It has grown to become a national representative organisation which promotes the teaching of biology, supports the learning of biology based on scientific principles and advocates for biology within American society. The National Conference and the journal, The American Biology Teacher, are two mechanisms used to achieve those goals. The NABT has also been an advocate for the teaching of evolution in the debate about creation and evolution in public education in the United States, playing a role in a number of court cases and hearings throughout the country. History The NABT was formed in 1938 in New York City. The journal of the organisation (The American Biology Teacher) was created in the same year. In 1944, Helen Trowbridge, the first female president, was elected. The Outstanding Teacher Awards were first presented in 1960 and the first independent National Convention was held in 1968. The seventies marked an era of activism in the teaching of evolution with legal action against a state code amendment in Tennessee which required equal amounts of time to teach evolution and creationism. In 1987 NABT helped develop the first National High School Biology test which established a list of nine core principles in the teaching of biology. In the year 2005, NABT was involved in the Kitzmiller v. Dover Area School District case which established the principle that Intelligent Design had no place in the Science Curriculum. 2017 was the Year of the March for Science, which the NABT endorsed, and in 2018, it held its annual four-day conference in San Diego, California. Purpose The purpose of the NABT is to ""empower educators to provide the best possible biology and life science education for all students"". The org" https://en.wikipedia.org/wiki/Email%20art,"Email art refers to artwork created for the medium of email. It includes computer graphics, animations, screensavers, digital scans of artwork in other media, and even ASCII art. When exhibited, Email art can be either displayed on a computer screen or similar type of display device, or the work can be printed out and displayed. Email art is an evolution of the networking Mail Art movement and began during the early 1990s. Chuck Welch, also known as Cracker Jack Kid, connected with early online artists and created a net-worker telenetlink. 
The historical evolution of the term ""Email art"" is documented in Chuck Welch's Eternal Network: A Mail Art Anthology published and edited by University of Calgary Press. By the end of the 1990s, many mailartists, aware of increasing postal rates and cheaper internet access, were beginning the gradual migration of collective art projects towards the web and new, inexpensive forms of digital communication. The Internet facilitated faster dissemination of Mail Art calls (invitations), Mail Art blogs and websites have become commonly used to display contributions and online documentation, and an increasing number of projects include an invitation to submit Email art digitally, either as the preferred channel or as an alternative to sending contributions by post. In 2006, Ramzi Turki received an e-mail containing a scanned work of Belgian artist Luc Fierens, so he sent this picture to about 7000 e-mail addresses artists seeking their interactions in order to acquire about 200 contributions and answers. See also Cyberculture Digital art Fax art Internet art Mail art" https://en.wikipedia.org/wiki/Feller%E2%80%93Tornier%20constant,"In mathematics, the Feller–Tornier constant CFT is the density of the set of all positive integers that have an even number of distinct prime factors raised to a power larger than one (ignoring any prime factors which appear only to the first power). It is named after William Feller (1906–1970) and Erhard Tornier (1894–1982) Omega function The Big Omega function is given by See also: Prime omega function. The Iverson bracket is With these notations, we have Prime zeta function The prime zeta function P is give by The Feller–Tornier constant satisfies See also Riemann zeta function L-function Euler product Twin prime" https://en.wikipedia.org/wiki/Content%20delivery%20network%20interconnection,"Content delivery network interconnection (CDNI) is a set of interfaces and mechanisms required for interconnecting two independent content delivery networks (CDNs) that enables one to deliver content on behalf of the other. Interconnected CDNs offer many benefits, such as footprint extension, reduced infrastructure costs, higher availability, etc., for content service providers (CSPs), CDNs, and end users. Among its many use cases, it allows small CDNs to interconnect and provides services for CSPs that allows them to compete against the CDNs of global CSPs. Rationale Thanks to the many benefits of CDNs, e.g. reduced delivery cost, improved quality of experience (QoE), and increased robustness of delivery, CDNs have become popular for large-scale content delivery of cacheable content. For this reason, CDN providers are scaling up their infrastructure and many Internet service providers (ISPs)/network service providers (NSPs) have deployed or are deploying their own CDNs for their own use or for lease, if a business and technical arrangement between them and a CDN provider were made. Those stand-alone CDNs with well-defined request routing, delivery, acquisition, accounting systems and protocols may sooner or later face either footprint, resource or capability limits. The CDNI targets at leveraging separate CDNs to provide end-to-end delivery of content from CSPs to end users, regardless of their location or attachment network. Example of operation Let's consider an interconnection of two CDNs as presented in the below figure. The ISP-A deploys an authoritative upstream CDN (uCDN), and he has established a technical and business arrangement with the CSP. 
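For the Feller–Tornier constant mentioned above, the value is commonly quoted via an Euler product over the primes; assuming the usual closed form C_FT = (1 + ∏_p(1 − 2/p²))/2 (an assumption recalled from the literature, not taken from the excerpt), a truncated product gives a quick numerical sketch in Python:

```python
from sympy import primerange

# Truncated Euler product for the Feller–Tornier constant, assuming the
# commonly quoted form C_FT = (1 + prod_p (1 - 2/p^2)) / 2 over primes p.
product = 1.0
for p in primerange(2, 10**6):      # primes below one million
    product *= 1.0 - 2.0 / (p * p)

c_ft = 0.5 * (1.0 + product)
print(round(c_ft, 5))               # roughly 0.66131
```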
Because the CDN-A is authorised to serve on behalf of the CSP, a user in the network of ISP-B requests content from CDN-A (1). The uCDN can either serve the request itself or redirect it to a downstream CDN (dCDN) if, for example, dCDN is closer to the user equipment (UE). If the request is redirected, the inter" https://en.wikipedia.org/wiki/Aseptic%20sampling,"Aseptic sampling is the process of aseptically withdrawing materials used in biopharmaceutical processes for analysis so as not contaminate or alter the sample or the source of the sample. Aseptic samples are drawn throughout the entire biopharmaceutical process (cell culture/fermentation, buffer & media prep, purification, final fill and finish). Analysis of the sample includes sterility, cell count/cell viability, metabolites, gases, osmolality and more. Aseptic sampling techniques Biopharmaceutical drug manufacturers widely use aseptic sampling devices to enhance aseptic technique. The latest innovations of sampling devices harmonize with emerging trends in disposability, enhance operating efficiencies and improve operator safety. Turn-key aseptic sampling devices Turn-key Aseptic Sampling Devices are ready-to-use sampling devices that require little or no equipment preparation by the users. Turn-key devices help managers reduce labor costs, estimated to represent 75% to 80% of the cost of running a biotech facility. Turn-key aseptic sampling devices include: A means to connect the device to the bioprocess equipment A mechanism to aseptically access the materials held in the biopress equipment A means to aseptically transfer the sample out of the bioprocess equipment A vessel or container to aseptically collect the sample A mechanism to aseptically disconnect the collection vessel To protect the integrity of the sample and to ensure it is truly representative of the time the sample is taken, the sampling pathway should be fully contained and independent of other sampling pathways. Cannula(needle) based aseptic sampling devices In a cannula-based aseptic sampling system, a needle penetrates an elastomeric septum. The septum is in direct contact with the liquid so that the liquid flows out of the equipment through the needle. Iterations of this technique are used in medical device industries but don't usually include equipment combining the needle an" https://en.wikipedia.org/wiki/Geophysical%20MASINT,"Geophysical MASINT is a branch of Measurement and Signature Intelligence (MASINT) that involves phenomena transmitted through the earth (ground, water, atmosphere) and manmade structures including emitted or reflected sounds, pressure waves, vibrations, and magnetic field or ionosphere disturbances. According to the United States Department of Defense, MASINT has technically derived intelligence (excluding traditional imagery IMINT and signals intelligence SIGINT) that – when collected, processed, and analyzed by dedicated MASINT systems – results in intelligence that detects, tracks, identifies or describes the signatures (distinctive characteristics) of fixed or dynamic target sources. MASINT was recognized as a formal intelligence discipline in 1986. Another way to describe MASINT is a ""non-literal"" discipline. It feeds on a target's unintended emissive by-products, the ""trails"" - the spectral, chemical or RF that an object leaves behind. 
These trails form distinct signatures, which can be exploited as reliable discriminators to characterize specific events or disclose hidden targets."" As with many branches of MASINT, specific techniques may overlap with the six major conceptual disciplines of MASINT defined by the Center for MASINT Studies and Research, which divides MASINT into Electro-optical, Nuclear, Geophysical, Radar, Materials, and Radiofrequency disciplines. Military requirements Geophysical sensors have a long history in conventional military and commercial applications, from weather prediction for sailing, to fish finding for commercial fisheries, to nuclear test ban verification. New challenges, however, keep emerging. For first-world military forces opposing other conventional militaries, there is an assumption that if a target can be located, it can be destroyed. As a result, concealment and deception have taken on new criticality. ""Stealth"" low-observability aircraft have gotten much attention, and new surface ship designs feature observabili" https://en.wikipedia.org/wiki/Security%20hacker,"A security hacker is someone who explores methods for breaching defenses and exploiting weaknesses in a computer system or network. Hackers may be motivated by a multitude of reasons, such as profit, protest, information gathering, challenge, recreation, or evaluation of a system weaknesses to assist in formulating defenses against potential hackers. Longstanding controversy surrounds the meaning of the term ""hacker."" In this controversy, computer programmers reclaim the term hacker, arguing that it refers simply to someone with an advanced understanding of computers and computer networks, and that cracker is the more appropriate term for those who break into computers, whether computer criminals (black hats) or computer security experts (white hats). A 2014 article noted that ""the black-hat meaning still prevails among the general public"". The subculture that has evolved around hackers is often referred to as the ""computer underground"". History Birth of subculture and entering mainstream: 1960s-1980s The subculture around such hackers is termed network hacker subculture, hacker scene, or computer underground. It initially developed in the context of phreaking during the 1960s and the microcomputer BBS scene of the 1980s. It is implicated with 2600: The Hacker Quarterly and the alt.2600 newsgroup. In 1980, an article in the August issue of Psychology Today (with commentary by Philip Zimbardo) used the term ""hacker"" in its title: ""The Hacker Papers."" It was an excerpt from a Stanford Bulletin Board discussion on the addictive nature of computer use. In the 1982 film Tron, Kevin Flynn (Jeff Bridges) describes his intentions to break into ENCOM's computer system, saying ""I've been doing a little hacking here."" CLU is the software he uses for this. By 1983, hacking in the sense of breaking computer security had already been in use as computer jargon, but there was no public awareness about such activities. However, the release of the film WarGames that year, featuri" https://en.wikipedia.org/wiki/Iron%20in%20biology,"Iron is an important biological element. It is used in both the ubiquitous Iron-sulfur proteins and in Vertebrates it is used in Hemoglobin which is essential for Blood and oxygen transport. Overview Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. 
Iron-containing proteins participate in transport, storage and used of oxygen. Iron proteins are involved in electron transfer. The ubiquity of Iron in life has led to the Iron–sulfur world hypothesis that Iron was a central component of the environment of early life. Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron (III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries it in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable comple" https://en.wikipedia.org/wiki/Flash%20memory%20controller,"A flash memory controller (or flash controller) manages data stored on flash memory (usually NAND flash) and communicates with a computer or electronic device. Flash memory controllers can be designed for operating in low duty-cycle environments like memory cards, or other similar media for use in PDAs, mobile phones, etc. USB flash drives use flash memory controllers designed to communicate with personal computers through the USB port at a low duty-cycle. Flash controllers can also be designed for higher duty-cycle environments like solid-state drives (SSD) used as data storage for laptop computer systems up to mission-critical enterprise storage arrays. Initial setup After a flash storage device is initially manufactured, the flash controller is first used to format the flash memory. This ensures the device is operating properly, it maps out bad flash memory cells, and it allocates spare cells to be substituted for future failed cells. Some part of the spare cells is also used to hold the firmware which operates the controller and other special features for a particular storage device. A directory structure is created to allow the controller to convert requests for logical sectors into the physical locations on the actual flash memory chips. Reading, writing, and erasing When the system or device needs to read data from or write data to the flash memory, it will communicate with the flash memory controller. Simpler devices like SD cards and USB flash drives typically have a small number of flash memory die connected simultaneously. Operations are limited to the speed of the individual flash memory die. In contrast, a high-performance solid-state drive will have more dies organized with parallel communication paths to enable speeds many times greater than that of a single flash die. 
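As a concrete (and deliberately toy-sized) illustration of the directory structure described above, the Python sketch below maps logical sectors to physical blocks and substitutes a spare block when a block is retired; the names and structure are invented for illustration, and real flash translation layers also handle wear levelling, garbage collection and power-loss recovery.

```python
# Toy flash translation layer: maps logical sectors to physical blocks and
# substitutes spare blocks when a physical block is marked bad (illustrative only).

class ToyFTL:
    def __init__(self, num_blocks, num_spares):
        self.mapping = {i: i for i in range(num_blocks)}  # logical -> physical
        self.spares = list(range(num_blocks, num_blocks + num_spares))
        self.bad = set()

    def physical(self, logical):
        return self.mapping[logical]

    def mark_bad(self, logical):
        """Retire the physical block behind `logical` and remap it to a spare."""
        self.bad.add(self.mapping[logical])
        if not self.spares:
            raise RuntimeError("out of spare blocks")
        self.mapping[logical] = self.spares.pop()


ftl = ToyFTL(num_blocks=8, num_spares=2)
print(ftl.physical(3))   # 3
ftl.mark_bad(3)          # block failed during a program/erase operation
print(ftl.physical(3))   # now remapped to a spare block, e.g. 9
```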
Wear-leveling and block picking Flash memory can withstand a limited number of program-erase cycles. If a particular flash memory block were programmed and erased repeatedly withou" https://en.wikipedia.org/wiki/Comparison%20of%20CPU%20microarchitectures,"The following is a comparison of CPU microarchitectures. See also Processor design Comparison of instruction set architectures Notes" https://en.wikipedia.org/wiki/Decorrelation,"Decorrelation is a general term for any process that is used to reduce autocorrelation within a signal, or cross-correlation within a set of signals, while preserving other aspects of the signal. A frequently used method of decorrelation is the use of a matched linear filter to reduce the autocorrelation of a signal as far as possible. Since the minimum possible autocorrelation for a given signal energy is achieved by equalising the power spectrum of the signal to be similar to that of a white noise signal, this is often referred to as signal whitening. Process Although most decorrelation algorithms are linear, non-linear decorrelation algorithms also exist. Many data compression algorithms incorporate a decorrelation stage. For example, many transform coders first apply a fixed linear transformation that would, on average, have the effect of decorrelating a typical signal of the class to be coded, prior to any later processing. This is typically a Karhunen–Loève transform, or a simplified approximation such as the discrete cosine transform. By comparison, sub-band coders do not generally have an explicit decorrelation step, but instead exploit the already-existing reduced correlation within each of the sub-bands of the signal, due to the relative flatness of each sub-band of the power spectrum in many classes of signals. Linear predictive coders can be modelled as an attempt to decorrelate signals by subtracting the best possible linear prediction from the input signal, leaving a whitened residual signal. Decorrelation techniques can also be used for many other purposes, such as reducing crosstalk in a multi-channel signal, or in the design of echo cancellers. In image processing decorrelation techniques can be used to enhance or stretch, colour differences found in each pixel of an image. This is generally termed as 'decorrelation stretching'. The concept of decorrelation can be applied in many other fields. In neuroscience, decorrelation is used in the an" https://en.wikipedia.org/wiki/Mutualism%20%28biology%29,"Mutualism describes the ecological interaction between two or more species where each species has a net benefit. Mutualism is a common type of ecological interaction. Prominent examples include most vascular plants engaged in mutualistic interactions with mycorrhizae, flowering plants being pollinated by animals, vascular plants being dispersed by animals, and corals with zooxanthellae, among many others. Mutualism can be contrasted with interspecific competition, in which each species experiences reduced fitness, and exploitation, or parasitism, in which one species benefits at the expense of the other. The term mutualism was introduced by Pierre-Joseph van Beneden in his 1876 book Animal Parasites and Messmates to mean ""mutual aid among species"". Mutualism is often conflated with two other types of ecological phenomena: cooperation and symbiosis. 
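Returning to decorrelation above: one standard construction is to whiten a multi-channel signal using an eigendecomposition of its covariance matrix, so that the output channels are approximately uncorrelated with unit variance. The NumPy sketch below shows this generic whitening recipe (not the specific matched-filter method mentioned in the excerpt).

```python
import numpy as np

def whiten(x, eps=1e-8):
    """Decorrelate the rows of x (channels x samples) and normalise their power.

    Returns a signal whose sample covariance is approximately the identity.
    """
    x = x - x.mean(axis=1, keepdims=True)       # remove per-channel mean
    cov = np.cov(x)                             # channel covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric eigendecomposition
    w = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return w @ x

rng = np.random.default_rng(0)
mixed = rng.normal(size=(2, 10000))
mixed[1] += 0.9 * mixed[0]                      # introduce cross-correlation
print(np.round(np.cov(whiten(mixed)), 3))       # close to the identity matrix
```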
Cooperation most commonly refers to increases in fitness through within-species (intraspecific) interactions, although it has been used (especially in the past) to refer to mutualistic interactions, and it is sometimes used to refer to mutualistic interactions that are not obligate. Symbiosis involves two species living in close physical contact over a long period of their existence and may be mutualistic, parasitic, or commensal, so symbiotic relationships are not always mutualistic, and mutualistic interactions are not always symbiotic. Despite a different definition between mutualistic interactions and symbiosis, mutualistic and symbiosis have been largely used interchangeably in the past, and confusion on their use has persisted. Mutualism plays a key part in ecology and evolution. For example, mutualistic interactions are vital for terrestrial ecosystem function as about 80% of land plants species rely on mycorrhizal relationships with fungi to provide them with inorganic compounds and trace elements. As another example, the estimate of tropical rainforest plants with seed dispersal mutualisms with animals ranges " https://en.wikipedia.org/wiki/Three-domain%20system,"The three-domain system is a biological classification introduced by Carl Woese, Otto Kandler, and Mark Wheelis in 1990 that divides cellular life forms into three domains, namely Archaea, Bacteria, and Eukarya. The key difference from earlier classifications such as the two-empire system and the five-kingdom classification is the splitting of Archaea from Bacteria as completely different organisms. It has been challenged by the two-domain system that divides organisms into Bacteria and Archaea only, as Eukaryotes are considered as one group of Archaea. Background Woese argued, on the basis of differences in 16S rRNA genes, that bacteria, archaea, and eukaryotes each arose separately from an ancestor with poorly developed genetic machinery, often called a progenote. To reflect these primary lines of descent, he treated each as a domain, divided into several different kingdoms. Originally his split of the prokaryotes was into Eubacteria (now Bacteria) and Archaebacteria (now Archaea). Woese initially used the term ""kingdom"" to refer to the three primary phylogenic groupings, and this nomenclature was widely used until the term ""domain"" was adopted in 1990. Acceptance of the validity of Woese's phylogenetically valid classification was a slow process. Prominent biologists including Salvador Luria and Ernst Mayr objected to his division of the prokaryotes. Not all criticism of him was restricted to the scientific level. A decade of labor-intensive oligonucleotide cataloging left him with a reputation as ""a crank"", and Woese would go on to be dubbed ""Microbiology's Scarred Revolutionary"" by a news article printed in the journal Science in 1997. The growing amount of supporting data led the scientific community to accept the Archaea by the mid-1980s. Today, very few scientists still accept the concept of a unified Prokarya. Classification The three-domain system adds a level of classification (the domains) ""above"" the kingdoms present in the previously used five- or" https://en.wikipedia.org/wiki/Radisys,"Radisys Corporation is an American technology company located in Hillsboro, Oregon, United States that makes technology used by telecommunications companies in mobile networks. Founded in 1987 in Oregon by former employees of Intel, the company went public in 1995. 
The company's products are used in mobile network applications such as small cell radio access networks, wireless core network elements, deep packet inspection and policy management equipment; conferencing, and media services including voice, video and data. In 2015, the first-quarter revenues of Radisys totaled $48.7 million, and the company employed approximately 700 people. Arun Bhikshesvaran is the company's chief executive officer. On 30 June 2018, multinational conglomerate Reliance Industries acquired Radisys for $74 million. It now operates as an independent subsidiary. History Radisys was founded in 1987 as Radix Microsystems in Beaverton, Oregon, by former Intel engineers Dave Budde and Glen Myers. The first investors were employees who put up $50,000 each, with Tektronix later investing additional funds into the company. Originally located in space leased from Sequent Computer Systems, by 1994 the company had grown to annual sales of $20 million. The company's products were computers used in end products ranging from automated teller machines to paint mixers. On October 20, 1995, the company became a publicly traded company when it held an initial public offering (IPO). The IPO raised $19.6 million for Radisys after selling 2.7 million shares at $12 per share. In 1996, the company moved its headquarters to a new campus in Hillsboro, and at that time sales reached $80 million and the company had a profit of $9.6 million that year with 175 employees. Company co-founder Dave Budde left the company in 1997, with company revenues at $81 million annually at that time. The company grew in part by acquisitions such as Sonitech International in 1997, part of IBM's Open Computing Platform unit and Texas Micro in 1999" https://en.wikipedia.org/wiki/CMOS%20amplifier,"CMOS amplifiers (complementary metal–oxide–semiconductor amplifiers) are ubiquitous analog circuits used in computers, audio systems, smartphones, cameras, telecommunication systems, biomedical circuits, and many other systems. Their performance impacts the overall specifications of the systems. They take their name from the use of MOSFETs (metal–oxide–semiconductor field-effect transistors) as opposed to bipolar junction transistors (BJTs). MOSFETs are simpler to fabricate and therefore less expensive than BJTs, while still providing a sufficiently high transconductance to allow the design of very high performance circuits. In high performance CMOS (complementary metal–oxide–semiconductor) amplifier circuits, transistors are not only used to amplify the signal but are also used as active loads to achieve higher gain and output swing in comparison with resistive loads. CMOS technology was introduced primarily for digital circuit design. In the last few decades, to improve speed, power consumption, required area, and other aspects of digital integrated circuits (ICs), the feature size of MOSFET transistors has shrunk (minimum channel length of transistors reduces in newer CMOS technologies). This phenomenon, predicted by Gordon Moore in 1975 and called Moore’s law, states that about every two years the number of transistors doubles for the same silicon area of ICs. Progress in memory circuit design is an interesting example of how process advancement has affected required size and performance over the last decades. In 1956, a 5 MB Hard Disk Drive (HDD) weighed over a ton, while today a drive with 50,000 times more capacity and a weight of several tens of grams is very common. 
While digital ICs have benefited from the feature size shrinking, analog CMOS amplifiers have not gained corresponding advantages due to the intrinsic limitations of an analog design—such as the intrinsic gain reduction of short channel transistors, which affects th" https://en.wikipedia.org/wiki/Zermelo%27s%20theorem%20%28game%20theory%29,"In game theory, Zermelo's theorem is a theorem about finite two-person games of perfect information in which the players move alternately and in which chance does not affect the decision making process. It says that if the game cannot end in a draw, then one of the two players must have a winning strategy (i.e. can force a win). An alternate statement is that, for a game meeting all of these conditions except that a draw is now possible, either the first player can force a win, or the second player can force a win, or both players can at least force a draw. The theorem is named after Ernst Zermelo, a German mathematician and logician, who proved the theorem for the example game of chess in 1913. Example Zermelo's Theorem can be applied to all finite-stage two-player games with complete information and alternating moves. The game must satisfy the following criteria: there are two players in the game; the game is of perfect information; the board game is finite; the two players can take alternate turns; and there is no chance element present. Zermelo stated that there are many games of this type; however, his theorem has been applied mostly to the game of chess. When applied to chess, Zermelo's Theorem states ""either White can force a win, or Black can force a win, or both sides can force at least a draw"". Zermelo's algorithm is a cornerstone algorithm in game theory; however, it can also be applied in areas outside of finite games. Apart from chess, Zermelo's theorem is applied across all areas of computer science. In particular, it is applied in model checking and value interaction. Conclusions of Zermelo's theorem Zermelo's work shows that in two-person zero-sum games with perfect information, if a player is in a winning position, then that player can always force a win no matter what strategy the other player may employ. Furthermore, and as a consequence, if a player is in a winning position, it will never require more moves than there are " https://en.wikipedia.org/wiki/Return%20ratio,"The return ratio of a dependent source in a linear electrical circuit is the negative of the ratio of the current (voltage) returned to the site of the dependent source to the current (voltage) of a replacement independent source. The terms loop gain and return ratio are often used interchangeably; however, they are necessarily equivalent only in the case of a single feedback loop system with unilateral blocks. Calculating the return ratio The steps for calculating the return ratio of a source are as follows: Set all independent sources to zero. Select the dependent source for which the return ratio is sought. Place an independent source of the same type (voltage or current) and polarity in parallel with the selected dependent source. Move the dependent source to the side of the inserted source and cut the two leads joining the dependent source to the independent source. For a voltage source the return ratio is minus the ratio of the voltage across the dependent source divided by the voltage of the independent replacement source. For a current source, short-circuit the broken leads of the dependent source. 
The return ratio is minus the ratio of the resulting short-circuit current to the current of the independent replacement source. Other Methods These steps may not be feasible when the dependent sources inside the devices are not directly accessible, for example when using built-in ""black box"" SPICE models or when measuring the return ratio experimentally. For SPICE simulations, one potential workaround is to manually replace non-linear devices by their small-signal equivalent model, with exposed dependent sources. However this will have to be redone if the bias point changes. A result by Rosenstark shows that return ratio can be calculated by breaking the loop at any unilateral point in the circuit. The problem is now finding how to break the loop without affecting the bias point and altering the results. Middlebrook and Rosenstark have propose" https://en.wikipedia.org/wiki/List%20of%20physical%20constants,"The constants listed here are known values of physical constants expressed in SI units; that is, physical quantities that are generally believed to be universal in nature and thus are independent of the unit system in which they are measured. Many of these are redundant, in the sense that they obey a known relationship with other physical constants and can be determined from them. Table of physical constants Uncertainties While the values of the physical constants are independent of the system of units in use, each uncertainty as stated reflects our lack of knowledge of the corresponding value as expressed in SI units, and is strongly dependent on how those units are defined. For example, the atomic mass constant is exactly known when expressed using the dalton (its value is exactly 1 Da), but the kilogram is not exactly known when using these units, the opposite of when expressing the same quantities using the kilogram. Technical constants Some of these constants are of a technical nature and do not give any true physical property, but they are included for convenience. Such a constant gives the correspondence ratio of a technical dimension with its corresponding underlying physical dimension. These include the Boltzmann constant , which gives the correspondence of the dimension temperature to the dimension of energy per degree of freedom, and the Avogadro constant , which gives the correspondence of the dimension of amount of substance with the dimension of count of entities (the latter formally regarded in the SI as being dimensionless). By implication, any product of powers of such constants is also such a constant, such as the molar gas constant . See also List of mathematical constants Physical constant List of particles Notes" https://en.wikipedia.org/wiki/Penrose%20tiling,"A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and a tiling is aperiodic if it does not contain arbitrarily large periodic regions or patches. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s. There are several different variations of Penrose tilings with different tile shapes. The original form of Penrose tiling used tiles of four different shapes, but this was later reduced to only two shapes: either two different rhombi, or two different quadrilaterals called kites and darts. 
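Returning to Zermelo's theorem discussed above: the constructive argument behind it is backward induction over the finite game tree. The Python sketch below evaluates a tiny invented game tree with outcomes scored from player 1's point of view (+1 win, 0 draw, -1 loss) and reports which result optimal play forces; the tree and its representation are made up for illustration.

```python
# Backward induction on a finite two-player, perfect-information game tree.
# A leaf is an int giving the outcome for player 1: +1 win, 0 draw, -1 loss.
# An internal node is a list of child nodes; the player to move alternates with depth.

def solve(node, player=1):
    if isinstance(node, int):                    # leaf: the outcome is fixed
        return node
    values = [solve(child, -player) for child in node]
    return max(values) if player == 1 else min(values)

# A small invented game: player 1 picks a branch, then player 2 replies, and so on.
game = [
    [+1, [0, -1]],    # here player 2 can steer play toward a draw at best for player 1
    [[0, 0], -1],     # here player 2 can answer with an immediate loss for player 1
]
value = solve(game)
print({+1: "player 1 can force a win",
        0: "both players can force at least a draw",
       -1: "player 2 can force a win"}[value])   # prints the draw message for this tree
```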
The Penrose tilings are obtained by constraining the ways in which these shapes are allowed to fit together in a way that avoids periodic tiling. This may be done in several different ways, including matching rules, substitution tiling or finite subdivision rules, cut and project schemes, and coverings. Even constrained in this manner, each variation yields infinitely many different Penrose tilings. Penrose tilings are self-similar: they may be converted to equivalent Penrose tilings with different sizes of tiles, using processes called inflation and deflation. The pattern represented by every finite patch of tiles in a Penrose tiling occurs infinitely many times throughout the tiling. They are quasicrystals: implemented as a physical structure a Penrose tiling will produce diffraction patterns with Bragg peaks and five-fold symmetry, revealing the repeated patterns and fixed orientations of its tiles. The study of these tilings has been important in the understanding of physical materials that also form quasicrystals. Penrose tilings have also been applied in architecture and decoration, as in the floor tiling shown. Background and history Periodic and aperiodic tilings Covering a flat surface (""" https://en.wikipedia.org/wiki/Magnes%20the%20shepherd,"Magnes the shepherd, sometimes described as Magnes the shepherd boy, is a mythological figure, possibly based on a real person, who was cited by Pliny the Elder as discovering natural magnetism. His name, ""Magnes"", the Latin word for magnetite, has been attributed as the origin of the Latin root that has passed into English, giving its speakers the words magnet, magnetism, the mentioned ore, and related formulations. Other authorities have attributed the word origin to other sources. As set out in Pliny's Naturalis Historia (""Natural History""), an early encyclopedia published c. 77 CE – c. 79 CE, and as translated from the Latin in Robert Jacobus Forbes' Studies in Ancient Technology, Pliny wrote the following (attributing the source of his information, in turn, to Nicander of Colophon): Nicander is our authority that it [magnetite ore] was called Magnes from the man who first discovered it on Mount Ida and he is said to have found it when the nails of his shoes and the ferrule of his staff adhered to it, as he was pasturing his herds. The passage appears at Book XXXVI of Naturalis Historia, covering ""The Natural History of Stones"", at chapter 25 entitled ""The Magnet: Three Remedies"". Although Pliny's description is often cited, the story of Magnes the shepherd is postulated by physicist Gillian Turner to be much older, dating from approximately 900 BCE. Any writings Nicander may have made on the subject have since been lost. Written in approximately 600 CE, book XVI of Etymologiae by Isidore of Seville tells the same story as Pliny, but places Magnes in India. This is repeated in Vincent of Beauvais' Miroir du Monde (c. 1250 CE) and in Thomas Nicols' 1652 work, Lapidary, or, the History of Pretious Stones, wherein he describes Magnes as a ""shepherd of India, who was wont to keep his flocks about those mountains in India, where there was an abundance of lodestones"". Following from Pliny's account, the shepherd's name has been often cited as giving rise to the La" https://en.wikipedia.org/wiki/Xerophile,"A xerophile () is an extremophilic organism that can grow and reproduce in conditions with a low availability of water, also known as water activity. 
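Returning to the inflation and deflation of Penrose tilings described above: one common way to implement deflation for the rhombus (P3) variant is to work with Robinson half-triangles stored as complex vertices and subdivide each triangle into golden-ratio pieces. The sketch below follows that widely used recipe; the colour convention and the starting "wheel" are illustrative choices, not the only possibility.

```python
import cmath
import math

GOLDEN = (1 + math.sqrt(5)) / 2

def deflate(triangles):
    """One deflation step: each Robinson half-triangle (colour, A, B, C) is cut
    into smaller half-triangles whose union is the original triangle."""
    result = []
    for colour, a, b, c in triangles:
        if colour == 0:                  # half of a thin rhombus -> 2 pieces
            p = a + (b - a) / GOLDEN
            result += [(0, c, p, b), (1, p, c, a)]
        else:                            # half of a thick rhombus -> 3 pieces
            q = b + (a - b) / GOLDEN
            r = b + (c - b) / GOLDEN
            result += [(1, r, c, a), (1, q, r, b), (0, r, q, a)]
    return result

# Start from a wheel of ten thin half-triangles around the origin and deflate a few times.
triangles = []
for i in range(10):
    b = cmath.rect(1, (2 * i - 1) * math.pi / 10)
    c = cmath.rect(1, (2 * i + 1) * math.pi / 10)
    if i % 2 == 0:
        b, c = c, b                      # mirror alternate triangles so edges match up
    triangles.append((0, 0j, b, c))

for _ in range(4):
    triangles = deflate(triangles)
print(len(triangles))                    # number of half-triangles after four deflations
```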
Water activity (aw) is measured as the humidity above a substance relative to the humidity above pure water (aw = 1.0). Xerophiles are ""xerotolerant"", meaning tolerant of dry conditions. They can often survive in environments with water activity below 0.8; water activity above this level is typical for most life on Earth. Typically, xerotolerance is used with respect to matric drying, where a substance has a low water concentration. These environments include arid desert soils. The term osmotolerance is typically applied to organisms that can grow in solutions with high solute concentrations (salts, sugars), such as halophiles. The common food preservation method of reducing water activity (food drying) may not prevent the growth of xerophilic organisms, often resulting in food spoilage. Some mold and yeast species are xerophilic. Mold growth on bread is an example of food spoilage by xerophilic organisms. Examples of xerophiles include Trichosporonoides nigrescens, Zygosaccharomyces, and cacti. See also Xerocole Xerophyte" https://en.wikipedia.org/wiki/Consumer%E2%80%93resource%20interactions,"Consumer–resource interactions are the core motif of ecological food chains or food webs, and are an umbrella term for a variety of more specialized types of biological species interactions including prey-predator (see predation), host-parasite (see parasitism), plant-herbivore and victim-exploiter systems. These kinds of interactions have been studied and modeled by population ecologists for nearly a century. Species at the bottom of the food chain, such as algae and other autotrophs, consume non-biological resources, such as minerals and nutrients of various kinds, and they derive their energy from light (photons) or chemical sources. Species higher up in the food chain survive by consuming other species and can be classified by what they eat and how they obtain or find their food. Classification of consumer types The standard categorization Various terms have arisen to define consumers by what they eat, such as meat-eating carnivores, fish-eating piscivores, insect-eating insectivores, plant-eating herbivores, seed-eating granivores, and fruit-eating frugivores; omnivores are both meat eaters and plant eaters. An extensive classification of consumer categories based on a list of feeding behaviors exists. The Getz categorization Another way of categorizing consumers, proposed by South African American ecologist Wayne Getz, is based on a biomass transformation web (BTW) formulation that organizes resources into five components: live and dead animal, live and dead plant, and particulate (i.e. broken down plant and animal) matter. It also distinguishes consumers that gather their resources by moving across landscapes from those that mine their resources by becoming sessile once they have located a stock of resources large enough for them to feed on during completion of a full life history stage. In Getz's scheme, words for miners are of Greek etymology and words for gatherers are of Latin etymology. Thus a bestivore, such as a cat, preys on live animal" https://en.wikipedia.org/wiki/Altered%20nuclear%20transfer,"Altered nuclear transfer is an alternative method of obtaining embryonic-like, pluripotent stem cells without the creation and destruction of human embryos. The process was originally proposed by William B. Hurlbut. 
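For the water-activity figure quoted above, the quantity is conventionally defined as a vapour-pressure ratio (a standard definition, added for reference):

\[
a_w = \frac{p}{p_0},
\]

where \(p\) is the vapour pressure of water over the substance and \(p_0\) the vapour pressure of pure water at the same temperature, so pure water has \(a_w = 1.0\) and xerophiles tolerate values below about 0.8.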
External links Explanation of the theory of Altered Nuclear Transfer Stem cell harvesting techniques Biological techniques and tools Stem cells Induced stem cells" https://en.wikipedia.org/wiki/Mycobiota,"Mycobiota (plural noun, no singular) are a group of all the fungi present in a particular geographic region (e.g. ""the mycobiota of Ireland"") or habitat type (e.g. ""the mycobiota of cocoa""). An analogous term for Mycobiota is funga. Human mycobiota Mycobiota exist on the surface and in the gastrointestinal system of humans. There are as many as sixty-six genera and 184 species in the gastrointestinal tract of healthy people. Most of these are in the Candida genera. Though found to be present on the skin and in the gi tract in healthy individuals, the normal resident mycobiota can become pathogenic in those who are immunocompromized. Such multispecies infections lead to higher mortalities. In addition hospital-acquired infections by C. albicans have become a cause of major health concerns. A high mortality rate of 40-60% is associated with systemic infection. The best-studied of these are Candida species due to their ability to become pathogenic in immunocompromised and even in healthy hosts. Yeasts are also present on the skin, such as Malassezia species, where they consume oils secreted from the sebaceous glands. Pityrosporum (Malassezia) ovale, which is lipid-dependent and found only on humans. P. ovale was later divided into two species, P. ovale and P. orbiculare, but current sources consider these terms to refer to a single species of fungus, with M. furfur the preferred name. Other uses There is a peer reviewed mycological journal titled Mycobiota." https://en.wikipedia.org/wiki/Continuous%20Computing,"Continuous Computing was a privately held company based in San Diego and founded in 1998 that provides telecom systems made up of telecom platforms and Trillium software, including protocol software stacks for femtocells and 4G wireless / Long Term Evolution (LTE). The company also sells standalone Trillium software products and ATCA hardware components, as well as professional services. Continuous Computing's Trillium software addresses LTE Femtocells (Home eNodeB) and pico / macro eNodeBs, as well as the Evolved Packet Core (EPC), Mobility Management Entity (MME), Serving Gateway (SWG) and Evolved Packet Data Gateway (ePDG). The company is said to be the first systems vendor to introduce an end-to-end offering that spans the range of LTE network infrastructure from the Home NodeB (Macro / Pico base stations) to the Evolved Packet Core (EPC). History In February 2003, Continuous Computing acquired Trillium Digital Systems' intellectual property, customers and also hired some Trillium engineering, sales and marketing staff from Intel Corporation. In July 2004, Continuous Computing expanded with the opening of a major software development center in Bangalore, India. The company acquired key products, people, technology and other assets from China-based UP Technologies Ltd. in July 2005. In October 2007, the company launched ""FlexTCA"" platforms, targeting the security and wireless core vertical telecom markets. In February 2008, Continuous Computing announced the availability of its upgraded Trillium 3G / 4G Wireless protocol software for comprehensive support of Universal Mobile Telecommunications System (UMTS) High-Speed Packet Access (HSPA) functionality in alignment with 3GPP Release 7 standards. 
These performance improvements increase the data rates and bandwidth over the air interface in 3G networks. Continuous Computing also announced in February 2008 its partnership with picoChip Designs Ltd. This partnership was created to speed the development of the" https://en.wikipedia.org/wiki/Chlororespiration,"Chlororespiration is a respiratory process that takes place within plants. Inside plant cells there is an organelle called the chloroplast, which contains the thylakoid membrane. This membrane contains an enzyme called NAD(P)H dehydrogenase which transfers electrons in a linear chain to oxygen molecules. This electron transport chain (ETC) within the chloroplast also interacts with those in the mitochondria where respiration takes place. Chlororespiration also interacts with photosynthesis. If photosynthesis is inhibited by environmental stressors such as water deficit, increased heat, increased or decreased light exposure, or chilling stress, then chlororespiration is one of the crucial ways in which plants compensate for chemical energy synthesis. Chlororespiration – the latest model Initially, the presence of chlororespiration as a legitimate respiratory process in plants was heavily doubted. However, experimentation on Chlamydomonas reinhardtii showed plastoquinone (PQ) to be a redox carrier. The role of this redox carrier is to transport electrons from the NAD(P)H enzyme to oxygen molecules on the thylakoid membrane. Using this cyclic electron chain around photosystem one (PS I), chlororespiration compensates for the lack of light. This cyclic pathway also allows electrons to re-enter the PQ pool through NAD(P)H enzyme activity and production, which is then used to supply ATP molecules (energy) to plant cells. In 2002, the discovery of the molecules plastid terminal oxidase (PTOX) and the NDH complexes revolutionised the concept of chlororespiration. Using evidence from experimentation on the plant species Rosa Meillandina, this latest model treats PTOX as an enzyme that prevents the PQ pool from over-reducing by stimulating its reoxidation, whereas the NDH complexes provide a gateway for electrons to form an ETC. The presence of such molecules is apparent in the non-" https://en.wikipedia.org/wiki/In%20vitro%20models%20for%20calcification,"In vitro models for calcification may refer to systems that have been developed in order to reproduce, in the best possible way, the calcification process that tissues or biomaterials undergo inside the body. The aim of these systems is to mimic the high levels of calcium and phosphate present in the blood and measure the extent of crystal deposition. Different variations can include other parameters to increase the veracity of these models, such as flow, pressure, compliance and resistance. All the systems have different limitations that have to be acknowledged regarding the operating conditions and the degree of representation. The rationale for using such models is to partially replace in vivo animal testing, whilst providing far more controllable and independent parameters than an animal model. The main use of these models is to study the calcification potential of prostheses that are in direct contact with the blood. In this category we find examples such as animal tissue prostheses (xenogeneic bioprosthesis). 
Xenogeneic heart valves are of special importance for this area of study as they demonstrate a limited durability mainly due to the fatigue of the tissue and the calcific deposits (see Aortic valve replacement). Description In vitro calcification models have been used in medical implant development to evaluate the calcification potential of the medical device or tissue. They can be considered a subfamily of the bioreactors that have been used in the field of tissue engineering for tissue culture and growth. These calcification bioreactors are designed to mimic and maintain the mechano-chemical environment that the tissue encounters in vivo with a view to generating the pathological environment that would favor calcium deposition. Parameters including medium flow, pH, temperature and supersaturation of the calcifying solution used in the bioreactor are maintained and closely monitored. The monitoring of these parameters allows to obtain information" https://en.wikipedia.org/wiki/Developer%20relations,"Developer relations, abbreviated as DevRel, is an umbrella term covering the strategies and tactics for building and nurturing a community of mutually beneficial relationships between organizations and developers (e.g., software developers) as the primary users, and often influencers on purchases, of a product. Developer Relations is a form of Platform Evangelism and the activities involved are sometimes referred to as a Developer Program or DevRel Program. A DevRel program may comprise a framework built around some or all of the following aspects: Developer Marketing: Outreach and engagement activities to create awareness and convert developers to use a product. Developer Education: Product documentation and education resources to aid learning and build affinity with a product and community. Developer Experience (DX): Resources like a developer portal, product, and documentation, to activate the developer with the least friction. Developer Success: Activities to nurture and retain developers as they build and scale with a product. Community: Nourishes a community to maintain a sustainable program. The impacts and goals of DevRel programs include: Increased revenue and funding User growth and retention Product innovation and improvements Customer satisfaction and support deflection Strong technical recruiting pipeline Brand recognition and awareness Other goals of DevRel initiatives can include: Product Building: An organization relies on a community of developers to build their technology (e.g., open source). Product-market Fit: The product's success depends on understanding developers' needs and desires. Developer Enablement: Supporting developers' use of the product (e.g., by providing education, tools, and infrastructure). Developer Perception: To overcome developer perceptions that may be preventing success of a product. Hiring/Recruiting: To attract potential developers for recruitment. History and roots Apple is considered to have crea" https://en.wikipedia.org/wiki/Autoregressive%20model,"In statistics, econometrics, and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, behavior, etc. 
The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation), which should not be confused with a differential equation. Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable. Contrary to the moving-average (MA) model, the autoregressive model is not always stationary as it may contain a unit root. Definition The notation AR(p) indicates an autoregressive model of order p. The AR(p) model is defined as $X_t = \sum_{i=1}^{p} \varphi_i X_{t-i} + \varepsilon_t$ where $\varphi_1, \ldots, \varphi_p$ are the parameters of the model, and $\varepsilon_t$ is white noise. This can be equivalently written using the backshift operator B as $X_t = \sum_{i=1}^{p} \varphi_i B^i X_t + \varepsilon_t$ so that, moving the summation term to the left side and using polynomial notation, we have $\varphi(B) X_t = \left(1 - \sum_{i=1}^{p} \varphi_i B^i\right) X_t = \varepsilon_t$. An autoregressive model can thus be viewed as the output of an all-pole infinite impulse response filter whose input is white noise. Some parameter constraints are necessary for the model to remain weak-sense stationary. For example, processes in the AR(1) model with $|\varphi_1| \ge 1$ are not stationary. More generally, for an AR(p) model to be weak-sense stationary, the roots of the polynomial $\Phi(z) = 1 - \sum_{i=1}^{p} \varphi_i z^i$ must lie outside the unit circle, i.e., each (complex) root $z_i$ must satisfy $|z_i| > 1$ (see pages 89, 92). Intertemporal effect of shocks In an AR process, a one-time" https://en.wikipedia.org/wiki/Cephalopod%20size,"Cephalopods, which include squids and octopuses, vary enormously in size. The smallest are only about long and weigh less than at maturity, while the giant squid can exceed in length and the colossal squid weighs close to half a tonne (), making them the largest living invertebrates. Living species range in mass more than three-billion-fold, or across nine orders of magnitude, from the lightest hatchlings to the heaviest adults. Certain cephalopod species are also noted for having individual body parts of exceptional size. Cephalopods were at one time the largest of all organisms on Earth, and numerous species of comparable size to the largest present day squids are known from the fossil record, including enormous examples of ammonoids, belemnoids, nautiloids, orthoceratoids, teuthids, and vampyromorphids. In terms of mass, the largest of all known cephalopods were likely the giant shelled ammonoids and endocerid nautiloids, though perhaps still second to the largest living cephalopods when considering tissue mass alone. Cephalopods vastly larger than either giant or colossal squids have been postulated at various times. One of these was the St. Augustine Monster, a large carcass weighing several tonnes that washed ashore on the United States coast near St. Augustine, Florida, in 1896. Reanalyses in 1995 and 2004 of the original tissue samples—together with those of other similar carcasses—showed conclusively that they were all masses of the collagenous matrix of whale blubber. Giant cephalopods have fascinated humankind for ages. The earliest surviving records are perhaps those of Aristotle and Pliny the Elder, both of whom described squids of very large size. 
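Returning to the autoregressive-model excerpt above, a minimal numerical sketch of the AR(p) definition and the unit-circle stationarity condition (illustrative code; the coefficients are arbitrary and not from the source):

```python
import numpy as np

def simulate_ar(phi, n, sigma=1.0, seed=0):
    """Simulate X_t = sum_i phi_i * X_{t-i} + eps_t with zero initial conditions."""
    rng = np.random.default_rng(seed)
    p = len(phi)
    x = np.zeros(n + p)
    eps = rng.normal(0.0, sigma, n + p)          # white-noise input
    for t in range(p, n + p):
        past = x[t - p:t][::-1]                  # [X_{t-1}, ..., X_{t-p}]
        x[t] = np.dot(phi, past) + eps[t]
    return x[p:]

def is_weakly_stationary(phi):
    """True if all roots of Phi(z) = 1 - phi_1 z - ... - phi_p z^p lie outside the unit circle."""
    coeffs = np.concatenate(([1.0], -np.asarray(phi, dtype=float)))  # ascending powers of z
    roots = np.polynomial.polynomial.polyroots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

print(is_weakly_stationary([0.5, 0.3]))   # True: AR(2) whose roots lie outside the unit circle
print(is_weakly_stationary([1.2]))        # False: AR(1) with |phi_1| >= 1 is not stationary
x = simulate_ar([0.5, 0.3], n=1000)
print(round(float(x.var()), 2))           # finite sample variance in the stationary case
```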
Tales of giant squid have been common among mariners since ancient times, and may have inspired the monstrous kraken of Nordic legend, said to be as large as an island and capable of engulfing and sinking any ship. Similar tentacled sea monsters are known from other parts of the globe, including the Akk" https://en.wikipedia.org/wiki/Omega-categorical%20theory,"In mathematical logic, an omega-categorical theory is a theory that has exactly one countably infinite model up to isomorphism. Omega-categoricity is the special case κ = ℵ0 = ω of κ-categoricity, and omega-categorical theories are also referred to as ω-categorical. The notion is most important for countable first-order theories. Equivalent conditions for omega-categoricity Many conditions on a theory are equivalent to the property of omega-categoricity. In 1959, Erwin Engeler, Czesław Ryll-Nardzewski and Lars Svenonius independently proved several of them. Despite this, the literature still widely refers to the Ryll-Nardzewski theorem as a name for these conditions. The conditions included with the theorem vary between authors. Given a countable complete first-order theory T with infinite models, the following are equivalent: The theory T is omega-categorical. Every countable model of T has an oligomorphic automorphism group (that is, there are finitely many orbits on $M^n$ for every n). Some countable model of T has an oligomorphic automorphism group. The theory T has a model which, for every natural number n, realizes only finitely many n-types, that is, the Stone space $S_n(T)$ is finite. For every natural number n, T has only finitely many n-types. For every natural number n, every n-type is isolated. For every natural number n, up to equivalence modulo T there are only finitely many formulas with n free variables, in other words, for every n, the nth Lindenbaum–Tarski algebra of T is finite. Every model of T is atomic. Every countable model of T is atomic. The theory T has a countable atomic and saturated model. The theory T has a saturated prime model. Examples The theory of any countably infinite structure which is homogeneous over a finite relational language is omega-categorical. Hence, the following theories are omega-categorical: The theory of dense linear orders without endpoints (Cantor's isomorphism theorem) The theory of the Rado graph The theory o" https://en.wikipedia.org/wiki/Motzkin%E2%80%93Taussky%20theorem,"The Motzkin–Taussky theorem is a result from operator and matrix theory about the representation of a sum of two bounded linear operators (resp. matrices). The theorem was proven by Theodore Motzkin and Olga Taussky-Todd. The theorem is used in perturbation theory, where e.g. operators of the form $T + xA$ are examined. Statement Let $V$ be a finite-dimensional complex vector space. Furthermore, let $A, B \in \operatorname{End}(V)$ be such that all linear combinations $\alpha A + \beta B$ are diagonalizable for all $\alpha, \beta \in \mathbb{C}$. Then all eigenvalues of $\alpha A + \beta B$ are of the form $\alpha\lambda_i + \beta\mu_i$ (i.e. they are linear in $\alpha$ and $\beta$) and are independent of the choice of $\alpha, \beta$. Here $\lambda_i$ stands for an eigenvalue of $A$ and $\mu_i$ for an eigenvalue of $B$. Comments Motzkin and Taussky call the above property of the linearity of the eigenvalues in $\alpha$ and $\beta$ property L. Bibliography Kato, Tosio (1995). Perturbation Theory for Linear Operators. Berlin, Heidelberg: Springer. p. 86. ISBN 978-3-540-58661-6, doi:10.1007/978-3-642-66282-9. Friedland, Shmuel (1981). A generalization of the Motzkin-Taussky theorem. Linear Algebra and its Applications. Vol. 36. pp. 103–109. doi:10.1016/0024-3795(81)90223-8. 
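A numerical illustration of "property L" from the Motzkin–Taussky excerpt above, restricted to the easy special case of simultaneously diagonalizable matrices (a sketch with invented matrices; it is not a proof and does not cover the theorem's full hypothesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two simultaneously diagonalizable matrices A = P D1 P^-1 and B = P D2 P^-1;
# every linear combination alpha*A + beta*B is then diagonalizable as well.
P = rng.normal(size=(3, 3))
Pinv = np.linalg.inv(P)
lam = np.array([1.0, 2.0, 5.0])    # eigenvalues of A
mu = np.array([-1.0, 0.5, 3.0])    # eigenvalues of B
A = P @ np.diag(lam) @ Pinv
B = P @ np.diag(mu) @ Pinv

for alpha, beta in [(1.0, 1.0), (2.0, -0.5), (0.3, 4.0)]:
    observed = np.sort(np.linalg.eigvals(alpha * A + beta * B).real)
    predicted = np.sort(alpha * lam + beta * mu)   # linear in alpha and beta
    assert np.allclose(observed, predicted, atol=1e-6)
print("eigenvalues of alpha*A + beta*B equal alpha*lambda_i + beta*mu_i in this example")
```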
Notes Mathematical theorems Linear algebra Perturbation theory Linear operators" https://en.wikipedia.org/wiki/Estimation%20theory,"Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered: The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector. Examples For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sample of voters. Alternatively, it is desired to estimate the probability of a voter voting for a particular candidate, based on some demographic features, such as age. Or, for example, in radar the aim is to find the range of objects (airplanes, boats, etc.) by analyzing the two-way transit timing of received echoes of transmitted pulses. Since the reflected pulses are unavoidably embedded in electrical noise, their measured values are randomly distributed, so that the transit time must be estimated. As another example, in electrical communication theory, the measurements which contain information regarding the parameters of interest are often associated with a noisy signal. Basics For a given model, several statistical ""ingredients"" are needed so the estimator can be implemented. The first is a statistical sample – a set of data points taken from a random vector (RV) of size N. Put into a vector, Secondly, there are M parameters whose values are to be estimated. Third, the continuous probability density function (pdf) or its" https://en.wikipedia.org/wiki/Alpha%20beta%20filter,"An alpha beta filter (also called alpha-beta filter, f-g filter or g-h filter) is a simplified form of observer for estimation, data smoothing and control applications. It is closely related to Kalman filters and to linear state observers used in control theory. Its principal advantage is that it does not require a detailed system model. Filter equations An alpha beta filter presumes that a system is adequately approximated by a model having two internal states, where the first state is obtained by integrating the value of the second state over time. Measured system output values correspond to observations of the first model state, plus disturbances. This very low order approximation is adequate for many simple systems, for example, mechanical systems where position is obtained as the time integral of velocity. Based on a mechanical system analogy, the two states can be called position x and velocity v. Assuming that velocity remains approximately constant over the small time interval ΔT between measurements, the position state is projected forward to predict its value at the next sampling time using equation 1. Since velocity variable v is presumed constant, its projected value at the next sampling time equals the current value. 
If additional information is known about how a driving function will change the v state during each time interval, equation 2 can be modified to include it. The output measurement is expected to deviate from the prediction because of noise and dynamic effects not included in the simplified dynamic model. This prediction error r is also called the residual or innovation, based on statistical or Kalman filtering interpretations Suppose that residual r is positive. This could result because the previous x estimate was low, the previous v was low, or some combination of the two. The alpha beta filter takes selected alpha and beta constants (from which the filter gets its name), uses alpha times the deviation r to correct the position estim" https://en.wikipedia.org/wiki/List%20of%20books%20on%20popular%20physics%20concepts,"This is a list of books which talk about things related to current day physics or physics as it would be in the future. There a number of books that have been penned about specific physics concepts, e.g. quantum mechanics or kinematics, and many other books which discuss physics in general, i.e. not focussing on a single topic. There are also books that encourage beginners to enjoy physics by making them look at it from different angles. Boks Lists of books" https://en.wikipedia.org/wiki/SwitchBlade,"SwitchBlade is the registered name of a family of layer 2 and layer 3 chassis switches developed by Allied Telesis. Current models include the SwitchBlade x908 GEN2 and the SwitchBlade x8100 layer 3 chassis switches. The first model was the SwitchBlade 4000-layer 3 core chassis, which ran the earlier AlliedWare operating system. AlliedWare Plus models The family includes models using the AlliedWare Plus operating system which uses an industry standard CLI structure. SwitchBlade x908 Generation 2 The SwitchBlade x908 GEN2 was introduced in 2017 and is the latest evolution of the original SwitchBlade x908 design. It features a stackable advanced layer 3 3RU chassis switch with 2.6 Terabit/s of switching capacity. It has eight switch module bays like its predecessor although in the GEN2 they are mounted vertically to assist with cooling and cable management. The GEN2 also supports Allied Telesis' Virtual Chassis Stacking technology, but this has been enhanced to enable up to 4 SwitchBlade x908 GEN2 chassis' to be stacked over long-distances using any port-speed (10G, 40G or 100G). Each chassis includes redundant system power supply bays. Available modules XEM2-12XT - 12x 1000BASE-T/10GBASE-T copper RJ-45 ports XEM2-12XTm - 12x 1000BASE-T/NBASE-T/10GBASE-T multi-gigabit copper RJ-45 ports XEM2-12XS - 12x 10G SFP ports XEM2-4QS - 4x 40G QSFP ports XEM2-1CQ - 1x 100G QSFP28 port SwitchBlade x8100 The SwitchBlade x8100 series was launched in 2012 is an advanced layer 3 chassis switch with 1.92Tbit/s of switching capacity when two SBx81CFC960 control cards are installed. It is available in two chassis sizes, 6-slot (SBx8106) and 12-slot (SBx8112). The 12-slot chassis has 10-line card slots and 2 controller card slots. The 6-slot chassis has 4-line card slots, 1 controller card slot, and one additional slot that can accommodate either a line card or controller card. 
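A compact sketch of the alpha–beta filter cycle described in the excerpt above: predict the position from the velocity, compute the residual, then correct position with alpha and velocity with beta/ΔT (standard form; the gains and the constant-velocity test data are illustrative, not from the source).

```python
def alpha_beta_filter(measurements, dt, alpha=0.85, beta=0.2, x0=0.0, v0=0.0):
    """Track position x and velocity v from noisy position measurements."""
    x, v = x0, v0
    estimates = []
    for z in measurements:
        # Predict: project the state forward assuming constant velocity.
        x_pred = x + v * dt
        v_pred = v
        # Residual (innovation): measurement minus prediction.
        r = z - x_pred
        # Correct: alpha scales the position update, beta/dt the velocity update.
        x = x_pred + alpha * r
        v = v_pred + (beta / dt) * r
        estimates.append((x, v))
    return estimates

# Object moving at 2 m/s, sampled every 0.1 s, with a crude alternating "noise" term.
truth = [2.0 * 0.1 * k for k in range(50)]
noisy = [p + (0.05 if k % 2 else -0.05) for k, p in enumerate(truth)]
x_est, v_est = alpha_beta_filter(noisy, dt=0.1)[-1]
print(round(v_est, 1))   # close to the true velocity of 2 m/s
```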
It also features four hotswappable PSU bays, supporting load sharing and redundancy for both sys" https://en.wikipedia.org/wiki/List%20of%20mathematical%20constants,"A mathematical constant is a key number whose value is fixed by an unambiguous definition, often referred to by a symbol (e.g., an alphabet letter), or by mathematicians' names to facilitate using it across multiple mathematical problems. For example, the constant π may be defined as the ratio of the length of a circle's circumference to its diameter. The following list includes a decimal expansion and set containing each number, ordered by year of discovery. The column headings may be clicked to sort the table alphabetically, by decimal value, or by set. Explanations of the symbols in the right hand column can be found by clicking on them. List Mathematical constants sorted by their representations as continued fractions The following list includes the continued fractions of some constants and is sorted by their representations. Continued fractions with more than 20 known terms have been truncated, with an ellipsis to show that they continue. Rational numbers have two continued fractions; the version in this list is the shorter one. Decimal representations are rounded or padded to 10 places if the values are known. Sequences of constants See also Invariant (mathematics) Glossary of mathematical symbols List of mathematical symbols by subject List of numbers List of physical constants Particular values of the Riemann zeta function Physical constant Notes" https://en.wikipedia.org/wiki/Regularization%20%28physics%29,"In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a ""cutoff"", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility that ""new physics"" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an ""effective theory"" within its intended scale of use. It is distinct from renormalization, another technique to control infinities without assuming new physics, by adjusting for self-interaction feedback. Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions. Overview Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance in space which is useful, in case the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, ), but the virtue of the regulator is that for its finite value, the result is finite. However, the result usually includes terms proportional to expressions like which are not well-defined in the limit . Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must be usually followed by a related, but independent technique called renormalization. 
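A textbook-style illustration of what a regulator does, in the spirit of the regularization excerpt above (this particular integral is an invented example, not taken from the source): a logarithmically divergent integral is made finite by a momentum cutoff Λ, and the divergence reappears only when the regulator is removed.

```latex
% Cutoff regularization of a logarithmically divergent integral (illustrative):
\int_{0}^{\infty} \frac{dk}{k+m}
\;\longrightarrow\;
\int_{0}^{\Lambda} \frac{dk}{k+m}
= \ln\!\left(\frac{\Lambda+m}{m}\right),
\qquad
\text{finite for finite } \Lambda,\ \text{divergent as } \Lambda \to \infty .
```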
Renormalization is based on the requirement that some physical quantities — expressed by seemingly divergent expressions such as — are equal to the observed" https://en.wikipedia.org/wiki/Angle%20of%20arrival,"The angle of arrival (AoA) of a signal is the direction from which the signal (e.g. radio, optical or acoustic) is received. Measurement Measurement of AoA can be done by determining the direction of propagation of a radio-frequency wave incident on an antenna array or determined from maximum signal strength during antenna rotation. The AoA can be calculated by measuring the time difference of arrival (TDOA) between individual elements of the array. Generally this TDOA measurement is made by measuring the difference in received phase at each element in the antenna array. This can be thought of as beamforming in reverse. In beamforming, the signal from each element is weighed to ""steer"" the gain of the antenna array. In AoA, the delay of arrival at each element is measured directly and converted to an AoA measurement. Consider, for example, a two element array spaced apart by one-half the wavelength of an incoming RF wave. If a wave is incident upon the array at boresight, it will arrive at each antenna simultaneously. This will yield 0° phase-difference measured between the two antenna elements, equivalent to a 0° AoA. If a wave is incident upon the array at broadside, then a 180° phase difference will be measured between the elements, corresponding to a 90° AoA. In optics, AoA can be calculated using interferometry. Applications An application of AoA is in the geolocation of cell phones. The aim is either for the cell system to report the location of a cell phone placing an emergency call or to provide a service to tell the user of the cell phone where they are. Multiple receivers on a base station would calculate the AoA of the cell phone's signal, and this information would be combined to determine the phone's location. AoA is generally used to discover the location of pirate radio stations or of any military radio transmitter. In submarine acoustics, AoA is used to localize objects with active or passive ranging. Limitation Limitations on the acc" https://en.wikipedia.org/wiki/List%20of%20solids%20derived%20from%20the%20sphere,"This page lists solids derived from a sphere. Solids from cutting a sphere with one or more planes Dome Spherical cap Spherical sector Spherical segment Spherical shell Spherical wedge Solids from deforming a sphere Ellipsoid Spheroid Solid bounded by Morin surface Any Genus 0 surface Solids from intersecting a sphere with other solids or curved planes Reuleaux tetrahedron Spherical lens Notes Geometric shapes Mathematics-related lists" https://en.wikipedia.org/wiki/Sideloading,"Sideloading describes the process of transferring files between two local devices, in particular between a personal computer and a mobile device such as a mobile phone, smartphone, PDA, tablet, portable media player or e-reader. Sideloading typically refers to media file transfer to a mobile device via USB, Bluetooth, WiFi or by writing to a memory card for insertion into the mobile device, but also applies to the transfer of apps from web sources that are not vendor-approved. When referring to Android apps, ""sideloading"" typically means installing an application package in APK format onto an Android device. Such packages are usually downloaded from websites other than the official app store Google Play. 
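A small sketch of the two-element phase-difference relation described in the angle-of-arrival excerpt above, assuming the usual narrowband model Δφ = 2πd·sin(θ)/λ with the spacing d expressed in wavelengths (the function name and test values are illustrative):

```python
import math

def aoa_from_phase(delta_phi_deg, spacing_wavelengths=0.5):
    """Angle of arrival (degrees from boresight) of a plane wave at a two-element
    array, given the measured inter-element phase difference in degrees."""
    delta_phi = math.radians(delta_phi_deg)
    # Narrowband model: delta_phi = 2*pi*d*sin(theta)/lambda, with d in wavelengths.
    sin_theta = delta_phi / (2 * math.pi * spacing_wavelengths)
    return math.degrees(math.asin(sin_theta))

print(aoa_from_phase(0.0))              # 0.0  -> wave arriving at boresight
print(aoa_from_phase(180.0))            # 90.0 -> wave arriving along the array axis
print(round(aoa_from_phase(90.0), 1))   # 30.0 -> an intermediate case
```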
For Android users sideloading of apps is only possible if the user has allowed ""Unknown Sources"" in their Security Settings. When referring to iOS apps, ""sideloading"" means installing an app in IPA format onto an Apple device, usually through the use of a computer program such as Cydia Impactor or Xcode. On modern versions of iOS, the sources of the apps must be trusted by both Apple and the user in ""profiles and device management"" in settings, except when using jailbreak methods of sideloading apps. Sideloading is only allowed by Apple for internal testing and development of apps using the official SDKs. Historical The term ""sideload"" was coined in the late 1990s by online storage service i-drive as an alternative means of transferring and storing computer files virtually instead of physically. In 2000, i-drive applied for a trademark on the term. Rather than initiating a traditional file ""download"" from a website or FTP site to their computer, a user could perform a ""sideload"" and have the file transferred directly into their personal storage area on the service. Usage of this feature began to decline as newer hard drives became cheaper and the space on them grew each year into the gigabytes and the trademark application was abandoned. The advent of portable " https://en.wikipedia.org/wiki/Luminex%20Corporation,"Luminex Corporation | A DiaSorin Company is a biotechnology company which develops, manufactures and markets proprietary biological testing technologies with applications in life-sciences. Background Luminex's Multi-Analyte Profiling (xMAP) technology allows simultaneous analysis of up to 500 bioassays from a small sample volume, typically a single drop of fluid, by reading biological tests on the surface of microscopic polystyrene beads called microspheres. The xMAP technology combines this miniaturized liquid array bioassay capability with small lasers, light emitting diodes (LEDs), digital signal processors, photo detectors, charge-coupled device imaging and proprietary software to create a system offering advantages in speed, precision, flexibility and cost. The technology is currently being used within various segments of the life sciences industry, which includes the fields of drug discovery and development, and for clinical diagnostics, genetic analysis, bio-defense, food safety and biomedical research. The Luminex MultiCode technology is used for real-time polymerase chain reaction (PCR) and multiplexed PCR assays. Luminex Corporation owns 315 issued patents worldwide, including over 124 issued patents in the United States based on its multiplexing xMAP platform." https://en.wikipedia.org/wiki/Host%20model,"In computer networking, a host model is an option of designing the TCP/IP stack of a networking operating system like Microsoft Windows or Linux. When a unicast packet arrives at a host, IP must determine whether the packet is locally destined (its destination matches an address that is assigned to an interface of the host). If the IP stack is implemented with a weak host model, it accepts any locally destined packet regardless of the network interface on which the packet was received. If the IP stack is implemented with a strong host model, it only accepts locally destined packets if the destination IP address in the packet matches an IP address assigned to the network interface on which the packet was received. 
The weak host model provides better network connectivity (for example, it can be easy to find any packet arriving at the host using ordinary tools), but it also makes hosts susceptible to multihome-based network attacks. For example, in some configurations when a system running a weak host model is connected to a VPN, other systems on the same subnet can compromise the security of the VPN connection. Systems running the strong host model are not susceptible to this type of attack. The IPv4 implementation in Microsoft Windows versions prior to Windows Vista uses the weak host model. The Windows Vista and Windows Server 2008 TCP/IP stack supports the strong host model for both IPv4 and IPv6 and is configured to use it by default. However, it can also be configured to use a weak host model. The IPv4 implementation in Linux defaults to the weak host model. Source validation by reversed path, as specified in RFC 1812 can be enabled (the rp_filter option), and some distributions do so by default. This is not quite the same as the strong host model, but defends against the same class of attacks for typical multihomed hosts. arp_ignore and arp_announce can also be used to tweak this behaviour. Modern BSDs (FreeBSD, NetBSD, OpenBSD, and DragonflyBSD) all defau" https://en.wikipedia.org/wiki/Typed%20assembly%20language,"In computer science, a typed assembly language (TAL) is an assembly language that is extended to include a method of annotating the datatype of each value that is manipulated by the code. These annotations can then be used by a program (type checker) that processes the assembly language code in order to analyse how it will behave when it is executed. Specifically, such a type checker can be used to prove the type safety of code that meets the criteria of some appropriate type system. Typed assembly languages usually include a high-level memory management system based on garbage collection. A typed assembly language with a suitably expressive type system can be used to enable the safe execution of untrusted code without using an intermediate representation like bytecode, allowing features similar to those currently provided by virtual machine environments like Java and .NET. See also Proof-carrying code Further reading Greg Morrisett. ""Typed assembly language"" in Advanced Topics in Types and Programming Languages. Editor: Benjamin C. Pierce. External links TALx86, a research project from Cornell University which has implemented a typed assembler for the Intel IA-32 architecture. Assembly languages Computer security Programming language theory" https://en.wikipedia.org/wiki/Field-programmable%20analog%20array,"A field-programmable analog array (FPAA) is an integrated circuit device containing computational analog blocks (CAB) and interconnects between these blocks offering field-programmability. Unlike their digital cousin, the FPGA, the devices tend to be more application driven than general purpose as they may be current mode or voltage mode devices. For voltage mode devices, each block usually contains an operational amplifier in combination with programmable configuration of passive components. The blocks can, for example, act as summers or integrators. FPAAs usually operate in one of two modes: continuous time and discrete time. Discrete-time devices possess a system sample clock. In a switched capacitor design, all blocks sample their input signals with a sample and hold circuit composed of a semiconductor switch and a capacitor. 
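A minimal sketch contrasting the weak and strong host acceptance rules from the host-model excerpt above (pure illustration; the interface names and addresses are invented):

```python
def accepts(dest_ip, receiving_iface, iface_addrs, strong=True):
    """Decide whether a locally destined unicast packet is accepted.

    iface_addrs maps interface name -> set of local IP addresses.
    Strong host model: the destination must match an address assigned to the
    receiving interface. Weak host model: any local address will do.
    """
    if strong:
        return dest_ip in iface_addrs.get(receiving_iface, set())
    return any(dest_ip in addrs for addrs in iface_addrs.values())

# Hypothetical dual-homed host: eth0 on the LAN, vpn0 on a VPN tunnel.
addrs = {"eth0": {"192.0.2.10"}, "vpn0": {"10.8.0.5"}}

# A packet addressed to the VPN address but arriving on the LAN interface:
print(accepts("10.8.0.5", "eth0", addrs, strong=True))   # False - strong host model drops it
print(accepts("10.8.0.5", "eth0", addrs, strong=False))  # True  - weak host model accepts it
```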
This feeds a programmable op amp section which can be routed to a number of other blocks. This design requires more complex semiconductor construction. An alternative, switched-current design, offers simpler construction and does not require the input capacitor, but can be less accurate, and has lower fan-out - it can drive only one following block. Both discrete-time device types must compensate for switching noise, aliasing at the system sample rate, and sample-rate limited bandwidth, during the design phase. Continuous-time devices work more like an array of transistors or op amps which can operate at their full bandwidth. The components are connected in a particular arrangement through a configurable array of switches. During circuit design, the switch matrix's parasitic inductance, capacitance and noise contributions must be taken into account. Currently there are very few manufactures of FPAAs. On-chip resources are still very limited when compared to that of an FPGA. This resource deficit is often cited by researchers as a limiting factor in their research. History The term FPAA was first used in 1991 by Lee and Gulak. Th" https://en.wikipedia.org/wiki/List%20of%20algorithm%20general%20topics,"This is a list of algorithm general topics. Analysis of algorithms Ant colony algorithm Approximation algorithm Best and worst cases Big O notation Combinatorial search Competitive analysis Computability theory Computational complexity theory Embarrassingly parallel problem Emergent algorithm Evolutionary algorithm Fast Fourier transform Genetic algorithm Graph exploration algorithm Heuristic Hill climbing Implementation Las Vegas algorithm Lock-free and wait-free algorithms Monte Carlo algorithm Numerical analysis Online algorithm Polynomial time approximation scheme Problem size Pseudorandom number generator Quantum algorithm Random-restart hill climbing Randomized algorithm Running time Sorting algorithm Search algorithm Stable algorithm (disambiguation) Super-recursive algorithm Tree search algorithm See also List of algorithms for specific algorithms List of computability and complexity topics for more abstract theory List of complexity classes, complexity class List of data structures. Mathematics-related lists" https://en.wikipedia.org/wiki/Component-based%20software%20engineering,"Component-based software engineering (CBSE), also called component-based development (CBD), is a style of software engineering that aims to build software out of loosely-coupled, modular components. It emphasizes the separation of concerns among different parts of a software system. Definition and characteristics of components An individual software component is a software package, a web service, a web resource, or a module that encapsulates a set of related functions or data. Components communicate with each other via interfaces. Each component provides an interface (called a provided interface) through which other components can use it. When a component uses another component's interface, that interface is called a used interface. In the UML illustrations in this article, provided interfaces are represented by lollipop-symbols, while used interfaces are represented by open socket symbols. Components must be substitutable, meaning that a component must be replaceable by another one having the same interfaces without breaking the rest of the system. Components should be reusable. Component-based usability testing should be considered when software components directly interact with users. 
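A toy sketch of provided and used interfaces and of component substitutability, as described in the component-based software engineering excerpt above (all names are invented for illustration; the list of component qualities continues below):

```python
from typing import Protocol

class Logger(Protocol):              # the provided interface's contract
    def log(self, message: str) -> None: ...

class ConsoleLogger:                 # one component providing the interface
    def log(self, message: str) -> None:
        print(f"[console] {message}")

class NullLogger:                    # a substitutable component with the same interface
    def log(self, message: str) -> None:
        pass

class OrderService:                  # a component whose used interface is Logger
    def __init__(self, logger: Logger) -> None:
        self.logger = logger

    def place_order(self, item: str) -> None:
        self.logger.log(f"order placed: {item}")

# Either logger component can be substituted without changing OrderService:
OrderService(ConsoleLogger()).place_order("book")
OrderService(NullLogger()).place_order("book")
```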
Components should be: fully documented thoroughly tested robust - with comprehensive input-validity checking able to pass back appropriate error messages or return codes History The idea that software should be componentized - built from prefabricated components - first became prominent with Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968, titled Mass Produced Software Components. The conference set out to counter the so-called software crisis. McIlroy's subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea. Brad Cox of Stepstone largely defined the modern concept of a software component. He called them Software ICs and set out to crea" https://en.wikipedia.org/wiki/Mathematical%20folklore,"In common mathematical parlance, a mathematical result is called folklore if it is an unpublished result with no clear originator, but which is well-circulated and believed to be true among the specialists. More specifically, folk mathematics, or mathematical folklore, is the body of theorems, definitions, proofs, facts or techniques that circulate among mathematicians by word of mouth, but have not yet appeared in print, either in books or in scholarly journals. Quite important at times for researchers are folk theorems, which are results known, at least to experts in a field, and are considered to have established status, though not published in complete form. Sometimes, these are only alluded to in the public literature. An example is a book of exercises, described on the back cover: Another distinct category is well-knowable mathematics, a term introduced by John Conway. These mathematical matters are known and factual, but not in active circulation in relation with current research (i.e., untrendy). Both of these concepts are attempts to describe the actual context in which research work is done. Some people, in particular non-mathematicians, use the term folk mathematics to refer to the informal mathematics studied in many ethno-cultural studies of mathematics. Although the term ""mathematical folklore"" can also be used within the mathematics circle to describe the various aspects of their esoteric culture and practices (e.g., slang, proverb, limerick, joke). Stories, sayings and jokes Mathematical folklore can also refer to the unusual (and possibly apocryphal) stories or jokes involving mathematicians or mathematics that are told verbally in mathematics departments. Compilations include tales collected in G. H. Hardy's A Mathematician's Apology and ; examples include: Srinivasa Ramanujan's taxicab numbers Galileo dropping weights from the Leaning Tower of Pisa. An apple falling on Isaac Newton's head to inspire his theory of gravitation. The drinking, " https://en.wikipedia.org/wiki/Degree%20%28angle%29,"A degree (in full, a degree of arc, arc degree, or arcdegree), usually denoted by ° (the degree symbol), is a measurement of a plane angle in which one full rotation is 360 degrees. It is not an SI unit—the SI unit of angular measure is the radian—but it is mentioned in the SI brochure as an accepted unit. Because a full rotation equals 2 radians, one degree is equivalent to radians. History The original motivation for choosing the degree as a unit of rotations and angles is unknown. One theory states that it is related to the fact that 360 is approximately the number of days in a year. 
Ancient astronomers noticed that the sun, which follows through the ecliptic path over the course of the year, seems to advance in its path by approximately one degree each day. Some ancient calendars, such as the Persian calendar and the Babylonian calendar, used 360 days for a year. The use of a calendar with 360 days may be related to the use of sexagesimal numbers. Another theory is that the Babylonians subdivided the circle using the angle of an equilateral triangle as the basic unit, and further subdivided the latter into 60 parts following their sexagesimal numeric system. The earliest trigonometry, used by the Babylonian astronomers and their Greek successors, was based on chords of a circle. A chord of length equal to the radius made a natural base quantity. One sixtieth of this, using their standard sexagesimal divisions, was a degree. Aristarchus of Samos and Hipparchus seem to have been among the first Greek scientists to exploit Babylonian astronomical knowledge and techniques systematically. Timocharis, Aristarchus, Aristillus, Archimedes, and Hipparchus were the first Greeks known to divide the circle in 360 degrees of 60 arc minutes. Eratosthenes used a simpler sexagesimal system dividing a circle into 60 parts. Another motivation for choosing the number 360 may have been that it is readily divisible: 360 has 24 divisors, making it one of only 7 numbers such th" https://en.wikipedia.org/wiki/Conic%20constant,"In geometry, the conic constant (or Schwarzschild constant, after Karl Schwarzschild) is a quantity describing conic sections, and is represented by the letter K. The constant is given by where is the eccentricity of the conic section. The equation for a conic section with apex at the origin and tangent to the y axis is alternately where R is the radius of curvature at . This formulation is used in geometric optics to specify oblate elliptical (), spherical (), prolate elliptical (), parabolic (), and hyperbolic () lens and mirror surfaces. When the paraxial approximation is valid, the optical surface can be treated as a spherical surface with the same radius. Some non-optical design references use the letter p as the conic constant. In these cases, ." https://en.wikipedia.org/wiki/Outline%20of%20calculus,"Calculus is a branch of mathematics focused on limits, functions, derivatives, integrals, and infinite series. This subject constitutes a major part of contemporary mathematics education. Calculus has widespread applications in science, economics, and engineering and can solve many problems for which algebra alone is insufficient. Branches of calculus Differential calculus Integral calculus Multivariable calculus Fractional calculus Differential Geometry History of calculus History of calculus Important publications in calculus General calculus concepts Continuous function Derivative Fundamental theorem of calculus Integral Limit Non-standard analysis Partial derivative Infinite Series Calculus scholars Sir Isaac Newton Gottfried Leibniz Calculus lists List of calculus topics See also Glossary of calculus Table of mathematical symbols" https://en.wikipedia.org/wiki/Zero-order%20hold,"The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication. 
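For the conic-constant excerpt above, a small numerical sketch using the standard optical-design sag equation z(r) = r² / (R(1 + √(1 − (1+K)r²/R²))) with K = −e²; this form and the example values are stated here as the usual convention, not quoted from the source.

```python
import math

def sag(r, R, K):
    """Sag z(r) of a conic surface with vertex radius of curvature R and conic constant K,
    in the standard optical-design form z = r^2 / (R * (1 + sqrt(1 - (1+K) r^2 / R^2)))."""
    return r ** 2 / (R * (1 + math.sqrt(1 - (1 + K) * r ** 2 / R ** 2)))

R, r = 100.0, 20.0   # vertex radius of curvature and radial height (mm), illustrative
for K, label in [(0.5, "oblate ellipsoid"), (0.0, "sphere"), (-0.5, "prolate ellipsoid"),
                 (-1.0, "paraboloid"), (-2.0, "hyperboloid")]:
    print(f"{label:17s} K = {K:5.2f}   sag = {sag(r, R, K):.4f} mm")

# Consistency check: for K = 0 the sag reduces to the spherical form R - sqrt(R^2 - r^2).
print(round(R - math.sqrt(R ** 2 - r ** 2), 4))   # 2.0204, matching sag(r, R, 0.0)
```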
Time-domain model A zero-order hold reconstructs the following continuous-time waveform from a sample sequence x[n], assuming one sample per time interval T: where is the rectangular function. The function is depicted in Figure 1, and is the piecewise-constant signal depicted in Figure 2. Frequency-domain model The equation above for the output of the ZOH can also be modeled as the output of a linear time-invariant filter with impulse response equal to a rect function, and with input being a sequence of dirac impulses scaled to the sample values. The filter can then be analyzed in the frequency domain, for comparison with other reconstruction methods such as the Whittaker–Shannon interpolation formula suggested by the Nyquist–Shannon sampling theorem, or such as the first-order hold or linear interpolation between sample values. In this method, a sequence of Dirac impulses, xs(t), representing the discrete samples, x[n], is low-pass filtered to recover a continuous-time signal, x(t). Even though this is not what a DAC does in reality, the DAC output can be modeled by applying the hypothetical sequence of dirac impulses, xs(t), to a linear, time-invariant filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct constant pulse in the output. Begin by defining a continuous-time signal from the sample values, as above but using delta functions instead of rect functions: The scaling by , which arises naturally by time-scaling the delta function, has the result that the mean value of xs(t) is equal to the mean v" https://en.wikipedia.org/wiki/Metric%20tensor,"In the mathematical field of differential geometry, a metric tensor (or simply metric) is an additional structure on a manifold (such as a surface) that allows defining distances and angles, just as the inner product on a Euclidean space allows defining distances and angles there. More precisely, a metric tensor at a point of is a bilinear form defined on the tangent space at (that is, a bilinear function that maps pairs of tangent vectors to real numbers), and a metric tensor on consists of a metric tensor at each point of that varies smoothly with . A metric tensor is positive-definite if for every nonzero vector . A manifold equipped with a positive-definite metric tensor is known as a Riemannian manifold. Such a metric tensor can be thought of as specifying infinitesimal distance on the manifold. On a Riemannian manifold , the length of a smooth curve between two points and can be defined by integration, and the distance between and can be defined as the infimum of the lengths of all such curves; this makes a metric space. Conversely, the metric tensor itself is the derivative of the distance function (taken in a suitable manner). While the notion of a metric tensor was known in some sense to mathematicians such as Gauss from the early 19th century, it was not until the early 20th century that its properties as a tensor were understood by, in particular, Gregorio Ricci-Curbastro and Tullio Levi-Civita, who first codified the notion of a tensor. The metric tensor is an example of a tensor field. The components of a metric tensor in a coordinate basis take on the form of a symmetric matrix whose entries transform covariantly under changes to the coordinate system. Thus a metric tensor is a covariant symmetric tensor. 
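A small numerical sketch of the zero-order-hold reconstruction described above: each sample x[n] is held constant for one sample interval T, producing the piecewise-constant staircase (illustrative code; the sample values are invented).

```python
import numpy as np

def zero_order_hold(samples, T, t):
    """Evaluate the ZOH reconstruction at times t: sample x[n] is held over [n*T, (n+1)*T)."""
    samples = np.asarray(samples, dtype=float)
    n = np.floor(np.asarray(t) / T).astype(int)     # index of the sample currently held
    n = np.clip(n, 0, len(samples) - 1)
    return samples[n]

T = 0.5
x = [0.0, 1.0, 0.5, -1.0]                           # sample sequence x[n]
t = np.arange(0.0, len(x) * T, 0.25)                # time grid finer than T
print(zero_order_hold(x, T, t))
# -> [ 0.   0.   1.   1.   0.5  0.5 -1.  -1. ]  (each value held for T = 0.5)
```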
From the coordinate-independent point of view, a metric tensor field is defined to be a nondegenerate symmetric bilinear form on each tangent space that varies smoothly from point to point. Introduction Carl Friedrich Gauss i" https://en.wikipedia.org/wiki/Comstock%E2%80%93Needham%20system,"The Comstock–Needham system is a naming system for insect wing veins, devised by John Comstock and George Needham in 1898. It was an important step in showing the homology of all insect wings. This system was based on Needham's pretracheation theory that was later discredited by Frederic Charles Fraser in 1938. Vein terminology Longitudinal veins The Comstock and Needham system attributes different names to the veins on an insect's wing. From the anterior (leading) edge of the wing towards the posterior (rear), the major longitudinal veins are named: costa C, meaning rib subcosta Sc, meaning below the rib radius R, in analogy with a bone in the forearm, the radius media M, meaning middle cubitus Cu, meaning elbow anal veins A, in reference to its posterior location Apart from the costal and the anal veins, each vein can be branched, in which case the branches are numbered from anterior to posterior. For example, the two branches of the subcostal vein will be called Sc1 and Sc2. The radius typically branches once near the base, producing anteriorly the R1 and posteriorly the radial sector Rs. The radial sector may fork twice. The media may also fork twice, therefore having four branches reaching the wing margin. According to the Comstock–Needham system, the cubitus forks once, producing the cubital veins Cu1 and Cu2. According to some other authorities, Cu1 may fork again, producing the Cu1a and Cu1b. As there are several anal veins, they are called A1, A2, and so on. They are usually unforked. Crossveins Crossveins link the longitudinal veins, and are named accordingly (for example, the medio-cubital crossvein is termed m-cu). Some crossveins have their own name, like the humeral crossvein h and the sectoral crossvein s. Cell terminology The cells are named after the vein on the anterior side; for instance, the cell between Sc2 and R1 is called Sc2. In the case where two cells are separated by a crossvein but have the same anterior longitudinal vein, they" https://en.wikipedia.org/wiki/Location%20transparency,"In computer networks, location transparency is the use of names to identify network resources, rather than their actual location. For example, files are accessed by a unique file name, but the actual data is stored in physical sectors scattered around a disk in either the local computer or in a network. In a location transparency system, the actual location where the file is stored doesn't matter to the user. A distributed system will need to employ a networked scheme for naming resources. The main benefit of location transparency is that it no longer matters where the resource is located. Depending on how the network is set, the user may be able to obtain files that reside on another computer connected to the particular network. This means that the location of a resource doesn't matter to either the software developers or the end-users. This creates the illusion that the entire system is located in a single computer, which greatly simplifies software development. An additional benefit is the flexibility it provides. Systems resources can be moved to a different computer at any time without disrupting any software systems running on them. 
By simply updating the location that goes with the named resource, every program using that resource will be able to find it. Location transparency effectively makes the location easy to use for users, since the data can be accessed by almost everyone who can connect to the Internet, who knows the right file names for usage, and who has proper security credentials to access it. See also Transparency (computing)" https://en.wikipedia.org/wiki/Digital%20clock%20manager,"A digital clock manager (DCM) is an electronic component available on some field-programmable gate arrays (FPGAs) (notably ones produced by Xilinx). A digital clock manager is useful for manipulating clock signals inside the FPGA, and to avoid clock skew which would introduce errors in the circuit. Uses Digital clock managers have the following applications: Multiplying or dividing an incoming clock (which can come from outside the FPGA or from a Digital Frequency Synthesizer [DFS]). Making sure the clock has a steady duty cycle. Adding a phase shift with the additional use of a delay-locked loop. Eliminating clock skew within an FPGA design. See also Phase-locked loop" https://en.wikipedia.org/wiki/Mechatronics,"Mechatronics engineering, also called mechatronics, is an interdisciplinary branch of engineering that focuses on the integration of mechanical engineering, electrical engineering, electronic engineering and software engineering, and also includes a combination of robotics, computer science, telecommunications, systems, control, and product engineering. As technology advances over time, various subfields of engineering have succeeded in both adapting and multiplying. The intention of mechatronics is to produce a design solution that unifies each of these various subfields. Originally, the field of mechatronics was intended to be nothing more than a combination of mechanics, electrical and electronics, hence the name being a portmanteau of the words ""mechanics"" and ""electronics""; however, as the complexity of technical systems continued to evolve, the definition had been broadened to include more technical areas. The word mechatronics originated in Japanese-English and was created by Tetsuro Mori, an engineer of Yaskawa Electric Corporation. The word mechatronics was registered as trademark by the company in Japan with the registration number of ""46-32714"" in 1971. The company later released the right to use the word to the public, and the word began being used globally. Currently the word is translated into many languages and is considered an essential term for advanced automated industry. Many people treat mechatronics as a modern buzzword synonymous with automation, robotics and electromechanical engineering. French standard NF E 01-010 gives the following definition: ""approach aiming at the synergistic integration of mechanics, electronics, control theory, and computer science within product design and manufacturing, in order to improve and/or optimize its functionality"". History The word mechatronics was registered as trademark by the company in Japan with the registration number of ""46-32714"" in 1971. The company later released the right to use the word t" https://en.wikipedia.org/wiki/MsQuic,"MsQuic is a free and open source implementation of the IETF QUIC protocol written in C that is officially supported on the Microsoft Windows (including Server), Linux, and Xbox platforms. The project also provides libraries for macOS and Android, which are unsupported. 
It is designed to be a cross-platform general purpose QUIC library optimized for client and server applications benefitting from maximal throughput and minimal latency. By the end of 2021 the codebase had over 200,000 lines of production code, with 50,000 lines of ""core"" code, sharable across platforms. The source code is licensed under MIT License and available on GitHub. Among its features are, in part, support for asynchronous IO, receive-side scaling (RSS), UDP send and receive coalescing, and connection migrations that persist connections between client and server to overcome client IP or port changes, such as when moving throughout mobile networks. Both the HTTP/3 and SMB stacks of Microsoft Windows leverage MsQuic, with msquic.sys providing kernel-mode functionality. Being dependent upon Schannel for TLS 1.3, kernel mode therefore does not support 0-RTT. User-mode programs can implement MsQuic, with support 0-RTT, through msquic.dll, which can be built from source code or downloaded as a shared library through binary releases on the repository. Its support for the Microsoft Game Development Kit makes MsQuic possible on both Xbox and Windows. See also Transmission Control Protocol User Datagram Protocol HTTP/2 XDP for Windows" https://en.wikipedia.org/wiki/Chiplet,"A chiplet is a tiny integrated circuit (IC) that contains a well-defined subset of functionality. It is designed to be combined with other chiplets on an interposer in a single package. A set of chiplets can be implemented in a mix-and-match ""Lego-like"" assembly. This provides several advantages over a traditional system on chip: Reusable IP (intellectual property): the same chiplet can be used in many different devices Heterogeneous integration: chiplets can be fabricated with different processes, materials, and nodes, each optimized for its particular function Known good die: chiplets can be tested before assembly, improving the yield of the final device Multiple chiplets working together in a single integrated circuit may be called a multi-chip module, hybrid IC, 2.5D IC, or an advanced package. Chiplets may be connected with standards such as UCIe, bunch of wires (BoW), OpenHBI, and OIF XSR. The term was coined by University of California, Berkeley professor John Wawrzynek as a component of the RAMP Project (research accelerator for multiple processors) in 2006 extension for the Department of Energy, as was RISC-V architecture." https://en.wikipedia.org/wiki/Time-driven%20priority,"Time-driven priority (TDP) is a synchronous packet scheduling technique that implements UTC-based pipeline forwarding and can be combined with conventional IP routing to achieve the higher flexibility than another pipeline forwarding implementation known as time-driven switching (TDS) or fractional lambda switching (FλS). Packets entering a switch from the same input port during the same [time frame] (TF) can be sent out from different output ports, according to the rules that drive IP packet routing. Operation in accordance to pipeline forwarding principles ensures deterministic quality of service and low complexity packet scheduling. Specifically, packets scheduled for transmission during a TF are given maximum priority; if resources have been properly reserved, all scheduled packets will be at the output port and transmitted before their TF ends. Various aspects of the technology are covered by several patents issued by both the United States Patent and Trademark Office and the European Patent Office." 
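As a rough illustration of the time-driven priority idea described above, the following sketch gives strict priority to packets whose reserved time frame has arrived, serving best-effort traffic only with leftover capacity in that frame. The class and method names are invented for the example and do not come from any real TDP implementation.

```python
import heapq
from itertools import count

class TFScheduler:
    """Toy scheduler: packets reserved for the current time frame (TF) are sent
    with strict priority over best-effort traffic, echoing the TDP idea above.
    All names are illustrative; this is not any real router's implementation."""

    def __init__(self):
        self._scheduled = []      # min-heap of (scheduled_tf, seq, payload)
        self._best_effort = []    # FIFO queue of payloads without a reservation
        self._seq = count()       # tie-breaker so the heap never compares payloads

    def enqueue(self, payload, scheduled_tf=None):
        if scheduled_tf is None:
            self._best_effort.append(payload)
        else:
            heapq.heappush(self._scheduled, (scheduled_tf, next(self._seq), payload))

    def transmit(self, current_tf, capacity):
        """Return up to `capacity` payloads to put on the wire during `current_tf`."""
        out = []
        # Packets whose reserved TF has arrived go first; with proper resource
        # reservation they all fit before the TF ends.
        while self._scheduled and len(out) < capacity and self._scheduled[0][0] <= current_tf:
            out.append(heapq.heappop(self._scheduled)[2])
        # Any leftover capacity in this TF serves best-effort traffic.
        while self._best_effort and len(out) < capacity:
            out.append(self._best_effort.pop(0))
        return out

# Example: two packets reserved for TF 5 are sent ahead of earlier best-effort data.
sched = TFScheduler()
sched.enqueue(b"best-effort-1")
sched.enqueue(b"video-frame-a", scheduled_tf=5)
sched.enqueue(b"video-frame-b", scheduled_tf=5)
print(sched.transmit(current_tf=5, capacity=3))
```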
https://en.wikipedia.org/wiki/Hyperpalatable%20food,"Hyperpalatable food (HPF) combines high levels of fat, sugar, sodium, or carbohydrates to trigger the brain's reward system, encouraging excessive eating. The concept of hyperpalatability is foundational to ultra-processed foods, which are usually engineered to have enjoyable qualities of sweetness, saltiness, or richness. Hyperpalatable foods can stimulate the release of metabolic, stress, and appetite hormones that play a role in cravings and may interfere with the body's ability to regulate appetite and satiety. Definition Researchers have proposed specific criteria for hyperpalatability based on the percentage of calories from fat, sugar, and salt in a food item. A team at the University of Kansas analysed databases from the United States Department of Agriculture to identify the most common descriptive definitions for hyperpalatable foods. They found three combinations that most frequently defined hyperpalatable foods: Foods with more than 25 per cent of calories from fat plus more than 0.30 per cent sodium by weight (often including bacon, cheese, and salami). Foods with more than 20 per cent of calories from fat and more than 20 per cent of calories from simple sugars (typically cake, ice cream, chocolate). Foods with more than 40 per cent of calories from carbohydrates and more than 0.20 per cent sodium by weight (many brands of pretzels, popcorn, and crackers). The proportion of foods sold in the United States fitting this definition of hyperpalatable increased by twenty per cent between 1988 and 2018. Neurobiology Hyperpalatable foods have been shown to activate the reward regions of the brain, such as the hypothalamus, that influence food choices and eating behaviours. When these foods are consumed, the neurons in the reward region become very active, creating highly positive feelings of pleasure so that people want to keep seeking these foods regularly. Hyperpalatable foods can also modify the release of hormones that regulate appetite, stress, " https://en.wikipedia.org/wiki/Network%20eavesdropping,"Network eavesdropping, also known as eavesdropping attack, sniffing attack, or snooping attack, is a method that retrieves user information through the internet. This attack happens on electronic devices like computers and smartphones. This network attack typically happens under the usage of unsecured networks, such as public wifi connections or shared electronic devices. Eavesdropping attacks through the network is considered one of the most urgent threats in industries that rely on collecting and storing data. Internet users use eavesdropping via the Internet to improve information security. A typical network eavesdropper may be called a Black-hat hacker and is considered a low-level hacker as it is simple to network eavesdrop successfully. The threat of network eavesdroppers is a growing concern. Research and discussions are brought up in the public's eye, for instance, types of eavesdropping, open-source tools, and commercial tools to prevent eavesdropping. Models against network eavesdropping attempts are built and developed as privacy is increasingly valued. Sections on cases of successful network eavesdropping attempts and its laws and policies in the National Security Agency are mentioned. Some laws include the Electronic Communications Privacy Act and the Foreign Intelligence Surveillance Act. 
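The three numeric combinations proposed for hyperpalatability above (fat plus sodium, fat plus simple sugars, carbohydrates plus sodium) can be encoded directly. The sketch below is a minimal rendering of those thresholds only; the function and its argument names are hypothetical and are not taken from the cited research.

```python
def is_hyperpalatable(kcal_total, kcal_fat, kcal_sugar, kcal_carb,
                      sodium_g, weight_g):
    """Apply the three fat/sugar/sodium/carbohydrate combinations described
    above. The thresholds follow the cited University of Kansas analysis;
    the function itself and its argument names are illustrative only."""
    pct_fat = 100 * kcal_fat / kcal_total
    pct_sugar = 100 * kcal_sugar / kcal_total
    pct_carb = 100 * kcal_carb / kcal_total
    sodium_pct_by_weight = 100 * sodium_g / weight_g

    fat_sodium = pct_fat > 25 and sodium_pct_by_weight > 0.30
    fat_sugar = pct_fat > 20 and pct_sugar > 20
    carb_sodium = pct_carb > 40 and sodium_pct_by_weight > 0.20
    return fat_sodium or fat_sugar or carb_sodium

# Example: a hypothetical 30 g cracker serving with 120 kcal, 50 kcal from fat,
# 60 kcal from carbohydrate, and 0.2 g sodium.
print(is_hyperpalatable(120, 50, 5, 60, sodium_g=0.2, weight_g=30))
```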
Types of attacks Types of network eavesdropping include intervening in the process of decryption of messages on communication systems, attempting to access documents stored in a network system, and listening on electronic devices. Types include electronic performance monitoring and control systems, keystroke logging, man-in-the-middle attacks, observing exit nodes on a network, and Skype & Type. Electronic performance monitoring and control systems (EPMCSs) Electronic performance monitoring and control systems are used by employees or companies and organizations to collect, store, analyze, and report actions or performances of employers when they are working. Th" https://en.wikipedia.org/wiki/Global%20network,"A global network is any communication network which spans the entire Earth. The term, as used in this article refers in a more restricted way to bidirectional communication networks, and to technology-based networks. Early networks such as international mail and unidirectional communication networks, such as radio and television, are described elsewhere. The first global network was established using electrical telegraphy and global span was achieved in 1899. The telephony network was the second to achieve global status, in the 1950s. More recently, interconnected IP networks (principally the Internet, with estimated 2.5 billion users worldwide in 2014 ), and the GSM mobile communication network (with over 6 billion worldwide users in 2014) form the largest global networks of all. Setting up global networks requires immensely costly and lengthy efforts lasting for decades. Elaborate interconnections, switching and routing devices, laying out physical carriers of information, such as land and submarine cables and earth stations must be set in operation. In addition, international communication protocols, legislation and agreements are involved. Global networks might also refer to networks of individuals (such as scientists), communities (such as cities) and organizations (such as civil organizations) worldwide which, for instance, might have formed for the management, mitigation and resolval of global issues. Satellite global networks Communication satellites are an important part of global networks. However, there are specific low Earth orbit (LEO) global satellite constellations, such as Iridium, Globalstar and Orbcomm, which are comprised by dozens of similar satellites which are put in orbit at regularly spaced positions and form a mesh network, sometimes sending and receiving information directly among themselves. Using VSAT technology, satellite internet access has become possible. Mobile wireless networks It is estimated that 80% of the global mobile ma" https://en.wikipedia.org/wiki/Idiobiology,"Idiobiology is a branch of biology which studies individual organisms, or the study of organisms as individuals." https://en.wikipedia.org/wiki/Integrated%20circuit%20design,"Integrated circuit design, or IC design, is a sub-field of electronics engineering, encompassing the particular logic and circuit design techniques required to design integrated circuits, or ICs. ICs consist of miniaturized electronic components built into an electrical network on a monolithic semiconductor substrate by photolithography. IC design can be divided into the broad categories of digital and analog IC design. Digital IC design is to produce components such as microprocessors, FPGAs, memories (RAM, ROM, and flash) and digital ASICs. 
Digital design focuses on logical correctness, maximizing circuit density, and placing circuits so that clock and timing signals are routed efficiently. Analog IC design also has specializations in power IC design and RF IC design. Analog IC design is used in the design of op-amps, linear regulators, phase locked loops, oscillators and active filters. Analog design is more concerned with the physics of the semiconductor devices such as gain, matching, power dissipation, and resistance. Fidelity of analog signal amplification and filtering is usually critical, and as a result analog ICs use larger area active devices than digital designs and are usually less dense in circuitry. Modern ICs are enormously complicated. An average desktop computer chip, as of 2015, has over 1 billion transistors. The rules for what can and cannot be manufactured are also extremely complex. Common IC processes of 2015 have more than 500 rules. Furthermore, since the manufacturing process itself is not completely predictable, designers must account for its statistical nature. The complexity of modern IC design, as well as market pressure to produce designs rapidly, has led to the extensive use of automated design tools in the IC design process. In short, the design of an IC using EDA software is the design, test, and verification of the instructions that the IC is to carry out. Fundamentals Integrated circuit design involves the creation of ele" https://en.wikipedia.org/wiki/Campenot%20chamber,"A Campenot chamber is a three-chamber petri dish culture system devised by Robert Campenot to study neurons. Commonly used in neurobiology, the neuron soma or cell body is physically compartmentalized from its axons allowing for spatial segregation during investigation. This separation, typically done with a fluid impermeable barrier, can be used to study nerve growth factors (NGF). Neurons are particularly sensitive to environmental cues such as temperature, pH, and oxygen concentration which can affect their behavior. The Campenot chamber can be used to study spatial and temporal axon guidance in both healthy controls and in cases of neuronal injury or neurodegeneration. Campenot concluded that neuron survival and growth depend on local nerve growth factors. Structure The Campenot chamber is made up of three chambers divided by Teflon fibers. These fibers are added to a petri dish coated in collagen with 20 scratches, spaced 200 μm apart, that become the parallel tracks for axons to grow. There is also a layer of grease that works to seal the Teflon to the neuron and separates the axon processes from the cell body. Refer to Side View of Campenot Chamber figure. History of use The uniqueness of the design allows for biochemical analysis and application of a stimulus at either distal or proximal ends. Campenot chambers have been used for a variety of studies including culturing of iPSC-derived motor neurons to isolate axonal RNA which can then be used for molecular analysis,,. The chamber has also been modified to study degeneration and apoptosis of cultured hippocampal neurons induced by amyloid beta. A modified 2-chamber system was used to examine the axonal transport of herpes simplex virus by examining the transmission of the virus from axon to epidermal cells. Through this study, the virus was found to undergo a specialized mode of viral transport, assembly and sensory neuron egress. 
Recent techniques in lithography have made these chambers a more appea" https://en.wikipedia.org/wiki/Newton%27s%20theorem%20of%20revolving%20orbits,"In classical mechanics, Newton's theorem of revolving orbits identifies the type of central force needed to multiply the angular speed of a particle by a factor k without affecting its radial motion (Figures 1 and 2). Newton applied his theorem to understanding the overall rotation of orbits (apsidal precession, Figure 3) that is observed for the Moon and planets. The term ""radial motion"" signifies the motion towards or away from the center of force, whereas the angular motion is perpendicular to the radial motion. Isaac Newton derived this theorem in Propositions 43–45 of Book I of his Philosophiæ Naturalis Principia Mathematica, first published in 1687. In Proposition 43, he showed that the added force must be a central force, one whose magnitude depends only upon the distance r between the particle and a point fixed in space (the center). In Proposition 44, he derived a formula for the force, showing that it was an inverse-cube force, one that varies as the inverse cube of r. In Proposition 45 Newton extended his theorem to arbitrary central forces by assuming that the particle moved in nearly circular orbit. As noted by astrophysicist Subrahmanyan Chandrasekhar in his 1995 commentary on Newton's Principia, this theorem remained largely unknown and undeveloped for over three centuries. Since 1997, the theorem has been studied by Donald Lynden-Bell and collaborators. Its first exact extension came in 2000 with the work of Mahomed and Vawda. Historical context The motion of astronomical bodies has been studied systematically for thousands of years. The stars were observed to rotate uniformly, always maintaining the same relative positions to one another. However, other bodies were observed to wander against the background of the fixed stars; most such bodies were called planets after the Greek word ""πλανήτοι"" (planētoi) for ""wanderers"". Although they generally move in the same direction along a path across the sky (the ecliptic), individual planets sometimes re" https://en.wikipedia.org/wiki/Network%20configuration%20and%20change%20management,"Network configuration and change management (NCCM) is a discipline in information technology. Organizations are using NCCM as a way to: automate changes; reduce network downtime; network device configuration backup & restore; meet compliance. See also Change Management (ITSM) Computer networking Information technology management Computer networking" https://en.wikipedia.org/wiki/Multipacket%20reception,"In networking, multipacket reception refers to the capability of networking nodes for decoding/demodulating signals from a number of source nodes concurrently. In wireless communications, Multipacket reception is achieved using physical layer technologies like orthogonal CDMA, MIMO and space–time codes. See also MIMO – Wireless communication systems having multiple antennas at both transmitter and receiver. CDMA – Code division multiple access External links http://acronyms.thefreedictionary.com/MPR Computer networking" https://en.wikipedia.org/wiki/Interspecific%20competition,"Interspecific competition, in ecology, is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism, a type of symbiosis. 
Competition between members of the same species is called intraspecific competition. If a tree species in a dense forest grows taller than surrounding tree species, it is able to absorb more of the incoming sunlight. However, less sunlight is then available for the trees that are shaded by the taller tree, thus interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey, and can be negatively impacted by the presence of the other because they will have less food. Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct, interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition. Types All of the types described here can also apply to intraspecific competition, that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric). Based on mechanism Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availab" https://en.wikipedia.org/wiki/Constant-resistance%20network,"A constant-resistance network in electrical engineering is a network whose input resistance does not change with frequency when correctly terminated. Examples of constant resistance networks include: Zobel network Lattice phase equaliser Boucherot cell Bridged T delay equaliser Electrical engineering Physics-related lists" https://en.wikipedia.org/wiki/Hilbert%20transform,"In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function, of a real variable and produces another function of a real variable . The Hilbert transform is given by the Cauchy principal value of the convolution with the function (see ). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see ). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal . The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions. Definition The Hilbert transform of can be thought of as the convolution of with the function , known as the Cauchy kernel. Because 1/ is not integrable across , the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by ). Explicitly, the Hilbert transform of a function (or signal) is given by provided this integral exists as a principal value. This is precisely the convolution of with the tempered distribution . 
Alternatively, by changing variables, the principal-value integral can be written explicitly as When the Hilbert transform is applied twice in succession to a function , the result is provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is . This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of (see below). For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if is analytic in the upp" https://en.wikipedia.org/wiki/Bernstein%27s%20constant,"Bernstein's constant, usually denoted by the Greek letter β (beta), is a mathematical constant named after Sergei Natanovich Bernstein and is equal to 0.2801694990... . Definition Let En(ƒ) be the error of the best uniform approximation to a real function ƒ(x) on the interval [−1, 1] by real polynomials of no more than degree n. In the case of ƒ(x) = |x|, Bernstein showed that the limit called Bernstein's constant, exists and is between 0.278 and 0.286. His conjecture that the limit is: was disproven by Varga and Carpenter, who calculated" https://en.wikipedia.org/wiki/Proxy%20list,"A proxy list is a list of open HTTP/HTTPS/SOCKS proxy servers all on one website. Proxies allow users to make indirect network connections to other computer network services. Proxy lists include the IP addresses of computers hosting open proxy servers, meaning that these proxy servers are available to anyone on the internet. Proxy lists are often organized by the various proxy protocols the servers use. Many proxy lists index, which can be used without changing browser settings. Proxy Anonymity Levels Elite proxies - Such proxies do not change request fields and look like a real browser, and your real IP address is hidden. Server administrators will commonly be fooled into believing that you are not using a proxy. Anonymous proxies - These proxies do not show a real IP address, however, they do change the request fields, therefore it is very easy to detect that a proxy is being used by log analysis. You are still anonymous, but some server administrators may restrict proxy requests. Transparent proxies - (not anonymous, simply HTTP) - These change the request fields and they transfer the real IP. Such proxies are not applicable for security or privacy uses while surfing the web, and should only be used for network speed improvement. SOCKS is a protocol that relays TCP sessions through a firewall host to allow application users transparent access across the firewall. Because the protocol is independent of application protocols, it can be (and has been) used for many different services, such as telnet, FTP, finger, whois, gopher, WWW, etc. Access control can be applied at the beginning of each TCP session; thereafter the server simply relays the data between the client and the application server, incurring minimum processing overhead. Since SOCKS never has to know anything about the application protocol, it should also be easy for it to accommodate applications that use encryption to protect their traffic from nosy snoopers. No information about the client is se" https://en.wikipedia.org/wiki/Biorisk,"Biorisk generally refers to the risk associated with biological materials and/or infectious agents, also known as pathogens. The term has been used frequently for various purposes since the early 1990s. 
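As a numerical aside on the Hilbert transform entry above: the ±90° phase shift and the fact that two successive applications return the negative of the original function can be checked with a discrete, FFT-based approximation. The sketch below is only a periodic, sampled stand-in for the principal-value integral, and the helper name is an assumption.

```python
import numpy as np

def hilbert_fft(u):
    """Discrete Hilbert transform via the frequency domain: multiply every bin
    by -1j * sign(frequency), the +/-90 degree phase shift described above
    (the zero-frequency bin is sent to zero). This is a periodic, sampled
    approximation of the principal-value convolution, not the integral itself."""
    U = np.fft.fft(u)
    freqs = np.fft.fftfreq(len(u))
    return np.fft.ifft(-1j * np.sign(freqs) * U).real

t = np.linspace(0, 1, 1024, endpoint=False)
u = np.cos(2 * np.pi * 8 * t)

h1 = hilbert_fft(u)      # expected: sin(2*pi*8*t), a -90 degree shift of cos
h2 = hilbert_fft(h1)     # expected: -u, since applying the transform twice negates

print(np.allclose(h1, np.sin(2 * np.pi * 8 * t)))
print(np.allclose(h2, -u))
```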
The term is used by regulators, security experts, laboratory personnel and industry alike, and is used by the World Health Organization (WHO). WHO/Europe also provides tools and training courses in biosafety and biosecurity. An international Laboratory Biorisk Management Standard developed under the auspices of the European Committee for Standardization, defines biorisk as the combination of the probability of occurrence of harm and the severity of that harm where the source of harm is a biological agent or toxin. The source of harm may be an unintentional exposure, accidental release or loss, theft, misuse, diversion, unauthorized access or intentional unauthorized release. Biorisk reduction Biorisk reduction involves creating expertise in managing high-consequence pathogens, by providing training on safe handling and control of pathogens that pose significant health risks. See also Biocontainment, related to laboratory biosafety levels Biodefense Biodiversity Biohazard Biological warfare Biological Weapons Convention Biosecurity Bioterrorism Cyberbiosecurity Endangered species" https://en.wikipedia.org/wiki/Earliest%20known%20life%20forms,"The earliest known life forms on Earth are believed to be fossilized microorganisms found in hydrothermal vent precipitates, considered to be about 3.42 billion years old. The earliest time for the origin of life on Earth is at least 3.77 billion years ago, possibly as early as 4.28 billion years ago — not long after the oceans formed 4.5 billion years ago, and after the formation of the Earth 4.54 billion years ago. The earliest direct evidence of life on Earth is from microfossils of microorganisms permineralized in 3.465-billion-year-old Australian Apex chert rocks, although the validity of these microfossils is debated. Biospheres Earth remains the only place in the universe known to harbor life. The origin of life on Earth was at least 3.77 billion years ago, possibly as early as 4.28 billion years ago. The Earth's biosphere extends down to at least below the surface, and up to at least into the atmosphere, and includes soil, hydrothermal vents, and rock. Further, the biosphere has been found to extend at least below the ice of Antarctica, and includes the deepest parts of the ocean, down to rocks kilometers below the sea floor. In July 2020, marine biologists reported that aerobic microorganisms (mainly), in ""quasi-suspended animation"", were found in organically-poor sediments, up to 101.5 million years old, below the seafloor in the South Pacific Gyre (SPG) (""the deadest spot in the ocean""), and could be the longest-living life forms ever found. Under certain test conditions, life forms have been observed to survive in the vacuum of outer space. More recently, in August 2020, bacteria were found to survive for three years in outer space, according to studies conducted on the International Space Station. In February 2023, findings of a ""dark microbiome"" of unfamiliar microorganisms in the Atacama Desert in Chile, a Mars-like region of planet Earth, were reported. The total mass of the biosphere has been estimated to be as much as 4 trillion tons of carb" https://en.wikipedia.org/wiki/Competition%20%28biology%29,"Competition is an interaction between organisms or species in which both require a resource that is in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other. 
In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time). There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as ""real"" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition. According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis. Interference competition During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites. " https://en.wikipedia.org/wiki/Bin%20picking,"Bin picking (also referred to as random bin picking) is a core problem in computer vision and robotics. The goal is to have a robot with sensors and cameras attached to it pick-up known objects with random poses out of a bin using a suction gripper, parallel gripper, or other kind of robot end effector. Early work on bin picking made use of Photometric Stereo in recovering the shapes of objects and to determine their orientation in space. Amazon previously held a competition focused on bin picking referred to as the ""Amazon Picking Challenge"", which was held from 2015 to 2017. The challenge tasked entrants with building their own robot hardware and software that could attempt simplified versions of the general task of picking and stowing items on shelves. The robots were scored by how many items were picked and stowed in a fixed amount of time. The first Amazon Robotics challenge was won by a team from TU Berlin in 2015, followed by a team from TU Delft and the Dutch company ""Fizyr"" in 2016. The last Amazon Robotics Challenge was won by the Australian Centre for Robotic Vision at Queensland University of Technology with their robot named Cartman. The Amazon Robotics/Picking Challenge was discontinued following the 2017 competition. Although there can be some overlap, bin picking is distinct from ""each picking"" and the bin packing problem. See also 3D pose estimation Bowl feeder" https://en.wikipedia.org/wiki/Dynamic%20circuit%20network,"A dynamic circuit network (DCN) is an advanced computer networking technology that combines traditional packet-switched communication based on the Internet Protocol, as used in the Internet, with circuit-switched technologies that are characteristic of traditional telephone network systems. 
This combination allows user-initiated ad hoc dedicated allocation of network bandwidth for high-demand, real-time applications and network services, delivered over an optical fiber infrastructure. Implementation Dynamic circuit networks were pioneered by the Internet2 advanced networking consortium. The experimental Internet2 HOPI infrastructure, decommissioned in 2007, was a forerunner to the current SONET-based Ciena Network underlying the Internet2 DCN. The Internet2 DCN began operation in late 2007 as part of the larger Internet2 network. It provides advanced networking capabilities and resources to the scientific and research communities, such as the Large Hadron Collider (LHC) project. The Internet2 DCN is based on open-source, standards-based software, the Inter-domain Controller (IDC) protocol, developed in cooperation with ESnet and GÉANT2. The entire software set is known as the Dynamic Circuit Network Software Suite (DCN SS). Inter-domain Controller protocol The Inter-domain Controller protocol manages the dynamic provisioning of network resources participating in a dynamic circuit network across multiple administrative domain boundaries. It is a SOAP-based XML messaging protocol, secured by Web Services Security (v1.1) using the XML Digital Signature standard. It is transported over HTTP Secure (HTTPS) connections. See also Internet Protocol Suite IPv6 Fiber-optic communication" https://en.wikipedia.org/wiki/Biocontainment%20of%20genetically%20modified%20organisms,"Since the advent of genetic engineering in the 1970s, concerns have been raised about the dangers of the technology. Laws, regulations, and treaties were created in the years following to contain genetically modified organisms and prevent their escape. Nevertheless, there are several examples of failure to keep GM crops separate from conventional ones. Overview In the context of agriculture and food and feed production, co-existence means using cropping systems with and without genetically modified crops in parallel. In some countries, such as the United States, co-existence is not governed by any single law but instead is managed by regulatory agencies and tort law. In other regions, such as Europe, regulations require that the separation and the identity of the respective food and feed products must be maintained at all stages of the production process. Many consumers are critical of genetically modified plants and their products, while, conversely, most experts in charge of GMO approvals do not perceive concrete threats to health or the environment. The compromise chosen by some countries - notably the European Union - has been to implement regulations specifically governing co-existence and traceability. Traceability has become commonplace in the food and feed supply chains of most countries in the world, but the traceability of GMOs is made more challenging by the addition of very strict legal thresholds for unwanted mixing. Within the European Union, since 2001, conventional and organic food and feedstuffs can contain up to 0.9% of authorised GM material without being labelled GM (any trace of non-authorised GM products would cause shipments to be rejected). 
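A toy rendering of the EU threshold just described, assuming a simplified two-input decision (the share of authorised GM material and the presence of any non-authorised GM); real compliance logic is far more involved than this sketch.

```python
def eu_gm_labelling_status(authorised_gm_fraction, contains_unauthorised_gm):
    """Toy encoding of the EU co-existence thresholds described above.
    Function and argument names are illustrative; actual regulatory decisions
    depend on the full legislation, not this sketch."""
    if contains_unauthorised_gm:
        return "shipment rejected"            # any trace of non-authorised GM
    if authorised_gm_fraction > 0.009:        # above the 0.9% labelling threshold
        return "must be labelled as GM"
    return "no GM label required"

print(eu_gm_labelling_status(0.004, False))   # 0.4% authorised GM -> no label
print(eu_gm_labelling_status(0.02, False))    # 2% authorised GM -> label as GM
print(eu_gm_labelling_status(0.0, True))      # unauthorised trace -> rejected
```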
In the United States there is no legislation governing the co-existence of neighboring farms growing organic and GM crops; instead the US relies on a ""complex but relaxed"" combination of three federal agencies (FDA, EPA, and USDA/APHIS) and the common law tort system, governed by state law, to ma" https://en.wikipedia.org/wiki/Pulse%20shaping,"In electronics and telecommunications, pulse shaping is the process of changing a transmitted pulses' waveform to optimize the signal for its intended purpose or the communication channel. This is often done by limiting the bandwidth of the transmission and filtering the pulses to control intersymbol interference. Pulse shaping is particularly important in RF communication for fitting the signal within a certain frequency band and is typically applied after line coding and modulation. Need for pulse shaping Transmitting a signal at high modulation rate through a band-limited channel can create intersymbol interference. The reason for this are Fourier correspondences (see Fourier transform). A bandlimited signal corresponds to an infinite time signal, that causes neighbouring pulses to overlap. As the modulation rate increases, the signal's bandwidth increases. As soon as the spectrum of the signal is a sharp rectangular, this leads to a sinc shape in the time domain. This happens if the bandwidth of the signal is larger than the channel bandwidth, leading to a distortion. This distortion usually manifests itself as intersymbol interference (ISI). Theoretically for sinc shaped pulses, there is no ISI, if neighbouring pulses are perfectly aligned, i.e. in the zero crossings of each other. But this requires a very good synchronization and precise/stable sampling without jitters. As a practical tool to determine ISI, one uses the Eye pattern, that visualizes typical effects of the channel and the synchronization/frequency stability. The signal's spectrum is determined by the modulation scheme and data rate used by the transmitter, but can be modified with a pulse shaping filter. This pulse shaping will make the spectrum smooth, leading to a time limited signal again. Usually the transmitted symbols are represented as a time sequence of dirac delta pulses multiplied with the symbol. This is the formal transition from the digital to the analog domain. At this point, th" https://en.wikipedia.org/wiki/Lieb%27s%20square%20ice%20constant,"Lieb's square ice constant is a mathematical constant used in the field of combinatorics to quantify the number of Eulerian orientations of grid graphs. It was introduced by Elliott H. Lieb in 1967. Definition An n × n grid graph (with periodic boundary conditions and n ≥ 2) has n2 vertices and 2n2 edges; it is 4-regular, meaning that each vertex has exactly four neighbors. An orientation of this graph is an assignment of a direction to each edge; it is an Eulerian orientation if it gives each vertex exactly two incoming edges and exactly two outgoing edges. Denote the number of Eulerian orientations of this graph by f(n). Then is Lieb's square ice constant. Lieb used a transfer-matrix method to compute this exactly. The function f(n) also counts the number of 3-colorings of grid graphs, the number of nowhere-zero 3-flows in 4-regular graphs, and the number of local flat foldings of the Miura fold. Some historical and physical background can be found in the article Ice-type model. 
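The defining limit appears to have been dropped from the Lieb's square ice constant entry above; in standard statements of the result it reads as follows (reconstructed from the well-known value, not quoted from the source):

```latex
W = \lim_{n \to \infty} f(n)^{1/n^{2}}
  = \left(\frac{4}{3}\right)^{3/2}
  = \frac{8}{3\sqrt{3}}
  \approx 1.5396007\ldots
```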
See also Spin ice Ice-type model" https://en.wikipedia.org/wiki/Reciprocity%20%28network%20science%29,"In network science, reciprocity is a measure of the likelihood of vertices in a directed network to be mutually linked. Like the clustering coefficient, scale-free degree distribution, or community structure, reciprocity is a quantitative measure used to study complex networks. Motivation In real network problems, people are interested in determining the likelihood of occurring double links (with opposite directions) between vertex pairs. This problem is fundamental for several reasons. First, in the networks that transport information or material (such as email networks, World Wide Web (WWW), World Trade Web, or Wikipedia ), mutual links facilitate the transportation process. Second, when analyzing directed networks, people often treat them as undirected ones for simplicity; therefore, the information obtained from reciprocity studies helps to estimate the error introduced when a directed network is treated as undirected (for example, when measuring the clustering coefficient). Finally, detecting nontrivial patterns of reciprocity can reveal possible mechanisms and organizing principles that shape the observed network's topology. Definitions Traditional definition A traditional way to define the reciprocity r is using the ratio of the number of links pointing in both directions to the total number of links L With this definition, is for a purely bidirectional network while for a purely unidirectional one. Real networks have an intermediate value between 0 and 1. However, this definition of reciprocity has some defects. It cannot tell the relative difference of reciprocity compared with purely random network with the same number of vertices and edges. The useful information from reciprocity is not the value itself, but whether mutual links occur more or less often than expected by chance. Besides, in those networks containing self-linking loops (links starting and ending at the same vertex), the self-linking loops should be excluded when calculating L. Ga" https://en.wikipedia.org/wiki/QUIC,"QUIC (pronounced ""quick"") is a general-purpose transport layer network protocol initially designed by Jim Roskind at Google, implemented, and deployed in 2012, announced publicly in 2013 as experimentation broadened, and described at an IETF meeting. QUIC is used by more than half of all connections from the Chrome web browser to Google's servers. Microsoft Edge (a derivative of the open-source Chromium browser) and Firefox support it. Safari implements the protocol, however it is not enabled by default. Although its name was initially proposed as the acronym for ""Quick UDP Internet Connections"", IETF's use of the word QUIC is not an acronym; it is simply the name of the protocol. QUIC improves performance of connection-oriented web applications that are currently using TCP. It does this by establishing a number of multiplexed connections between two endpoints using User Datagram Protocol (UDP), and is designed to obsolete TCP at the transport layer for many applications, thus earning the protocol the occasional nickname ""TCP/2"". QUIC works hand-in-hand with HTTP/2's multiplexed connections, allowing multiple streams of data to reach all the endpoints independently, and hence independent of packet losses involving other streams. 
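A minimal sketch of the traditional reciprocity measure defined in the network-science entry above, with self-linking loops excluded as recommended there; the edge-set representation and function name are assumptions made for the example.

```python
def reciprocity(edges):
    """Traditional reciprocity r: the fraction of directed links (self-loops
    excluded) whose opposite link is also present in the network."""
    links = {(u, v) for (u, v) in edges if u != v}   # drop self-linking loops
    if not links:
        return 0.0
    bidirectional = sum(1 for (u, v) in links if (v, u) in links)
    return bidirectional / len(links)

# Example: 1->2 and 2->1 are mutual, 2->3 is not, so r = 2/3.
print(reciprocity({(1, 2), (2, 1), (2, 3)}))
```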
In contrast, HTTP/2 hosted on Transmission Control Protocol (TCP) can suffer head-of-line-blocking delays of all multiplexed streams if any of the TCP packets is delayed or lost. QUIC's secondary goals include reduced connection and transport latency, and bandwidth estimation in each direction to avoid congestion. It also moves congestion control algorithms into the user space at both endpoints, rather than the kernel space, which it is claimed will allow these algorithms to improve more rapidly. Additionally, the protocol can be extended with forward error correction (FEC) to further improve performance when errors are expected, and this is seen as the next step in the protocol's evolution. It has been designed to avoid protocol ossifica" https://en.wikipedia.org/wiki/Autocorrelation,"Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals. Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance. Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation. Auto-correlation of stochastic processes In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let be a random process, and be any point in time ( may be an integer for a discrete-time process or a real number for a continuous-time process). Then is the value (or realization) produced by a given run of the process at time . Suppose that the process has mean and variance at time , for each . Then the definition of the auto-correlation function between times and is where is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined. Subtracting the mean before multiplication yields the auto-covariance function between times and : Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant " https://en.wikipedia.org/wiki/Uniqueness%20quantification,"In mathematics and logic, the term ""uniqueness"" refers to the property of being the one and only object satisfying a certain condition. This sort of quantification is known as uniqueness quantification or unique existential quantification, and is often denoted with the symbols ""∃!"" or ""∃=1"". For example, the formal statement may be read as ""there is exactly one natural number such that "". Proving uniqueness The most common technique to prove the unique existence of a certain object is to first prove the existence of the entity with the desired condition, and then to prove that any two such entities (say, and ) must be equal to each other (i.e. ). 
For example, to show that the equation has exactly one solution, one would first start by establishing that at least one solution exists, namely 3; the proof of this part is simply the verification that the equation below holds: To establish the uniqueness of the solution, one would then proceed by assuming that there are two solutions, namely and , satisfying . That is, Then since equality is a transitive relation, Subtracting 2 from both sides then yields which completes the proof that 3 is the unique solution of . In general, both existence (there exists at least one object) and uniqueness (there exists at most one object) must be proven, in order to conclude that there exists exactly one object satisfying a said condition. An alternative way to prove uniqueness is to prove that there exists an object satisfying the condition, and then to prove that every object satisfying the condition must be equal to . Reduction to ordinary existential and universal quantification Uniqueness quantification can be expressed in terms of the existential and universal quantifiers of predicate logic, by defining the formula to mean which is logically equivalent to An equivalent definition that separates the notions of existence and uniqueness into two clauses, at the expense of brevity, is Another equivalent defin" https://en.wikipedia.org/wiki/UCI%20School%20of%20Biological%20Sciences,"The School of Biological Sciences is one of the academic units of the University of California, Irvine (UCI). The school is divided into four departments: developmental and cell biology, ecology and evolutionary biology, molecular biology and biochemistry, and neurobiology and behavior. With over 3,700 students it is in the top four largest schools in the university. In 2013, the Francisco J. Ayala School of Biological Sciences contained 19.4 percent of the student population It is consistently ranked in the top one hundred in U.S. News & World Report’s yearly list of best graduate schools. History The School of Biological Sciences first opened in 1965 at the University of California, Irvine and was one of the first schools founded when the university campus opened. The school's founding Dean, Edward A. Steinhaus, had four founding department chairs and started out with 17 professors. On March 12, 2014, the School was officially renamed after UCI professor and donor Francisco J. Ayala by then-Chancellor Michael V. Drake. Ayala had previously pledged to donate $10 million to the School of Biological Sciences in 2011. The school reverted to its previous name in June 2018, after a university investigation confirmed that Ayala had sexually harassed at least four women colleagues and graduate students. Notes External links University of California, Irvine Biology education Science education in the United States Science and technology in Greater Los Angeles University subdivisions in California Educational institutions established in 1965 1965 establishments in California" https://en.wikipedia.org/wiki/Defense%20Information%20System%20Network,"The Defense Information System Network (DISN) has been the United States Department of Defense's enterprise telecommunications network for providing data, video, and voice services for 40 years. The DISN end-to-end infrastructure is composed of three major segments: The sustaining base (I.e., base, post, camp, or station, and Service enterprise networks). 
The Command, Control, Communications, Computers and Intelligence (C4I) infrastructure will interface with the long-haul network to support the deployed warfighter. The sustaining base segment is primarily the responsibility of the individual Services. The long-haul transport infrastructure, which includes the communication systems and services between the fixed environments and the deployed Joint Task Force (JTF) and/or Coalition Task Force (CTF) warfighter. The long-haul telecommunications infrastructure segment is primarily the responsibility of DISA. The deployed warfighter, mobile users, and associated Combatant Commander telecommunications infrastructures are supporting the Joint Task Force (JTF) and/or Coalition Task Force (CTF). The deployed warfighter and associated Combatant Commander telecommunications infrastructure is primarily the responsibility of the individual Services. The DISN provides the following multiple networking services: Global Content Delivery System (GCDS) Data Services Sensitive but Unclassified (NIPRNet) Secret Data Services (SIPRNet) Multicast Organizational Messaging The Organizational Messaging Service provides a range of assured services to the customer community that includes the military services, DoD agencies, combatant commands (CCMDs), non-DoD U.S. government activities, and the Intelligence Community (IC). These services include the ability to exchange official information between military organizations and to support interoperability with allied nations, non-DoD activities, and the IC operating in both the strategic/fixed-base and the tactical/deployed enviro" https://en.wikipedia.org/wiki/Peptide%20microarray,"A peptide microarray (also commonly known as peptide chip or peptide epitope microarray) is a collection of peptides displayed on a solid surface, usually a glass or plastic chip. Peptide chips are used by scientists in biology, medicine and pharmacology to study binding properties and functionality and kinetics of protein-protein interactions in general. In basic research, peptide microarrays are often used to profile an enzyme (like kinase, phosphatase, protease, acetyltransferase, histone deacetylase etc.), to map an antibody epitope or to find key residues for protein binding. Practical applications are seromarker discovery, profiling of changing humoral immune responses of individual patients during disease progression, monitoring of therapeutic interventions, patient stratification and development of diagnostic tools and vaccines. Principle The assay principle of peptide microarrays is similar to an ELISA protocol. The peptides (up to tens of thousands in several copies) are linked to the surface of a glass chip typically the size and shape of a microscope slide. This peptide chip can directly be incubated with a variety of different biological samples like purified enzymes or antibodies, patient or animal sera, cell lysates and then be detected through a label-dependent fashion, for example, by a primary antibody that targets the bound protein or modified substrates. After several washing steps a secondary antibody with the needed specificity (e.g. anti IgG human/mouse or anti phosphotyrosine or anti myc) is applied. Usually, the secondary antibody is tagged by a fluorescence label that can be detected by a fluorescence scanner. Other label-dependent detection methods includes chemiluminescence, colorimetric or autoradiography. 
Label-dependent assays are rapid and convenient to perform, but risk giving rise to false positive and negative results. More recently, label-free detection including surface plasmon resonance (SPR) spectroscopy, mass spectrometry (" https://en.wikipedia.org/wiki/Pacemaker%20crosstalk,"Pacemaker crosstalk results when the pacemaker-generated electrical event in one chamber is sensed by the lead in another chamber, resulting in inappropriate inhibition of the pacing artifact in the second chamber. Cause Crosstalk can only occur in dual chamber or biventricular pacemaker. It happens less often in more recent models of dual chamber pacemakers due to the addition of a ventricular blanking period, which coincides with the atrial stimulus. This helps to prevent ventricular channel oversensing of atrial output. Newer dual chamber pacemakers also use bipolar leads with a smaller pacing spike, and steroid eluting leads with lower pacing thresholds. Crosstalk is more common in unipolar systems since they require a larger pacing spike. Crosstalk is sometimes referred to as crosstalk inhibition, far-field sensing, or self-inhibition. In some cases, crosstalk can occur in the pulse generator circuit itself, though more common causes include atrial lead dislodgement into the ventricle, ventricular lead dislodgement into the atrium, high atrial output current, high ventricular sensitivity, and short ventricular blanking period. Treatment In general, the treatment of crosstalk includes decreasing atrial pacing output, decreasing atrial pulse width, decreasing ventricular sensitivity, increasing the ventricular blanking period, activating ventricular safety pacing, and new atrial lead implant if insulation failure mandates unipolar programming. See also Pacemaker failure Electrical conduction system of the heart" https://en.wikipedia.org/wiki/Quadrature%20filter,"In signal processing, a quadrature filter is the analytic representation of the impulse response of a real-valued filter: If the quadrature filter is applied to a signal , the result is which implies that is the analytic representation of . Since is an analytic signal, it is either zero or complex-valued. In practice, therefore, is often implemented as two real-valued filters, which correspond to the real and imaginary parts of the filter, respectively. An ideal quadrature filter cannot have a finite support. It has single sided support, but by choosing the (analog) function carefully, it is possible to design quadrature filters which are localized such that they can be approximated by means of functions of finite support. A digital realization without feedback (FIR) has finite support. Applications This construction will simply assemble an analytic signal with a starting point to finally create a causal signal with finite energy. The two Delta Distributions will perform this operation. This will impose an additional constraint on the filter. Single frequency signals For single frequency signals (in practice narrow bandwidth signals) with frequency the magnitude of the response of a quadrature filter equals the signal's amplitude A times the frequency function of the filter at frequency . This property can be useful when the signal s is a narrow-bandwidth signal of unknown frequency. By choosing a suitable frequency function Q of the filter, we may generate known functions of the unknown frequency which then can be estimated. 
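The quadrature-filter entry above can be illustrated numerically: build the analytic representation of a real filter's impulse response, apply it to a real signal, and check that the result equals the analytic representation of the conventionally filtered signal. The discrete, periodic construction below (circular convolution via the FFT) is an approximation chosen for brevity, and the helper names are assumptions.

```python
import numpy as np

def analytic(x):
    """Analytic representation of a real periodic sequence: zero the
    negative-frequency bins and double the positive ones (DC kept as is)."""
    X = np.fft.fft(x)
    n = len(x)
    mask = np.zeros(n)
    mask[0] = 1.0
    if n % 2 == 0:
        mask[1:n // 2] = 2.0
        mask[n // 2] = 1.0            # shared Nyquist bin
    else:
        mask[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * mask)

rng = np.random.default_rng(0)
n = 256
s = rng.standard_normal(n)                      # real input signal
f = rng.standard_normal(n) * np.hanning(n)      # some real filter impulse response

# Quadrature filter: the analytic representation of the real filter f.
q = analytic(f)

# Circular convolution via the FFT; applying q to s yields the analytic
# representation of the conventionally filtered signal, as stated above.
conv = lambda a, b: np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))
lhs = conv(q, s)
rhs = analytic(conv(f, s).real)
print(np.allclose(lhs, rhs))
```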
See also Analytic signal Hilbert transform Signal processing" https://en.wikipedia.org/wiki/ISO%2031-11,"ISO 31-11:1992 was the part of international standard ISO 31 that defines mathematical signs and symbols for use in physical sciences and technology. It was superseded in 2009 by ISO 80000-2:2009 and subsequently revised in 2019 as ISO-80000-2:2019. Its definitions include the following: Mathematical logic Sets Miscellaneous signs and symbols Operations Functions Exponential and logarithmic functions Circular and hyperbolic functions Complex numbers Matrices Coordinate systems Vectors and tensors Special functions See also Mathematical symbols Mathematical notation" https://en.wikipedia.org/wiki/Comparison%20theorem,"In mathematics, comparison theorems are theorems whose statement involves comparisons between various mathematical objects of the same type, and often occur in fields such as calculus, differential equations and Riemannian geometry. Differential equations In the theory of differential equations, comparison theorems assert particular properties of solutions of a differential equation (or of a system thereof), provided that an auxiliary equation/inequality (or a system thereof) possesses a certain property. Chaplygin inequality Grönwall's inequality, and its various generalizations, provides a comparison principle for the solutions of first-order ordinary differential equations. Sturm comparison theorem Aronson and Weinberger used a comparison theorem to characterize solutions to Fisher's equation, a reaction--diffusion equation. Hille-Wintner comparison theorem Riemannian geometry In Riemannian geometry, it is a traditional name for a number of theorems that compare various metrics and provide various estimates in Riemannian geometry. Rauch comparison theorem relates the sectional curvature of a Riemannian manifold to the rate at which its geodesics spread apart. Toponogov's theorem Myers's theorem Hessian comparison theorem Laplacian comparison theorem Morse–Schoenberg comparison theorem Berger comparison theorem, Rauch–Berger comparison theorem Berger–Kazdan comparison theorem Warner comparison theorem for lengths of N-Jacobi fields (N being a submanifold of a complete Riemannian manifold) Bishop–Gromov inequality, conditional on a lower bound for the Ricci curvatures Lichnerowicz comparison theorem Eigenvalue comparison theorem Cheng's eigenvalue comparison theorem See also: Comparison triangle Other Limit comparison theorem, about convergence of series Comparison theorem for integrals, about convergence of integrals Zeeman's comparison theorem, a technical tool from the theory of spectral sequences" https://en.wikipedia.org/wiki/Potassium%20in%20biology,"Potassium is the main intracellular ion for all types of cells, while having a major role in maintenance of fluid and electrolyte balance. Potassium is necessary for the function of all living cells, and is thus present in all plant and animal tissues. It is found in especially high concentrations within plant cells, and in a mixed diet, it is most highly concentrated in fruits. The high concentration of potassium in plants, associated with comparatively very low amounts of sodium there, historically resulted in potassium first being isolated from the ashes of plants (potash), which in turn gave the element its modern name. 
The high concentration of potassium in plants means that heavy crop production rapidly depletes soils of potassium, and agricultural fertilizers consume 93% of the potassium chemical production of the modern world economy. The functions of potassium and sodium in living organisms are quite different. Animals, in particular, employ sodium and potassium differentially to generate electrical potentials in animal cells, especially in nervous tissue. Potassium depletion in animals, including humans, results in various neurological dysfunctions. Characteristic concentrations of potassium in model organisms are: 30–300mM in E. coli, 300mM in budding yeast, 100mM in mammalian cell and 4mM in blood plasma. Function in plants The main role of potassium in plants is to provide the ionic environment for metabolic processes in the cytosol, and as such functions as a regulator of various processes including growth regulation. Plants require potassium ions (K+) for protein synthesis and for the opening and closing of stomata, which is regulated by proton pumps to make surrounding guard cells either turgid or flaccid. A deficiency of potassium ions can impair a plant's ability to maintain these processes. Potassium also functions in other physiological processes such as photosynthesis, protein synthesis, activation of some enzymes, phloem solute transport of" https://en.wikipedia.org/wiki/Multi-index%20notation,"Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices. Definition and basic properties An n-dimensional multi-index is an -tuple of non-negative integers (i.e. an element of the -dimensional set of natural numbers, denoted ). For multi-indices and , one defines: Componentwise sum and difference Partial order Sum of components (absolute value) Factorial Binomial coefficient Multinomial coefficient where . Power . Higher-order partial derivative where (see also 4-gradient). Sometimes the notation is also used. Some applications The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following, (or ), , and (or ). Multinomial theorem Multi-binomial theorem Note that, since is a vector and is a multi-index, the expression on the left is short for . Leibniz formula For smooth functions and , Taylor series For an analytic function in variables one has In fact, for a smooth enough function, we have the similar Taylor expansion where the last term (the remainder) depends on the exact version of Taylor's formula. For instance, for the Cauchy formula (with integral remainder), one gets General linear partial differential operator A formal linear -th order partial differential operator in variables is written as Integration by parts For smooth functions with compact support in a bounded domain one has This formula is used for the definition of distributions and weak derivatives. An example theorem If are multi-indices and , then Proof The proof follows from the power rule for the ordinary derivative; if α and β are in , then Suppose , , and . Then we have that For each in , the function only depends on . 
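(Before the proof sketch concludes below, it may help to restate the conventions it relies on. The inline formulas in the multi-index passage were lost in extraction; the following is a reconstruction of the standard definitions, the multinomial theorem, and the example theorem named in the text, not a verbatim recovery of the original markup.)

\[
|\alpha| = \alpha_1 + \cdots + \alpha_n, \qquad
\alpha! = \alpha_1!\,\alpha_2!\cdots\alpha_n!, \qquad
x^{\alpha} = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}, \qquad
\partial^{\alpha} = \partial_1^{\alpha_1}\cdots\partial_n^{\alpha_n},
\]
\[
(x_1 + x_2 + \cdots + x_n)^k = \sum_{|\alpha| = k} \frac{k!}{\alpha!}\, x^{\alpha}
\quad\text{(multinomial theorem)},
\qquad
\partial^{\alpha} x^{\beta} =
\begin{cases}
\dfrac{\beta!}{(\beta-\alpha)!}\, x^{\beta-\alpha} & \text{if } \alpha \le \beta,\\
0 & \text{otherwise,}
\end{cases}
\]
the last identity being the example theorem whose proof is sketched here.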
In the above, each partial differe" https://en.wikipedia.org/wiki/Graph%20paper,"Graph paper, coordinate paper, grid paper, or squared paper is writing paper that is printed with fine lines making up a regular grid. The lines are often used as guides for plotting graphs of functions or experimental data and drawing curves. It is commonly found in mathematics and engineering education settings and in laboratory notebooks. Graph paper is available either as loose leaf paper or bound in notebooks. History The Metropolitan Museum of Art owns a pattern book dated to around 1596 in which each page bears a grid printed with a woodblock. The owner has used these grids to create block pictures in black and white and in colour. The first commercially published ""coordinate paper"" is usually attributed to a Dr. Buxton of England, who patented paper, printed with a rectangular coordinate grid, in 1794. A century later, E. H. Moore, a distinguished mathematician at the University of Chicago, advocated usage of paper with ""squared lines"" by students of high schools and universities. The 1906 edition of Algebra for Beginners by H. S. Hall and S. R. Knight included a strong statement that ""the squared paper should be of good quality and accurately ruled to inches and tenths of an inch. Experience shows that anything on a smaller scale (such as 'millimeter' paper) is practically worthless in the hands of beginners."" The term ""graph paper"" did not catch on quickly in American usage. A School Arithmetic (1919) by H. S. Hall and F. H. Stevens had a chapter on graphing with ""squared paper"". Analytic Geometry (1937) by W. A. Wilson and J. A. Tracey used the phrase ""coordinate paper"". The term ""squared paper"" remained in British usage for longer; for example it was used in Public School Arithmetic (2023) by W. M. Baker and A. A. Bourne published in London. Formats Quad paper, sometimes referred to as quadrille paper from French quadrillé, 'large square', is a common form of graph paper with a sparse grid printed in light blue or gray and right to the edge of the" https://en.wikipedia.org/wiki/Food%20history,"Food history is an interdisciplinary field that examines the history and the cultural, economic, environmental, and sociological impacts of food and human nutrition. It is considered distinct from the more traditional field of culinary history, which focuses on the origin and recreation of specific recipes. The first journal in the field, Petits Propos Culinaires, was launched in 1979 and the first conference on the subject was the 1981 Oxford Food Symposium. Food and diets in history Early human nutrition was largely determined by the availability and palatability (tastiness) of foods. Humans evolved as omnivorous hunter-gatherers, though our diet has varied significantly depending on location and climate. The diet in the tropics tended to depend more heavily on plant foods, while the diet at higher latitudes tended more towards animal products. Analyses of postcranial and cranial remains of humans and animals from the Neolithic, along with detailed bone-modification studies, have shown that cannibalism also occurred among prehistoric humans. Agriculture developed at different times in different places, starting about 11,500 years ago, providing some cultures with a more abundant supply of grains (such as wheat, rice and maize) and potatoes; this made possible dough for staples such as bread, pasta, and tortillas. The domestication of animals provided some cultures with milk and dairy products. 
In 2020, archeological research discovered a frescoed thermopolium (a fast-food counter) in an exceptional state of preservation from 79 CE in Pompeii, including 2,000-year-old foods available in some of the deep terra cotta jars. Classical antiquity During classical antiquity, diets consisted of simple fresh or preserved whole foods that were either locally grown or transported from neighboring areas during times of crisis. 5th to 15th century: Middle Ages in Western Europe In western Europe, medieval cuisine (5th–15th century) did not change rapidly. Cereal" https://en.wikipedia.org/wiki/List%20of%20mathematical%20examples,"This page will attempt to list examples in mathematics. To qualify for inclusion, an article should be about a mathematical object with a fair amount of concreteness. Usually a definition of an abstract concept, a theorem, or a proof would not be an ""example"" as the term should be understood here (an elegant proof of an isolated but particularly striking fact, as opposed to a proof of a general theorem, could perhaps be considered an ""example""). The discussion page for list of mathematical topics has some comments on this. Eventually this page may have its own discussion page. This page links to itself in order that edits to this page will be included among related changes when the user clicks on that button. The concrete example within the article titled Rao-Blackwell theorem is perhaps one of the best ways for a probabilist ignorant of statistical inference to get a quick impression of the flavor of that subject. Uncategorized examples, alphabetized Alexander horned sphere All horses are the same color Cantor function Cantor set Checking if a coin is biased Concrete illustration of the central limit theorem Differential equations of mathematical physics Dirichlet function Discontinuous linear map Efron's non-transitive dice Example of a game without a value Examples of contour integration Examples of differential equations Examples of generating functions Examples of groups List of the 230 crystallographic 3D space groups Examples of Markov chains Examples of vector spaces Fano plane Frieze group Gray graph Hall–Janko graph Higman–Sims graph Hilbert matrix Illustration of a low-discrepancy sequence Illustration of the central limit theorem An infinitely differentiable function that is not analytic Leech lattice Lewy's example on PDEs List of finite simple groups Long line Normally distributed and uncorrelated does not imply independent Pairwise independence of random variables need not imply mutual independence. Petersen graph Sierpinski space Simple examp" https://en.wikipedia.org/wiki/Electrophoretic%20mobility%20shift%20assay,"An electrophoretic mobility shift assay (EMSA) or mobility shift electrophoresis, also referred to as a gel shift assay, gel mobility shift assay, band shift assay, or gel retardation assay, is a common affinity electrophoresis technique used to study protein–DNA or protein–RNA interactions. This procedure can determine if a protein or mixture of proteins is capable of binding to a given DNA or RNA sequence, and can sometimes indicate if more than one protein molecule is involved in the binding complex. Gel shift assays are often performed in vitro concurrently with DNase footprinting, primer extension, and promoter-probe experiments when studying transcription initiation, DNA replication, DNA repair or RNA processing and maturation, as well as pre-mRNA splicing. 
Although precursors can be found in earlier literature, most current assays are based on methods described by Garner and Revzin and Fried and Crothers. Principle A mobility shift assay is electrophoretic separation of a protein–DNA or protein–RNA mixture on a polyacrylamide or agarose gel for a short period (about 1.5-2 hr for a 15- to 20-cm gel). The speed at which different molecules (and combinations thereof) move through the gel is determined by their size and charge, and to a lesser extent, their shape (see gel electrophoresis). The control lane (DNA probe without protein present) will contain a single band corresponding to the unbound DNA or RNA fragment. However, assuming that the protein is capable of binding to the fragment, the lane with a protein that binds present will contain another band that represents the larger, less mobile complex of nucleic acid probe bound to protein which is 'shifted' up on the gel (since it has moved more slowly). Under the correct experimental conditions, the interaction between the DNA (or RNA) and protein is stabilized and the ratio of bound to unbound nucleic acid on the gel reflects the fraction of free and bound probe molecules as the binding reaction ent" https://en.wikipedia.org/wiki/Overlay%20network,"An overlay network is a computer network that is layered on top of another network. Structure Nodes in the overlay network can be thought of as being connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. For example, distributed systems such as peer-to-peer networks and client–server applications are overlay networks because their nodes run on top of the Internet. The Internet was originally built as an overlay upon the telephone network, while today (through the advent of VoIP), the telephone network is increasingly turning into an overlay network built on top of the Internet. Uses Enterprise networks Enterprise private networks were first overlaid on telecommunication networks such as Frame Relay and Asynchronous Transfer Mode packet switching infrastructures but migration from these (now legacy) infrastructures to IP based MPLS networks and virtual private networks started (2001~2002). From a physical standpoint, overlay networks are quite complex (see Figure 1) as they combine various logical layers that are operated and built by various entities (businesses, universities, government etc.) but they allow separation of concerns that over time permitted the buildup of a broad set of services that could not have been proposed by a single telecommunication operator (ranging from broadband Internet access, voice over IP or IPTV, competitive telecom operators etc.). Internet Telecommunication transport networks and IP networks (which combined make up the broader Internet) are all overlaid with at least an optical fiber layer, a transport layer and an IP or circuit switching layers (in the case of the PSTN). Over the Internet Nowadays the Internet is the basis for more overlaid networks that can be constructed in order to permit routing of messages to destinations not specified by an IP address. 
For example, distributed hash tables can be used to route messages to a node" https://en.wikipedia.org/wiki/Load%E2%80%93store%20architecture,"In computer engineering, a load–store architecture (or a register–register architecture) is an instruction set architecture that divides instructions into two categories: memory access (load and store between memory and registers) and ALU operations (which only occur between registers). Some RISC architectures such as PowerPC, SPARC, RISC-V, ARM, and MIPS are load–store architectures. For instance, in a load–store approach both operands and destination for an ADD operation must be in registers. This differs from a register–memory architecture (for example, a CISC instruction set architecture such as x86) in which one of the operands for the ADD operation may be in memory, while the other is in a register. The earliest example of a load–store architecture was the CDC 6600. Almost all vector processors (including many GPUs) use the load–store approach. See also Load–store unit Register–memory architecture" https://en.wikipedia.org/wiki/Perceptual%20trap,"A perceptual trap is an ecological scenario in which environmental change, typically anthropogenic, leads an organism to avoid an otherwise high-quality habitat. The concept is related to that of an ecological trap, in which environmental change causes preference towards a low-quality habitat. History In a 2004 article discussing source–sink dynamics, James Battin did not distinguish between high-quality habitats that are preferred or avoided, labelling both ""sources."" The latter scenario, in which a high-quality habitat is avoided, was first recognised as an important phenomenon in 2007 by Gilroy and Sutherland, who described them as ""undervalued resources."" The term ""perceptual trap"" was first proposed by Michael Patten and Jeffrey Kelly in a 2010 article. Hans Van Dyck argues that the term is misleading because perception is also a major component in other cases of trapping. Description Animals use discrete environmental cues to select habitat. A perceptual trap occurs if change in an environmental cue leads an organism to avoid a high-quality habitat. It differs, therefore, from simple habitat avoidance, which may be a correct decision given the habitat's quality. The concept of a perceptual trap is related to that of an ecological trap, in which environmental change causes preference towards a low-quality habitat. There is expected to be strong natural selection against ecological traps, but not necessarily against perceptual traps, as Allee effects may restrict a population’s ability to establish itself. Examples To support the concept of a perceptual trap, Patten and Kelly cited a study of the lesser prairie chicken (Tympanuchus pallidicinctus). The species' natural environment, shinnery oak grassland, is often treated with the herbicide tebuthiuron to increase grass cover for cattle grazing. Herbicide treatment resulted in less shrub cover, a habitat cue that caused female lesser prairie-chickens to avoid the habitat in favour of untreated areas. However" https://en.wikipedia.org/wiki/List%20of%20quantum-mechanical%20systems%20with%20analytical%20solutions,"Much insight in quantum mechanics can be gained from understanding the closed-form solutions to the time-dependent non-relativistic Schrödinger equation. It takes the form where is the wave function of the system, is the Hamiltonian operator, and is time. 
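(The displayed equation referred to in the preceding sentence did not survive extraction; the standard form it describes is
\[
i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t),
\]
where \(\Psi\) is the wave function of the system, \(\hat{H}\) is the Hamiltonian operator, and \(t\) is time.)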
Stationary states of this equation are found by solving the time-independent Schrödinger equation, which is an eigenvalue equation. Very often, only numerical solutions to the Schrödinger equation can be found for a given physical system and its associated potential energy. However, there exists a subset of physical systems for which the form of the eigenfunctions and their associated energies, or eigenvalues, can be found. These quantum-mechanical systems with analytical solutions are listed below. Solvable systems The two-state quantum system (the simplest possible quantum system) The free particle The delta potential The double-well Dirac delta potential The particle in a box / infinite potential well The finite potential well The one-dimensional triangular potential The particle in a ring or ring wave guide The particle in a spherically symmetric potential The quantum harmonic oscillator The quantum harmonic oscillator with an applied uniform field The hydrogen atom or hydrogen-like atom e.g. positronium The hydrogen atom in a spherical cavity with Dirichlet boundary conditions The particle in a one-dimensional lattice (periodic potential) The particle in a one-dimensional lattice of finite length The Morse potential The Mie potential The step potential The linear rigid rotor The symmetric top The Hooke's atom The Spherium atom Zero range interaction in a harmonic trap The quantum pendulum The rectangular potential barrier The Pöschl–Teller potential The Inverse square root potential Multistate Landau–Zener models The Luttinger liquid (the only exact quantum mechanical solution to a model including interparticle interactions) See also List of quantum-mechanical potentials – a list of physically " https://en.wikipedia.org/wiki/Like%20terms,"In mathematics, like terms are summands in a sum that differ only by a numerical factor. Like terms can be regrouped by adding their coefficients. Typically, in a polynomial expression, like terms are those that contain the same variables to the same powers, possibly with different coefficients. More generally, when some variable are considered as parameters, like terms are defined similarly, but ""numerical factors"" must be replaced by ""factors depending only on the parameters"". For example, when considering a quadratic equation, one considers often the expression where and are the roots of the equation and may be considered as parameters. Then, expanding the above product and regrouping the like terms gives Generalization In this discussion, a ""term"" will refer to a string of numbers being multiplied or divided (that division is simply multiplication by a reciprocal) together. Terms are within the same expression and are combined by either addition or subtraction. For example, take the expression: There are two terms in this expression. Notice that the two terms have a common factor, that is, both terms have an . This means that the common factor variable can be factored out, resulting in If the expression in parentheses may be calculated, that is, if the variables in the expression in the parentheses are known numbers, then it is simpler to write the calculation . and juxtapose that new number with the remaining unknown number. Terms combined in an expression with a common, unknown factor (or multiple unknown factors) are called like terms. Examples Example To provide an example for above, let and have numerical values, so that their sum may be calculated. For ease of calculation, let and . 
The original expression becomes which may be factored into or, equally, . This demonstrates that The known values assigned to the unlike part of two or more terms are called coefficients. As this example shows, when like terms exist in an expression, they m" https://en.wikipedia.org/wiki/Cache%20pollution,"Cache pollution describes situations where an executing computer program loads data into CPU cache unnecessarily, thus causing other useful data to be evicted from the cache into lower levels of the memory hierarchy, degrading performance. For example, in a multi-core processor, one core may replace the blocks fetched by other cores into shared cache, or prefetched blocks may replace demand-fetched blocks from the cache. Example Consider the following illustration: T[0] = T[0] + 1; for i in 0..sizeof(CACHE) C[i] = C[i] + 1; T[0] = T[0] + C[sizeof(CACHE)-1]; (The assumptions here are that the cache is composed of only one level, it is unlocked, the replacement policy is pseudo-LRU, all data is cacheable, the set associativity of the cache is N (where N > 1), and at most one processor register is available to contain program values). Right before the loop starts, T[0] will be fetched from memory into cache, its value updated. However, as the loop executes, because the number of data elements the loop references requires the whole cache to be filled to its capacity, the cache block containing T[0] has to be evicted. Thus, the next time the program requests T[0] to be updated, the cache misses, and the cache controller has to request the data bus to bring the corresponding cache block from main memory again. In this case the cache is said to be ""polluted"". Changing the pattern of data accesses by positioning the first update of T[0] between the loop and the second update can eliminate the inefficiency: for i in 0..sizeof(CACHE) C[i] = C[i] + 1; T[0] = T[0] + 1; T[0] = T[0] + C[sizeof(CACHE)-1]; Solutions Other than code-restructuring mentioned above, the solution to cache pollution is ensure that only high-reuse data are stored in cache. This can be achieved by using special cache control instructions, operating system support or hardware support. Examples of specialized hardware instructions include ""lvxl"" provided by PowerPC AltiVec. T" https://en.wikipedia.org/wiki/Mathematical%20beauty,"Mathematical beauty is the aesthetic pleasure derived from the abstractness, purity, simplicity, depth or orderliness of mathematics. Mathematicians may express this pleasure by describing mathematics (or, at least, some aspect of mathematics) as beautiful or describe mathematics as an art form, (a position taken by G. H. Hardy) or, at a minimum, as a creative activity. Comparisons are made with music and poetry. In method Mathematicians describe an especially pleasing method of proof as elegant. Depending on context, this may mean: A proof that uses a minimum of additional assumptions or previous results. A proof that is unusually succinct. A proof that derives a result in a surprising way (e.g., from an apparently unrelated theorem or a collection of theorems). A proof that is based on new and original insights. A method of proof that can be easily generalized to solve a family of similar problems. In the search for an elegant proof, mathematicians often look for different independent ways to prove a result—as the first proof that is found can often be improved. 
The theorem for which the greatest number of different proofs have been discovered is possibly the Pythagorean theorem, with hundreds of proofs being published up to date. Another theorem that has been proved in many different ways is the theorem of quadratic reciprocity. In fact, Carl Friedrich Gauss alone had eight different proofs of this theorem, six of which he published. Conversely, results that are logically correct but involve laborious calculations, over-elaborate methods, highly conventional approaches or a large number of powerful axioms or previous results are usually not considered to be elegant, and may be even referred to as ugly or clumsy. In results Some mathematicians see beauty in mathematical results that establish connections between two areas of mathematics that at first sight appear to be unrelated. These results are often described as deep. While it is difficult to f" https://en.wikipedia.org/wiki/Biocontainment,"One use of the concept of biocontainment is related to laboratory biosafety and pertains to microbiology laboratories in which the physical containment of pathogenic organisms or agents (bacteria, viruses, and toxins) is required, usually by isolation in environmentally and biologically secure cabinets or rooms, to prevent accidental infection of workers or release into the surrounding community during scientific research. Another use of the term relates to facilities for the study of agricultural pathogens, where it is used similarly to the term ""biosafety"", relating to safety practices and procedures used to prevent unintended infection of plants or animals or the release of high-consequence pathogenic agents into the environment (air, soil, or water). Terminology The World Health Organization's 2006 publication, Biorisk management: Laboratory biosecurity guidance, defines laboratory biosafety as ""the containment principles, technologies and practices that are implemented to prevent the unintentional exposure to pathogens and toxins, or their accidental release"". It defines biorisk management as ""the analysis of ways and development of strategies to minimize the likelihood of the occurrence of biorisks"". The term ""biocontainment"" is related to laboratory biosafety. Merriam-Webster's online dictionary reports the first use of the term in 1966, defined as ""the containment of extremely pathogenic organisms (such as viruses) usually by isolation in secure facilities to prevent their accidental release especially during research"". The term laboratory biosafety refers to the measures taken ""to reduce the risk of accidental release of or exposure to infectious disease agents"", whereas laboratory biosecurity is usually taken to mean ""a set of systems and practices employed in legitimate bioscience facilities to reduce the risk that dangerous biological agents will be stolen and used maliciously"". Containment types Laboratory context Primary containment is the first" https://en.wikipedia.org/wiki/I/O%20virtualization,"In virtualization, input/output virtualization (I/O virtualization) is a methodology to simplify management, lower costs and improve performance of servers in enterprise environments. I/O virtualization environments are created by abstracting the upper layer protocols from the physical connections. The technology enables one physical adapter card to appear as multiple virtual network interface cards (vNICs) and virtual host bus adapters (vHBAs). 
Virtual NICs and HBAs function as conventional NICs and HBAs, and are designed to be compatible with existing operating systems, hypervisors, and applications. To networking resources (LANs and SANs), they appear as normal cards. In the physical view, virtual I/O replaces a server’s multiple I/O cables with a single cable that provides a shared transport for all network and storage connections. That cable (or commonly two cables for redundancy) connects to an external device, which then provides connections to the data center networks. Background Server I/O is a critical component of successful and effective server deployments, particularly with virtualized servers. To accommodate multiple applications, virtualized servers demand more network bandwidth and connections to more networks and storage. According to a survey, 75% of virtualized servers require 7 or more I/O connections per device, and are likely to require more frequent I/O reconfigurations. In virtualized data centers, I/O performance problems are caused by running numerous virtual machines (VMs) on one server. In early server virtualization implementations, the number of virtual machines per server was typically limited to six or less. But it was found that a server could safely run seven or more applications, often using 80 percent of total server capacity, an improvement over the average 5 to 15 percent utilized with non-virtualized servers. However, increased utilization created by virtualization placed a significant strain on the server’s I/O cap" https://en.wikipedia.org/wiki/Xenohormesis,"Xenohormesis is a hypothesis positing that certain molecules, such as plant polyphenols that indicate stress in the plant, can benefit other organisms (heterotrophs) that consume them. In simpler terms, xenohormesis is interspecies hormesis. The expected benefits include improved lifespan and fitness, achieved by activating the animal's cellular stress response. Such responsiveness may be advantageous to evolve because it provides cues about the state of the environment: if the plants an animal is eating have an increased polyphenol content, the plants are under stress, which may signal an approaching famine. Using these chemical cues, heterotrophs could preemptively prepare and defend themselves before conditions worsen. A possible example is resveratrol, famously found in red wine, which modulates over two dozen receptors and enzymes in mammals. Xenohormesis could also explain several phenomena seen in ethno-pharmaceutical (traditional medicine) practice, such as the case of cinnamon, which several studies have shown to help treat type 2 diabetes, although this has not been confirmed in meta-analyses; the discrepancy could arise because the cinnamon used in one study differed from that used in another in its xenohormetic properties. Several explanations have been offered for why this works. The first is that it could be a coincidence, especially in cases where partially toxic products cause a positive stress in the organism. The second is that it is a shared evolutionary attribute, as animals and plants share a great deal of homology between their pathways. The third is that there is evolutionary pressure to evolve to respond better to the molecules; the latter explanation is proposed mainly by Howitz and his team. There may also be a problem in that the focus on maximizing crop output loses many of the xenohormetic advantages. 
Although ideal growing conditions increase a plant's crop output, it can also be argued that the plant loses stress, and with it the hormesis. The honeybee colony colla" https://en.wikipedia.org/wiki/View%20model,"A view model or viewpoints framework in systems engineering, software engineering, and enterprise engineering is a framework which defines a coherent set of views to be used in the construction of a system architecture, software architecture, or enterprise architecture. A view is a representation of the whole system from the perspective of a related set of concerns. Since the early 1990s there have been a number of efforts to prescribe approaches for describing and analyzing system architectures. These recent efforts define a set of views (or viewpoints). They are sometimes referred to as architecture frameworks or enterprise architecture frameworks, but are usually called ""view models"". Usually a view is a work product that presents specific architecture data for a given system. However, the same term is sometimes used to refer to a view definition, including the particular viewpoint and the corresponding guidance that defines each concrete view. The term view model is related to view definitions. Overview The purpose of views and viewpoints is to enable humans to comprehend very complex systems, to organize the elements of the problem and the solution around domains of expertise and to separate concerns. In the engineering of physically intensive systems, viewpoints often correspond to capabilities and responsibilities within the engineering organization. Most complex system specifications are so extensive that no single individual can fully comprehend all aspects of the specifications. Furthermore, we all have different interests in a given system and different reasons for examining the system's specifications. A business executive will ask different questions of a system make-up than would a system implementer. The concept of viewpoints framework, therefore, is to provide separate viewpoints into the specification of a given complex system in order to facilitate communication with the stakeholders. Each viewpoint satisfies an audience with interest in a pa" https://en.wikipedia.org/wiki/Virtual%20firewall,"A virtual firewall (VF) is a network firewall service or appliance running entirely within a virtualized environment and which provides the usual packet filtering and monitoring provided via a physical network firewall. The VF can be realized as a traditional software firewall on a guest virtual machine already running, a purpose-built virtual security appliance designed with virtual network security in mind, a virtual switch with additional security capabilities, or a managed kernel process running within the host hypervisor. Background So long as a computer network runs entirely over physical hardware and cabling, it is a physical network. As such it can be protected by physical firewalls and fire walls alike; the first and most important protection for a physical computer network always was and remains a physical, locked, flame-resistant door. Since the inception of the Internet this was the case, and structural fire walls and network firewalls were for a long time both necessary and sufficient. Since about 1998 there has been an explosive increase in the use of virtual machines (VM) in addition to — sometimes instead of — physical machines to offer many kinds of computer and communications services on local area networks and over the broader Internet. 
The advantages of virtual machines are well explored elsewhere. Virtual machines can operate in isolation (for example as a guest operating system on a personal computer) or under a unified virtualized environment overseen by a supervisory virtual machine monitor or ""hypervisor"" process. In the case where many virtual machines operate under the same virtualized environment they might be connected together via a virtual network consisting of virtualized network switches between machines and virtualized network interfaces within machines. The resulting virtual network could then implement traditional network protocols (for example TCP) or virtual network provisioning such as VLAN or VPN, though the latter while u" https://en.wikipedia.org/wiki/Tokogeny,"Tokogeny or tocogeny is the biological relationship between parent and offspring, or more generally between ancestors and descendants. In contradistinction to phylogeny it applies to individual organisms as opposed to species. In the tokogentic system shared characteristics are called traits." https://en.wikipedia.org/wiki/Koch%27s%20postulates,"Koch's postulates ( ) are four criteria designed to establish a causal relationship between a microbe and a disease. The postulates were formulated by Robert Koch and Friedrich Loeffler in 1884, based on earlier concepts described by Jakob Henle, and the statements were refined and published by Koch in 1890. Koch applied the postulates to describe the etiology of cholera and tuberculosis, both of which are now ascribed to bacteria. The postulates have been controversially generalized to other diseases. More modern concepts in microbial pathogenesis cannot be examined using Koch's postulates, including viruses (which are obligate intracellular parasites) and asymptomatic carriers. They have largely been supplanted by other criteria such as the Bradford Hill criteria for infectious disease causality in modern public health and the Molecular Koch's postulates for microbial pathogenesis. Postulates Koch's four postulates are: The microorganism must be found in abundance in all organisms suffering from the disease but should not be found in healthy organisms. The microorganism must be isolated from a diseased organism and grown in pure culture. The cultured microorganism should cause disease when introduced into a healthy organism. The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent. However, Koch later abandoned the universalist requirement of the first postulate when he discovered asymptomatic carriers of cholera and, later, of typhoid fever. Subclinical infections and asymptomatic carriers are now known to be a common feature of many infectious diseases, especially viral diseases such as polio, herpes simplex, HIV/AIDS, hepatitis C, and COVID-19. For example, poliovirus only causes paralysis in a small percentage of those infected. The second postulate does not apply to pathogens incapable of growing in pure culture. For example, viruses are dependent" https://en.wikipedia.org/wiki/Field-programmable%20object%20array,"A field-programmable object array (FPOA) is a class of programmable logic devices designed to be modified or programmed after manufacturing. They are designed to bridge the gap between ASIC and FPGA. They contain a grid of programmable silicon objects. 
Arrix range of FPOA contained three types of silicon objects: arithmetic logic units (ALUs), register files (RFs) and multiply-and-accumulate units (MACs). Both the objects and interconnects are programmable. Motivation and history The device was intended to bridge the gap between field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). The design goal was to combine the programmability of FPGAs and the performance of ASICs. FPGAs, although programmable, lack performance; they may only be clocked to few hundreds of megahertz and most FPGAs operated below 100 MHz. FPGAs did not offer deterministic timing and the maximum operating frequency depends on the design. ASICs offered good performance, but they could not be modified and they were very costly. The FPOA had a programmable architecture, deterministic timing, and gigahertz performance. The FPOA was designed by Douglas Pihl who had this idea when working on a DARPA funded project. He founded MathStar in 1997 to manufacture FPOAs and the idea was patented in 2004. The first FPOA prototypes were made in 2005 and first batch of FPOA chips were fabricated in 2006. Architecture FPOAs have a core grid of silicon objects or core objects. These objects are connected through a synchronous interconnect. Each core object also has a supporting structures for clock synchronization, BIST and the like. The core is surrounded by peripheral circuitry that contains memory and I/O. An interface circuitry connects the objects to rest of FPOA. Exact number of each type of object and its arrangement are specific to a given family. There are two types of communication: nearest member and ""party-line"". Nearest member is used to connect a core to nea" https://en.wikipedia.org/wiki/RCA%20CDP1861,"The RCA CDP1861 was an integrated circuit Video Display Controller, released by the Radio Corporation of America (RCA) in the mid-1970s as a support chip for the RCA 1802 microprocessor. The chip cost in 1977 amounted to less than US$20. History The CDP1861 was manufactured in a low-power CMOS technology, came in a 24-pin DIP (Dual in-line package), and required a minimum of external components to work. In 1802-based microcomputers, the CDP1861 (for the NTSC video format, CDP1864 variant for PAL), used the 1802's built-in DMA controller to display black and white (monochrome) bitmapped graphics on standard TV screens. The CDP1861 was also known as the Pixie graphics system, display, chip, and video generator, especially when used with the COSMAC ELF microcomputer. Other known chip markings for the 1861 are TA10171, TA10171V1 and a TA10171X, which were early designations for ""pre-qualification engineering samples"" and ""preliminary part numbers"", although they have been found in production RCA Studio II game consoles and Netronics Elf microcomputers. The CDP1861 was also used in the Telmac 1800 and Oscom Nano microcomputers. Specifications The 1861 chip could display 64 pixels horizontally and 128 pixels vertically, though by reloading the 1802's R0 DMA (direct memory access) register via the required 1802 software controller program and interrupt service routine, the resolution could be reduced to 64×64 or 64×32 to use less memory than the 1024 bytes needed for the highest resolution (with each monochrome pixel occupying one bit) or to display square pixels. A resolution of 64×32 created square pixels and used 256 bytes of memory (2K bits). This was the usual resolution for the Chip-8 game programming system. 
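The memory figures quoted above follow directly from the one-bit-per-pixel layout. A small Python sketch of the arithmetic (the helper name is illustrative, not taken from the original text):

    def framebuffer_bytes(width_px, height_px, bits_per_pixel=1):
        # Display memory for a monochrome bitmapped mode (1 bit per pixel by default).
        return width_px * height_px * bits_per_pixel // 8

    # Resolutions mentioned for the CDP1861: the highest mode and the reduced modes.
    for w, h in [(64, 128), (64, 64), (64, 32)]:
        print(f"{w}x{h}: {framebuffer_bytes(w, h)} bytes")
    # 64x128 -> 1024 bytes, 64x64 -> 512 bytes, 64x32 -> 256 bytes,
    # matching the 1024-byte and 256-byte figures in the text.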
Since the video graphics frame buffer was often similar or equal in size to the memory size, it was not unusual to display your program/data on the screen allowing you to watch the computer ""think"" (i.e. process its data). Programs which ran amok and accidenta" https://en.wikipedia.org/wiki/Laws%20of%20robotics,"Laws of robotics are any set of laws, rules, or principles, which are intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction, films and are a topic of active research and development in the fields of robotics and artificial intelligence. The best known set of laws are those written by Isaac Asimov in the 1940s, or based upon them, but other sets of laws have been proposed by researchers in the decades since then. Isaac Asimov's ""Three Laws of Robotics"" The best known set of laws are Isaac Asimov's ""Three Laws of Robotics"". These were introduced in his 1942 short story ""Runaround"", although they were foreshadowed in a few earlier stories. The Three Laws are: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. In The Evitable Conflict the machines generalize the First Law to mean: No machine may harm humanity; or, through inaction, allow humanity to come to harm. This was refined in the end of Foundation and Earth, a zeroth law was introduced, with the original three suitably rewritten as subordinate to it: Adaptations and extensions exist based upon this framework. As of 2021 they remain a ""fictional device"". EPSRC / AHRC principles of robotics In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of United Kingdom jointly published a set of five ethical ""principles for designers, builders and users of robots"" in the real world, along with seven ""high-level messages"" intended to be conveyed, based on a September 2010 research workshop: Robots should not be de" https://en.wikipedia.org/wiki/Index%20of%20wave%20articles,"This is a list of wave topics. 
0–9 21 cm line A Abbe prism Absorption spectroscopy Absorption spectrum Absorption wavemeter Acoustic wave Acoustic wave equation Acoustics Acousto-optic effect Acousto-optic modulator Acousto-optics Airy disc Airy wave theory Alfvén wave Alpha waves Amphidromic point Amplitude Amplitude modulation Animal echolocation Antarctic Circumpolar Wave Antiphase Aquamarine Power Arrayed waveguide grating Artificial wave Atmospheric diffraction Atmospheric wave Atmospheric waveguide Atom laser Atomic clock Atomic mirror Audience wave Autowave Averaged Lagrangian B Babinet's principle Backward wave oscillator Bandwidth-limited pulse beat Berry phase Bessel beam Beta wave Black hole Blazar Bloch's theorem Blueshift Boussinesq approximation (water waves) Bow wave Bragg diffraction Bragg's law Breaking wave Bremsstrahlung, Electromagnetic radiation Brillouin scattering Bullet bow shockwave Burgers' equation Business cycle C Capillary wave Carrier wave Cherenkov radiation Chirp Ernst Chladni Circular polarization Clapotis Closed waveguide Cnoidal wave Coherence (physics) Coherence length Coherence time Cold wave Collimated light Collimator Compton effect Comparison of analog and digital recording Computation of radiowave attenuation in the atmosphere Continuous phase modulation Continuous wave Convective heat transfer Coriolis frequency Coronal mass ejection Cosmic microwave background radiation Coulomb wave function Cutoff frequency Cutoff wavelength Cymatics D Damped wave Decollimation Delta wave Dielectric waveguide Diffraction Direction finding Dispersion (optics) Dispersion (water waves) Dispersion relation Dominant wavelength Doppler effect Doppler radar Douglas Sea Scale Draupner wave Droplet-shaped wave Duhamel's principle E E-skip Earthquake Echo (phenomenon) Echo sounding Echolocation (animal) Echolocation (human) Eddy (fluid dynamics) Edge wave Eikonal equation Ekman layer Ekman spiral Ekman transport El Niño–Southern Oscillation El" https://en.wikipedia.org/wiki/Ns%20%28simulator%29,"ns (from network simulator) is a name for a series of discrete event network simulators, specifically ns-1, ns-2, and ns-3. All are discrete-event computer network simulators, primarily used in research and teaching. History ns-1 The first version of ns, known as ns-1, was developed at Lawrence Berkeley National Laboratory (LBNL) in the 1995-97 timeframe by Steve McCanne, Sally Floyd, Kevin Fall, and other contributors. This was known as the LBNL Network Simulator, and derived in 1989 from an earlier simulator known as REAL by S. Keshav. ns-2 Ns-2 began as a revision of ns-1. From 1997 to 2000, ns development was supported by DARPA through the VINT project at LBL, Xerox PARC, UC Berkeley, and USC/ISI. In 2000, ns-2 development was supported through DARPA with SAMAN and through NSF with CONSER, both at USC/ISI, in collaboration with other researchers including ACIRI. Features of NS2 1. It is a discrete event simulator for networking research. 2. It provides substantial support to simulate bunch of protocols like TCP, FTP, UDP, https and DSR. 3. It simulates wired and wireless network. 4. It is primarily Unix based. 5. Uses TCL as its scripting languages. 6. Otcl: Object oriented support 7. Tclcl: C++ and otcl linkage 8. Discrete event schedule Ns-2 incorporates substantial contributions from third parties, including wireless code from the UCB Daedelus and CMU Monarch projects and Sun Microsystems. 
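Since the passage above characterizes ns-2 as a discrete event simulator built around an event scheduler, the following toy Python scheduler sketches what that means in general terms; it is a generic illustration under that assumption and does not use the actual ns-2 OTcl/C++ API.

    import heapq

    # A toy discrete-event scheduler: events are (time, sequence, callback) entries
    # in a min-heap, popped in time order; the sequence number breaks ties.
    class Scheduler:
        def __init__(self):
            self._queue = []
            self._seq = 0
            self.now = 0.0

        def at(self, time, callback):
            heapq.heappush(self._queue, (time, self._seq, callback))
            self._seq += 1

        def run(self):
            while self._queue:
                self.now, _, callback = heapq.heappop(self._queue)
                callback(self.now)

    sim = Scheduler()
    sim.at(1.0, lambda t: print(f"{t:.1f}s: node A sends a packet"))
    sim.at(1.2, lambda t: print(f"{t:.1f}s: node B receives the packet"))
    sim.run()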
ns-3 In 2003, a team led by Tom Henderson, George Riley, Sally Floyd, and Sumit Roy, applied for and received funding from the U.S. National Science Foundation (NSF) to build a replacement for ns-2, called ns-3. This team collaborated with the Planete project of INRIA at Sophia Antipolis, with Mathieu Lacage as the software lead, and formed a new open source project. In the process of developing ns-3, it was decided to completely abandon backward-compatibility with ns-2. The new simulator would be written from scratch, using the C++ programming language" https://en.wikipedia.org/wiki/Convergence%20%28routing%29,"Convergence is the state of a set of routers that have the same topological information about the internetwork in which they operate. For a set of routers to have converged, they must have collected all available topology information from each other via the implemented routing protocol, the information they gathered must not contradict any other router's topology information in the set, and it must reflect the real state of the network. In other words: in a converged network all routers ""agree"" on what the network topology looks like. Convergence is an important notion for a set of routers that engage in dynamic routing. All Interior Gateway Protocols rely on convergence to function properly. ""To have, or be, converged"" is the normal state of an operational autonomous system. The Exterior Gateway Routing Protocol BGP typically never converges because the Internet is too big for changes to be communicated fast enough. Convergence process When a routing protocol process is enabled, every participating router will attempt to exchange information about the topology of the network. The extent of this information exchange, the way it is sent and received, and the type of information required vary widely depending on the routing protocol in use, see e.g. RIP, OSPF, BGP4. A state of convergence is achieved once all routing protocol-specific information has been distributed to all routers participating in the routing protocol process. Any change in the network that affects routing tables will break the convergence temporarily until this change has been successfully communicated to all other routers. Convergence time Convergence time is a measure of how fast a group of routers reach the state of convergence. It is one of the main design goals and an important performance indicator for routing protocols, which should implement a mechanism that allows all routers running the protocol to quickly and reliably converge. Of course, the size of the network also plays an imp" https://en.wikipedia.org/wiki/Memory%20hierarchy,"In computer organisation, the memory hierarchy separates computer storage into a hierarchy based on response time. Since response time, complexity, and capacity are related, the levels may also be distinguished by their performance and controlling technologies. Memory hierarchy affects performance in computer architectural design, algorithm predictions, and lower level programming constructs involving locality of reference. Designing for high performance requires considering the restrictions of the memory hierarchy, i.e. the size and capabilities of each component. Each of the various components can be viewed as part of a hierarchy of memories in which each member is typically smaller and faster than the next highest member of the hierarchy. To limit waiting by higher levels, a lower level will respond by filling a buffer and then signaling for activating the transfer. 
There are four major storage levels. Internal – Processor registers and cache. Main – the system RAM and controller cards. On-line mass storage – Secondary storage. Off-line bulk storage – Tertiary and Off-line storage. This is a general memory hierarchy structuring. Many other structures are useful. For example, a paging algorithm may be considered as a level for virtual memory when designing a computer architecture, and one can include a level of nearline storage between online and offline storage. Properties of the technologies in the memory hierarchy Adding complexity slows down the memory hierarchy. CMOx memory technology stretches the Flash space in the memory hierarchy One of the main ways to increase system performance is minimising how far down the memory hierarchy one has to go to manipulate data. Latency and bandwidth are two metrics associated with caches. Neither of them is uniform, but is specific to a particular component of the memory hierarchy. Predicting where in the memory hierarchy the data resides is difficult. ...the location in the memory hierarchy dictates t" https://en.wikipedia.org/wiki/Reproductive%20interference,"Reproductive interference is the interaction between individuals of different species during mate acquisition that leads to a reduction of fitness in one or more of the individuals involved. The interactions occur when individuals make mistakes or are unable to recognise their own species, labelled as ‘incomplete species recognition'. Reproductive interference has been found within a variety of taxa, including insects, mammals, birds, amphibians, marine organisms, and plants. There are seven causes of reproductive interference, namely signal jamming, heterospecific rivalry, misdirected courtship, heterospecific mating attempts, erroneous female choice, heterospecific mating, and hybridisation. All types have fitness costs on the participating individuals, generally from a reduction in reproductive success, a waste of gametes, and the expenditure of energy and nutrients. These costs are variable and dependent on numerous factors, such as the cause of reproductive interference, the sex of the parent, and the species involved. Reproductive interference occurs between species that occupy the same habitat and can play a role in influencing the coexistence of these species. It differs from competition as reproductive interference does not occur due to a shared resource. Reproductive interference can have ecological consequences, such as through the segregation of species both spatially and temporally. It can also have evolutionary consequences, for example; it can impose a selective pressure on the affected species to evolve traits that better distinguish themselves from other species. Causes of reproductive interference Reproductive interference can occur at different stages of mating, from locating a potential mate, to the fertilisation of an individual of a different species. There are seven causes of reproductive interference that each have their own consequences on the fitness of one or both of the involved individuals. Signal jamming Signal jamming refers to t" https://en.wikipedia.org/wiki/Adverse%20food%20reaction,"An adverse food reaction is an adverse response by the body to food or a specific type of food. The most common adverse reaction is a food allergy, which is an adverse immune response to either a specific type or a range of food proteins. However, other adverse responses to food are not allergies. 
These reactions include responses to food such as food intolerance, pharmacological reactions, and toxin-mediated reactions, as well as physical responses, such as choking." https://en.wikipedia.org/wiki/A%20History%20of%20Mathematical%20Notations,"A History of Mathematical Notations is a book on the history of mathematics and of mathematical notation. It was written by Swiss-American historian of mathematics Florian Cajori (1859–1930), and originally published as a two-volume set by the Open Court Publishing Company in 1928 and 1929, with the subtitles Volume I: Notations in Elementary Mathematics (1928) and Volume II: Notations Mainly in Higher Mathematics (1929). Although Open Court republished it in a second edition in 1974, it was unchanged from the first edition. In 1993, it was published as an 820-page single volume edition by Dover Publications, with its original pagination unchanged. The Basic Library List Committee of the Mathematical Association of America has listed this book as essential for inclusion in undergraduate mathematics libraries. It was already described as long-awaited at the time of its publication, and by 2013, when the Dover edition was reviewed by Fernando Q. Gouvêa, he wrote that it was ""one of those books so well known that it doesn’t need a review"". However, some of its claims on the history of the notations it describes have been subsumed by more recent research, and its coverage of modern mathematics is limited, so it should be used with care as a reference. Topics The first volume of the book concerns elementary mathematics. It has 400 pages of material on arithmetic. This includes the history of notation for numbers from many ancient cultures, arranged by culture, with the Hindu–Arabic numeral system treated separately. Following this, it covers notation for arithmetic operations, arranged separately by operation and by the mathematicians who used those notations (although not in a strict chronological ordering). The first volume concludes with 30 pages on elementary geometry, including also the struggle between symbolists and rhetoricians in the 18th and 19th centuries on whether to express mathematics in notation or words, respectively. The second volume is divided more" https://en.wikipedia.org/wiki/Outline%20of%20algebraic%20structures,"In mathematics, there are many types of algebraic structures which are studied. Abstract algebra is primarily the study of specific algebraic structures and their properties. Algebraic structures may be viewed in different ways, however the common starting point of algebra texts is that an algebraic object incorporates one or more sets with one or more binary operations or unary operations satisfying a collection of axioms. Another branch of mathematics known as universal algebra studies algebraic structures in general. From the universal algebra viewpoint, most structures can be divided into varieties and quasivarieties depending on the axioms used. Some axiomatic formal systems that are neither varieties nor quasivarieties, called nonvarieties, are sometimes included among the algebraic structures by tradition. Concrete examples of each structure will be found in the articles listed. Algebraic structures are so numerous today that this article will inevitably be incomplete. In addition to this, there are sometimes multiple names for the same structure, and sometimes one name will be defined by disagreeing axioms by different authors. 
Most structures appearing on this page will be common ones which most authors agree on. Other web lists of algebraic structures, organized more or less alphabetically, include Jipsen and PlanetMath. These lists mention many structures not included below, and may present more information about some structures than is presented here. Study of algebraic structures Algebraic structures appear in most branches of mathematics, and one can encounter them in many different ways. Beginning study: In American universities, groups, vector spaces and fields are generally the first structures encountered in subjects such as linear algebra. They are usually introduced as sets with certain axioms. Advanced study: Abstract algebra studies properties of specific algebraic structures. Universal algebra studies algebraic structures abstractly, r" https://en.wikipedia.org/wiki/Systems%20design,"Systems design interfaces, and data for an electronic control system to satisfy specified requirements. System design could be seen as the application of system theory to product development. There is some overlap with the disciplines of system analysis, system architecture and system engineering. Overview If the broader topic of product development ""blends the perspective of marketing, design, and manufacturing into a single approach to product development,"" then design is the act of taking the marketing information and creating the design of the product to be manufactured. Systems design is therefore the process of defining and developing systems to satisfy specified requirements of the user. The basic study of system design is the understanding of component parts and their subsequent interaction with one another. Physical design The physical design relates to the actual input and output processes of the system. This is explained in terms of how data is input into a system, how it is verified/authenticated, how it is processed, and how it is displayed. In physical design, the following requirements about the system are decided. Input requirement, Output requirements, Storage requirements, Processing requirements, System control and backup or recovery. Put another way, the physical portion of system design can generally be broken down into three sub-tasks: User Interface Design Data Design Process Design Web System design Online websites, such as Google, Twitter, Facebook, Amazon and Netflix are used by millions of users worldwide. A scalable, highly available system must be designed to accommodate an increasing number of users. Here are the things to consider in designing the system: Functional and non functional requirements Capacity estimation Database to use, Relational or NoSQL Vertical scaling, Horizontal scaling, Sharding Load Balancing Primary-secondary Replication Cache and CDN Stateless and Stateful servers Data center georouting" https://en.wikipedia.org/wiki/Time-stretch%20analog-to-digital%20converter,"The time-stretch analog-to-digital converter (TS-ADC), also known as the time-stretch enhanced recorder (TiSER), is an analog-to-digital converter (ADC) system that has the capability of digitizing very high bandwidth signals that cannot be captured by conventional electronic ADCs. Alternatively, it is also known as the photonic time-stretch (PTS) digitizer, since it uses an optical frontend. It relies on the process of time-stretch, which effectively slows down the analog signal in time (or compresses its bandwidth) before it can be digitized by a standard electronic ADC. 
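The web-system design checklist above lists "capacity estimation" among the first steps. A back-of-envelope calculation of that kind might look like the sketch below; every traffic figure in it is an assumption invented for illustration, not data from the text.

```python
# Back-of-envelope capacity estimation for a hypothetical web service.
# Every input number here is an assumption for illustration only.

daily_active_users = 10_000_000
requests_per_user  = 20                 # average requests per user per day
seconds_per_day    = 86_400
avg_response_bytes = 50 * 1024          # 50 KiB per response
peak_to_average    = 3                  # peak traffic multiplier

avg_qps     = daily_active_users * requests_per_user / seconds_per_day
peak_qps    = avg_qps * peak_to_average
egress_gbps = peak_qps * avg_response_bytes * 8 / 1e9

print(f"Average load: {avg_qps:,.0f} requests/s")
print(f"Peak load:    {peak_qps:,.0f} requests/s")
print(f"Peak egress:  {egress_gbps:.1f} Gbit/s")
```

Estimates like these then feed the later choices in the list (sharding, replication, caching, CDN placement).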
Background There is a huge demand for very high-speed analog-to-digital converters (ADCs), as they are needed for test and measurement equipment in laboratories and in high speed data communications systems. Most of the ADCs are based purely on electronic circuits, which have limited speeds and add a lot of impairments, limiting the bandwidth of the signals that can be digitized and the achievable signal-to-noise ratio. In the TS-ADC, this limitation is overcome by time-stretching the analog signal, which effectively slows down the signal in time prior to digitization. By doing so, the bandwidth (and carrier frequency) of the signal is compressed. Electronic ADCs that would have been too slow to digitize the original signal can now be used to capture and process this slowed down signal. Operation principle The time-stretch processor, which is generally an optical frontend, stretches the signal in time. It also divides the signal into multiple segments using a filter, for example, a wavelength-division multiplexing (WDM) filter, to ensure that the stretched replica of the original analog signal segments do not overlap each other in time after stretching. The time-stretched and slowed down signal segments are then converted into digital samples by slow electronic ADCs. Finally, these samples are collected by a digital signal processor (DSP) and rearranged in a manner such that output data is the" https://en.wikipedia.org/wiki/Moisture%20sensitivity%20level,"Moisture sensitivity level (MSL) is a rating that shows a device's susceptibility to damage due to absorbed moisture when subjected to reflow soldering as defined in J-STD-020. It relates to the packaging and handling precautions for some semiconductors. The MSL is an electronic standard for the time period in which a moisture sensitive device can be exposed to ambient room conditions (30 °C/85%RH at Level 1; 30 °C/60%RH at all other levels). Increasingly, semiconductors have been manufactured in smaller sizes. Components such as thin fine-pitch devices and ball grid arrays could be damaged during SMT reflow when moisture trapped inside the component expands. The expansion of trapped moisture can result in internal separation (delamination) of the plastic from the die or lead-frame, wire bond damage, die damage, and internal cracks. Most of this damage is not visible on the component surface. In extreme cases, cracks will extend to the component surface. In the most severe cases, the component will bulge and pop. This is known as the ""popcorn"" effect. This occurs when part temperature rises rapidly to a high maximum during the soldering (assembly) process. This does not occur when part temperature rises slowly and to a low maximum during a baking (preheating) process. Moisture sensitive devices are packaged in a moisture barrier antistatic bag with a desiccant and a moisture indicator card which is sealed. Moisture sensitivity levels are specified in technical standard IPC/JEDEC Moisture/reflow Sensitivity Classification for Nonhermetic Surface-Mount Devices. The times indicate how long components can be outside of dry storage before they have to be baked to remove any absorbed moisture. 
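The operating principle above (slow the signal down in time so its bandwidth is compressed before a conventional ADC samples it) can be illustrated with a toy numerical model. This is not the photonic implementation; the tone frequency, stretch factor, and ADC rate are assumed values chosen only to show the bandwidth compression.

```python
# Toy model of time-stretch digitization (not the photonic implementation).
# A fast tone is stretched in time by factor M, so a "slow" ADC whose sample
# rate would otherwise violate Nyquist can still capture it.
import numpy as np

f_signal = 10e9        # 10 GHz tone (assumed)
M        = 50          # stretch factor (assumed)
adc_rate = 1e9         # 1 GS/s electronic ADC (assumed)

t_adc = np.arange(0, 2e-6, 1.0 / adc_rate)        # ADC sampling instants
# Stretching by M means the ADC effectively sees the original waveform at t/M.
samples = np.sin(2 * np.pi * f_signal * (t_adc / M))

# The captured tone appears at f_signal / M, well below the ADC's Nyquist limit.
spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), 1.0 / adc_rate)
print(f"Dominant captured frequency: {freqs[np.argmax(spectrum)] / 1e6:.0f} MHz "
      f"(expected {f_signal / M / 1e6:.0f} MHz)")
```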
MSL 6 – Mandatory bake before use MSL 5A – 24 hours MSL 5 – 48 hours MSL 4 – 72 hours MSL 3 – 168 hours MSL 2A – 4 weeks MSL 2 – 1 year MSL 1 – Unlimited floor life Practical MSL-specified parts must be baked before assembly if their exposure has exceeded the r" https://en.wikipedia.org/wiki/Location%20arithmetic,"Location arithmetic (Latin arithmeticae localis) is the additive (non-positional) binary numeral systems, which John Napier explored as a computation technique in his treatise Rabdology (1617), both symbolically and on a chessboard-like grid. Napier's terminology, derived from using the positions of counters on the board to represent numbers, is potentially misleading because the numbering system is, in facts, non-positional in current vocabulary. During Napier's time, most of the computations were made on boards with tally-marks or jetons. So, unlike how it may be seen by the modern reader, his goal was not to use moves of counters on a board to multiply, divide and find square roots, but rather to find a way to compute symbolically with pen and paper. However, when reproduced on the board, this new technique did not require mental trial-and-error computations nor complex carry memorization (unlike base 10 computations). He was so pleased by his discovery that he said in his preface: Location numerals Binary notation had not yet been standardized, so Napier used what he called location numerals to represent binary numbers. Napier's system uses sign-value notation to represent numbers; it uses successive letters from the Latin alphabet to represent successive powers of two: a = 20 = 1, b = 21 = 2, c = 22 = 4, d = 23 = 8, e = 24 = 16 and so on. To represent a given number as a location numeral, that number is expressed as a sum of powers of two and then each power of two is replaced by its corresponding digit (letter). For example, when converting from a decimal numeral: 87 = 1 + 2 + 4 + 16 + 64 = 20 + 21 + 22 + 24 + 26 = abceg Using the reverse process, a location numeral can be converted to another numeral system. For example, when converting to a decimal numeral: abdgkl = 20 + 21 + 23 + 26 + 210 + 211 = 1 + 2 + 8 + 64 + 1024 + 2048 = 3147 Napier showed multiple methods of converting numbers in and out of his numeral system. These methods are similar t" https://en.wikipedia.org/wiki/Network%20Coordinate%20System,"A Network Coordinate System (NC system) is a system for predicting characteristics such as the latency or bandwidth of connections between nodes in a network by assigning coordinates to nodes. More formally, It assigns a coordinate embedding to each node in a network using an optimization algorithm such that a predefined operation estimates some directional characteristic of the connection between node and . Uses In general, Network Coordinate Systems can be used for peer discovery, optimal-server selection, and characteristic-aware routing. Latency Optimization When optimizing for latency as a connection characteristic i.e. for low-latency connections, NC systems can potentially help improve the quality of experience for many different applications such as: Online Games Forming game groups such that all the players are close to each other and thus have a smoother overall experience. Choosing servers as close to as many players in a given multiplayer game as possible. Automatically routing game packets through different servers so as to minimize the total latency between players who are actively interacting with each other in the game map. 
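The location-arithmetic conversions described above are easy to reproduce; the short sketch below uses the two worked examples from the text (87 ↔ abceg and abdgkl ↔ 3147) as checks.

```python
# Napier's location numerals: the letters a, b, c, ... stand for 1, 2, 4, 8, ...
import string

def to_location(n: int) -> str:
    """Express n as a sum of powers of two and replace each power by its letter."""
    letters = []
    for i, letter in enumerate(string.ascii_lowercase):
        if n & (1 << i):
            letters.append(letter)
    return "".join(letters)

def from_location(numeral: str) -> int:
    """Sum the powers of two named by the letters."""
    return sum(1 << (ord(ch) - ord("a")) for ch in numeral)

print(to_location(87))          # abceg  (1 + 2 + 4 + 16 + 64)
print(from_location("abdgkl"))  # 3147   (1 + 2 + 8 + 64 + 1024 + 2048)
```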
Content delivery networks Directing a user to the closest server that can handle a request to minimize latency. Voice over IP Automatically switch relay servers based on who is talking in a few-to-many or many-to-many voice chat to minimize latency between active participants. Peer-to-peer networks Can use the latency-predicting properties of NC systems to do a wide variety of routing optimizations in peer-to-peer networks. Onion routing networks Choose relays such as to minimize the total round trip delay to allow for a more flexible tradeoff between performance and anonymity. Physical positioning Latency correlates with the physical distances between computers in the real world. Thus, NC systems that model latency may be able to aid in locating the approximate physical area a computer resides in. Bandwid" https://en.wikipedia.org/wiki/Radio%20access%20technology,"A radio access technology (RAT) is the underlying physical connection method for a radio communication network. Many modern mobile phones support several RATs in one device such as Bluetooth, Wi-Fi, and GSM, UMTS, LTE or 5G NR. The term RAT was traditionally used in mobile communication network interoperability. More recently, the term RAT is used in discussions of heterogeneous wireless networks. The term is used when a user device selects between the type of RAT being used to connect to the Internet. This is often performed similar to access point selection in IEEE 802.11 (Wi-Fi) based networks. Inter-RAT (IRAT) handover A mobile terminal, while connected using a RAT, performs neighbour cell measurements and sends measurement report to the network. Based on this measurement report provided by the mobile terminal, the network can initiate handover from one RAT to another, e.g. from WCDMA to GSM or vice versa. Once the handover with the new RAT is completed, the channels used by the previous RAT are released. See also Radio access network (RAN)" https://en.wikipedia.org/wiki/P4%20%28programming%20language%29,"P4 is a programming language for controlling packet forwarding planes in networking devices, such as routers and switches. In contrast to a general purpose language such as C or Python, P4 is a domain-specific language with a number of constructs optimized for network data forwarding. P4 is distributed as open-source, permissively licensed code, and is maintained by the P4 Project (formerly the P4 Language Consortium), a not-for-profit organization hosted by the Open Networking Foundation. History P4 was originally described in a 2014 SIGCOMM CCR paper titled “Programming Protocol-Independent Packet Processors”—the alliterative name shortens to ""P4"". The first P4 workshop took place in June 2015 at Stanford University. An updated specification of P4, called P4-16, was released between 2016 and 2017, replacing P4-14, the original specification of P4. Design As the language is specifically targeted at packet forwarding applications, the list of requirements or design choices is somewhat specific to those use cases. The language is designed to meet several goals: Target independence P4 programs are designed to be implementation-independent: they can be compiled against many different types of execution machines such as general-purpose CPUs, FPGAs, system(s)-on-chip, network processors, and ASICs. These different types of machines are known as P4 targets, and each target must be provided along with a compiler that maps the P4 source code into a target switch model. 
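A network coordinate system of the kind described above can be sketched as a small Euclidean embedding fitted to measured latencies and then used to predict unmeasured pairs. This is a toy gradient-descent sketch in the spirit of such systems, not any specific published algorithm; the latency matrix and tuning constants are invented.

```python
# Toy network coordinate system: embed nodes in 2-D so Euclidean distance
# approximates measured round-trip latency, then predict an unmeasured pair.
# The latency values and all tuning constants are assumptions for illustration.
import math
import random

rtt = {  # measured latencies in ms between some node pairs (assumed values)
    ("A", "B"): 20, ("A", "C"): 90, ("B", "C"): 80,
    ("B", "D"): 35, ("C", "D"): 60,
}
nodes = {"A", "B", "C", "D"}
random.seed(1)
coords = {n: [random.uniform(0, 100), random.uniform(0, 100)] for n in nodes}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Gradient descent on the squared error between embedded distance and measured RTT.
lr = 0.01
for _ in range(5000):
    for (u, v), measured in rtt.items():
        d = dist(coords[u], coords[v]) or 1e-9
        err = d - measured
        for k in range(2):
            grad = err * (coords[u][k] - coords[v][k]) / d
            coords[u][k] -= lr * grad
            coords[v][k] += lr * grad

# Predict a pair that was never measured directly.
print(f"Predicted A-D latency: {dist(coords['A'], coords['D']):.1f} ms")
```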
The compiler may be embedded in the target device, an externally running software, or even a cloud service. As many of the initial targets for P4 programs were used for simple packet switching it is very common to hear the term ""P4 switch"" used, even though ""P4 target"" is more formally correct. Protocol independence P4 is designed to be protocol-independent: the language has no native support for even common protocols such as IP, Ethernet, TCP, VxLAN, or MPLS. Instead, the P4 programmer describes the" https://en.wikipedia.org/wiki/Standard%20Commands%20for%20Programmable%20Instruments,"The Standard Commands for Programmable Instruments (SCPI; often pronounced ""skippy"") defines a standard for syntax and commands to use in controlling programmable test and measurement devices, such as automatic test equipment and electronic test equipment. Overview SCPI was defined as an additional layer on top of the specification ""Standard Codes, Formats, Protocols, and Common Commands"". The standard specifies a common syntax, command structure, and data formats, to be used with all instruments. It introduced generic commands (such as CONFigure and MEASure) that could be used with any instrument. These commands are grouped into subsystems. SCPI also defines several classes of instruments. For example, any controllable power supply would implement the same DCPSUPPLY base functionality class. Instrument classes specify which subsystems they implement, as well as any instrument-specific features. The physical hardware communications link is not defined by SCPI. While it was originally created for the IEEE-488.1 (GPIB) bus, SCPI can also be used with RS-232, RS-422, Ethernet, USB, VXIbus, HiSLIP, etc. SCPI commands are ASCII textual strings, which are sent to the instrument over the physical layer (e.g., IEEE-488.1). Commands are a series of one or more keywords, many of which take parameters. In the specification, keywords are written CONFigure: The entire keyword can be used, or it can be abbreviated to just the uppercase portion. Responses to query commands are typically ASCII strings. However, for bulk data, binary formats can be used. The SCPI specification consists of four volumes: Volume 1: ""Syntax and Style"", Volume 2: ""Command Reference"", Volume 3: ""Data Interchange Format"", Volume 4: ""Instrument Classes"". The specification was originally released as non-free printed manuals, then later as a free PDF file. SCPI history First released in 1990, SCPI originated as an additional layer for IEEE-488. IEEE-488.1 specified the physical and electrical bus, and" https://en.wikipedia.org/wiki/Log%20management,"Log management (LM) comprises an approach to dealing with large volumes of computer-generated log messages (also known as audit records, audit trails, event-logs, etc.). Log management generally covers: Log collection Centralized log aggregation Long-term log storage and retention Log rotation Log analysis (in real-time and in bulk after storage) Log search and reporting. Overview The primary drivers for log management implementations are concerns about security, system and network operations (such as system or network administration) and regulatory compliance. Logs are generated by nearly every computing device, and can often be directed to different locations both on a local file system or remote system. Effectively analyzing large volumes of diverse logs can pose many challenges, such as: Volume: log data can reach hundreds of gigabytes of data per day for a large organization. 
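The SCPI keyword convention described above (a keyword written as CONFigure accepts either the full word or just the uppercase short form) can be expressed as a small matcher. This is a sketch of the rule as stated in the text, with case-insensitive comparison.

```python
# Match a received token against a SCPI keyword written as e.g. "CONFigure":
# the uppercase part is the short form, the whole word is the long form.
# Matching here is case-insensitive; intermediate spellings are rejected.

def matches(keyword: str, token: str) -> bool:
    short = "".join(ch for ch in keyword if ch.isupper())
    long_form = keyword.upper()
    return token.upper() in (short, long_form)

for t in ["CONF", "conf", "configure", "CONFIG"]:
    print(t, matches("CONFigure", t))
# CONF, conf and configure match; CONFIG (neither short nor long form) does not.
```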
Simply collecting, centralizing and storing data at this volume can be challenging. Normalization: logs are produced in multiple formats. The process of normalization is designed to provide a common output for analysis from diverse sources. Velocity: The speed at which logs are produced from devices can make collection and aggregation difficult Veracity: Log events may not be accurate. This is especially problematic for systems that perform detection, such as intrusion detection systems. Users and potential users of log management may purchase complete commercial tools or build their own log-management and intelligence tools, assembling the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process and organizations often make mistakes while approaching it. Logging can produce technical information usable for the maintenance of applications or websites. It can serve: to define whether a reported bug is actually a bug to help analyze, reproduce and solve bugs to help test new features i" https://en.wikipedia.org/wiki/Symbols%20of%20grouping,"In mathematics and related subjects, understanding a mathematical expression depends on an understanding of symbols of grouping, such as parentheses (), brackets [], and braces {}. These same symbols are also used in ways where they are not symbols of grouping. For example, in the expression 3(x+y) the parentheses are symbols of grouping, but in the expression (3, 5) the parentheses may indicate an open interval. The most common symbols of grouping are the parentheses and the brackets, and the brackets are usually used to avoid too many repeated parentheses. For example, to indicate the product of binomials, parentheses are usually used, thus: . But if one of the binomials itself contains parentheses, as in one or more pairs of parentheses may be replaced by brackets, thus: . Beyond elementary mathematics, brackets are mostly used for other purposes, e.g. to denote a closed interval, or an equivalence class, so they appear rarely for grouping. The usage of the word ""parentheses"" varies from country to country. In the United States, the word parentheses (singular ""parenthesis"") is used for the curved symbol of grouping, but in many other countries the curved symbol of grouping is called a ""bracket"" and the symbol of grouping with two right angles joined is called a ""square bracket"". The symbol of grouping knows as ""braces"" has two major uses. If two of these symbols are used, one on the left and the mirror image of it on the right, it almost always indicates a set, as in , the set containing three members, , , and . But if it is used only on the left, it groups two or more simultaneous equations. There are other symbols of grouping. One is the bar above an expression, as in the square root sign in which the bar is a symbol of grouping. For example is the square root of the sum. The bar is also a symbol of grouping in repeated decimal digits. A decimal point followed by one or more digits with a bar over them, for example 0., represents the repeating decimal 0.1" https://en.wikipedia.org/wiki/Gauss%27s%20Pythagorean%20right%20triangle%20proposal,"Gauss's Pythagorean right triangle proposal is an idea attributed to Carl Friedrich Gauss for a method to signal extraterrestrial beings by constructing an immense right triangle and three squares on the surface of the Earth. 
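The normalization challenge described above (logs arrive in multiple formats and must be mapped to a common output for analysis) is illustrated by the sketch below. Both input formats and all field names are invented for the example.

```python
# Sketch of log normalization: two hypothetical input formats are mapped to a
# common record layout so they can be analyzed together.
import json

COMMON_FIELDS = ("timestamp", "host", "severity", "message")

def normalize_syslog_like(line: str) -> dict:
    # e.g. "2024-05-01T12:00:00 web01 ERROR disk full"  (assumed format)
    ts, host, severity, message = line.split(" ", 3)
    return dict(zip(COMMON_FIELDS, (ts, host, severity.upper(), message)))

def normalize_json_log(line: str) -> dict:
    # e.g. '{"time": ..., "node": ..., "level": ..., "msg": ...}'  (assumed format)
    record = json.loads(line)
    return {
        "timestamp": record["time"],
        "host": record["node"],
        "severity": record["level"].upper(),
        "message": record["msg"],
    }

logs = [
    ("syslog", "2024-05-01T12:00:00 web01 ERROR disk full"),
    ("json", '{"time": "2024-05-01T12:00:05", "node": "db02", "level": "warn", "msg": "slow query"}'),
]
normalizers = {"syslog": normalize_syslog_like, "json": normalize_json_log}
for kind, line in logs:
    print(normalizers[kind](line))
```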
The shapes would be a symbolic representation of the Pythagorean theorem, large enough to be seen from the Moon or Mars. Although credited in numerous sources as originating with Gauss, with exact details of the proposal set out, the specificity of detail, and even whether Gauss made the proposal, have been called into question. Many of the earliest sources do not actually name Gauss as the originator, instead crediting a ""German astronomer"" or using other nonspecific descriptors, and in some cases naming a different author entirely. The details of the proposal also change significantly upon different retellings. Nevertheless, Gauss's writings reveal a belief and interest in finding a method to contact extraterrestrial life, and that he did, at the least, propose using amplified light using a heliotrope, his own 1818 invention, to signal supposed inhabitants of the Moon. Proposal Carl Friedrich Gauss is credited with an 1820 proposal for a method to signal extraterrestrial beings in the form of drawing an immense right triangle and three squares on the surface of the Earth, intended as a symbolical representation of the Pythagorean theorem, large enough to be seen from the Moon or Mars. Details vary between sources, but typically the ""drawing"" was to be constructed on the Siberian tundra, and made up of vast strips of pine forest forming the right triangle's borders, with the interior of the drawing and exterior squares composed of fields of wheat. Gauss is said to have been convinced that Mars harbored intelligent life and that this geometric figure, invoking the Pythagorean theorem through the squares on the outside borders (sometimes called a ""windmill diagram"", as originated by Euclid), would demonstrate to such alien observers the recipr" https://en.wikipedia.org/wiki/Navigation%20mesh,"A navigation mesh, or navmesh, is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics, where it has been called a meadow map, and was popularized in video game AI in 2000. Description A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph. Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with one of the large number of graph search algorithms, such as A*. Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment. Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the ""true"" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can. Creation Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor. 
This approach can be quite labor intensive. Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh. It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be crea" https://en.wikipedia.org/wiki/Photoperiodism,"Photoperiodism is the physiological reaction of organisms to the length of night or a dark period. It occurs in plants and animals. Plant photoperiodism can also be defined as the developmental responses of plants to the relative lengths of light and dark periods. They are classified under three groups according to the photoperiods: short-day plants, long-day plants, and day-neutral plants. In animals photoperiodism (sometimes called seasonality) is the suite of physiological changes that occur in response to changes in day length. This allows animals to respond to a temporally changing environment associated with changing seasons as the earth orbits the sun. Plants Many flowering plants (angiosperms) use a circadian rhythm together with photoreceptor protein, such as phytochrome or cryptochrome, to sense seasonal changes in night length, or photoperiod, which they take as signals to flower. In a further subdivision, obligate photoperiodic plants absolutely require a long or short enough night before flowering, whereas facultative photoperiodic plants are more likely to flower under one condition. Phytochrome comes in two forms: Pr and Pfr. Red light (which is present during the day) converts phytochrome to its active form (Pfr) which then stimulates various processes such as germination, flowering or branching. In comparison, plants receive more far-red in the shade, and this converts phytochrome from Pfr to its inactive form, Pr, inhibiting germination. This system of Pfr to Pr conversion allows the plant to sense when it is night and when it is day. Pfr can also be converted back to Pr by a process known as dark reversion, where long periods of darkness trigger the conversion of Pfr. This is important in regards to plant flowering. Experiments by Halliday et al. showed that manipulations of the red-to far-red ratio in Arabidopsis can alter flowering. They discovered that plants tend to flower later when exposed to more red light, proving that red light i" https://en.wikipedia.org/wiki/Bipolar%20transistor%20biasing,"Bipolar transistors must be properly biased to operate correctly. In circuits made with individual devices (discrete circuits), biasing networks consisting of resistors are commonly employed. Much more elaborate biasing arrangements are used in integrated circuits, for example, bandgap voltage references and current mirrors. The voltage divider configuration achieves the correct voltages by the use of resistors in certain patterns. By selecting the proper resistor values, stable current levels can be achieved that vary only little over temperature and with transistor properties such as β. The operating point of a device, also known as bias point, quiescent point, or Q-point, is the point on the output characteristics that shows the DC collector–emitter voltage (Vce) and the collector current (Ic) with no input signal applied. 
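Pathfinding over a navigation mesh, as described above, reduces to graph search over the polygon adjacency graph. The sketch below runs A* over a tiny invented mesh, using polygon centroids and straight-line distance as the heuristic; real systems refine the resulting polygon corridor into an actual path.

```python
# A* over a tiny navigation-mesh adjacency graph. Each convex polygon is
# reduced to its centroid; the heuristic is straight-line distance to the goal.
# The mesh layout here is invented for illustration.
import heapq
import math

centroids = {"P1": (0, 0), "P2": (5, 0), "P3": (5, 5), "P4": (10, 5)}
adjacent  = {"P1": ["P2"], "P2": ["P1", "P3"], "P3": ["P2", "P4"], "P4": ["P3"]}

def dist(a, b):
    return math.dist(centroids[a], centroids[b])

def astar(start, goal):
    frontier = [(0.0, start)]
    came_from, cost = {start: None}, {start: 0.0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        for nxt in adjacent[current]:
            new_cost = cost[current] + dist(current, nxt)
            if nxt not in cost or new_cost < cost[nxt]:
                cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + dist(nxt, goal), nxt))
                came_from[nxt] = current
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

print(astar("P1", "P4"))   # ['P1', 'P2', 'P3', 'P4']
```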
Bias circuit requirements A bias network is selected to stabilize the operating point of the transistor, by reducing the following effects of device variability, temperature, and voltage changes: The gain of a transistor can vary significantly between different batches, which results in widely different operating points for sequential units in serial production or after replacement of a transistor. Due to the Early effect, the current gain is affected by the collector–emitter voltage. Both gain and base–emitter voltage depend on the temperature. The leakage current also increases with temperature. A bias circuit may be composed of only resistors, or may include elements such as temperature-dependent resistors, diodes, or additional voltage sources, depending on the range of operating conditions expected. Signal requirements For analog operation of a class-A amplifier, the Q-point is placed so the transistor stays in active mode (does not shift to operation in the saturation region or cut-off region) across the input signal's range. Often, the Q-point is established near the center of the active region of a transistor characteristic t" https://en.wikipedia.org/wiki/Viridiplantae,"Viridiplantae (literally ""green plants"") constitute a clade of eukaryotic organisms that comprises approximately 450,000–500,000 species that play important roles in both terrestrial and aquatic ecosystems. They include the green algae, which are primarily aquatic, and the land plants (embryophytes), which emerged from within them. Green algae traditionally excludes the land plants, rendering them a paraphyletic group. However it is accurate to think of land plants as a kind of alga. Since the realization that the embryophytes emerged from within the green algae, some authors are starting to include them. They have cells with cellulose in their cell walls, and primary chloroplasts derived from endosymbiosis with cyanobacteria that contain chlorophylls a and b and lack phycobilins. Corroborating this, a basal phagotroph archaeplastida group has been found in the Rhodelphydia. In some classification systems, the group has been treated as a kingdom, under various names, e.g. Viridiplantae, Chlorobionta, or simply Plantae, the latter expanding the traditional plant kingdom to include the green algae. Adl et al., who produced a classification for all eukaryotes in 2005, introduced the name Chloroplastida for this group, reflecting the group having primary chloroplasts with green chlorophyll. They rejected the name Viridiplantae on the grounds that some of the species are not plants, as understood traditionally. The Viridiplantae are made up of two clades: Chlorophyta and Streptophyta as well as the basal Mesostigmatophyceae and Chlorokybophyceae. Together with Rhodophyta and glaucophytes, Viridiplantae are thought to belong to a larger clade called Archaeplastida or Primoplantae. Phylogeny and classification Simplified phylogeny of the Viridiplantae, according to Leliaert et al. 2012. Viridiplantae Chlorophyta core chlorophytes Ulvophyceae Cladophorales Dasycladales Bryopsidales Trentepohliales Ulvales-Ulotrichales Oltmannsiellopsidales Chlorophyceae Oedogoniales Chae" https://en.wikipedia.org/wiki/Roshd%20Biological%20Education,"Roshd Biological Education is a quarterly science educational magazine covering recent developments in biology and biology education for a biology teacher Persian -speaking audience. 
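The voltage-divider bias arrangement discussed above lends itself to a quick Q-point estimate using the usual Thévenin approximation of the divider. All component values, the current gain, and the 0.7 V base-emitter drop below are assumed textbook-style numbers, not values from the text.

```python
# Q-point estimate for a voltage-divider biased BJT (standard textbook analysis).
# Component values, beta and the 0.7 V base-emitter drop are assumed examples.

Vcc, R1, R2 = 12.0, 47e3, 10e3       # supply and divider resistors
Rc, Re      = 2.2e3, 1.0e3           # collector and emitter resistors
beta, Vbe   = 150.0, 0.7

# Thevenin equivalent of the divider as seen by the base.
Vth = Vcc * R2 / (R1 + R2)
Rth = R1 * R2 / (R1 + R2)

Ib  = (Vth - Vbe) / (Rth + (beta + 1) * Re)
Ic  = beta * Ib
Ie  = (beta + 1) * Ib
Vce = Vcc - Ic * Rc - Ie * Re        # DC collector-emitter voltage at the Q-point

print(f"Ic  = {Ic * 1e3:.2f} mA")
print(f"Vce = {Vce:.2f} V")
```

Because the emitter resistor appears multiplied by (beta + 1) in the base loop, the resulting Ic and Vce vary only weakly with beta, which is the stabilising property the excerpt describes.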
Founded in 1985, it is published by The Teaching Aids Publication Bureau, Organization for Educational Planning and Research, Ministry of Education, Iran. Roshd Biological Education has an editorial board composed of Iranian biologists, experts in biology education, science journalists and biology teachers. It is read by both biology teachers and students, as a way of launching innovations and new trends in biology education, and helping biology teachers to teach biology in better and more effective ways. Magazine layout As of Autumn 2012, the magazine is laid out as follows: Editorial—often offering a view of point from editor in chief on an educational and/or biological topics. Explore— New research methods and results on biology and/or education. World— Reports and explores on biological education worldwide. In Brief—Summaries of research news and discoveries. Trends—showing how new technology is altering the way we live our lives. Point of View—Offering personal commentaries on contemporary topics. Essay or Interview—often with a pioneer of a biological and/or educational researcher or an influential scientific educational leader. Muslim Biologists—Short histories of Muslim Biologists. Environment—An article on Iranian environment and its problems. News and Reports—Offering short news and reports events on biology education. In Brief—Short articles explaining interesting facts. Questions and Answers—Questions about biology concepts and their answers. Book and periodical Reviews—About new publication on biology and/or education. Reactions—Letter to the editors. Editorial staff Mohammad Karamudini, editor in chief History Roshd Biological Education started in 1985 together with many other magazines in other science and art. The first editor was Dr. Nouri-Dalooi, th" https://en.wikipedia.org/wiki/Bletting,"Bletting is a process of softening that certain fleshy fruits undergo, beyond ripening. There are some fruits that are either sweeter after some bletting, such as sea buckthorn, or for which most varieties can be eaten raw only after bletting, such as medlars, persimmons, quince, service tree fruit, and wild service tree fruit (popularly known as chequers). The rowan or mountain ash fruit must be bletted and cooked to be edible, to break down the toxic parasorbic acid (hexenollactone) into sorbic acid. History The English verb to blet was coined by John Lindley, in his Introduction to Botany (1835). He derived it from the French poire blette meaning 'overripe pear'. ""After the period of ripeness"", he wrote, ""most fleshy fruits undergo a new kind of alteration; their flesh either rots or blets."" In Shakespeare's Measure for Measure, he alluded to bletting when he wrote (IV. iii. 167) ""They would have married me to the rotten Medler."" Thomas Dekker also draws a similar comparison in his play The Honest Whore: ""I scarce know her, for the beauty of her cheek hath, like the moon, suffered strange eclipses since I beheld it: women are like medlars – no sooner ripe but rotten."" Elsewhere in literature, D. H. Lawrence dubbed medlars ""wineskins of brown morbidity."" There is also an old saying, used in Don Quixote, that ""time and straw make medlars ripe"", referring to the bletting process. Process Chemically speaking, bletting brings about an increase in sugars and a decrease in the acids and tannins that make the unripe fruit astringent. Ripe medlars, for example, are taken from the tree, placed somewhere cool, and allowed to further ripen for several weeks. 
In Trees and Shrubs, horticulturist F. A. Bush wrote about medlars that ""if the fruit is wanted it should be left on the tree until late October and stored until it appears in the first stages of decay; then it is ready for eating. More often the fruit is used for making jelly."" Ideally, the fruit should be harve" https://en.wikipedia.org/wiki/Gurzadyan%20theorem,"In cosmology, Gurzadyan theorem, proved by Vahe Gurzadyan, states the most general functional form for the force satisfying the condition of identity of the gravity of the sphere and of a point mass located in the sphere's center. This theorem thus refers to the first statement of Isaac Newton’s shell theorem (the identity mentioned above) but not the second one, namely, the absence of gravitational force inside a shell. The theorem had entered, for example, in physics manual website and its importance for cosmology outlined in several papers as well as in shell theorem. The formula and the cosmological constant The formula for the force derived in has the form where and are constants. The first term is the familiar law of universal gravitation, the second one corresponds to the cosmological constant term in general relativity and McCrea-Milne cosmology. Then the field is force-free only in the center of a shell but the confinement (oscillator) term does not change the initial symmetry of the Newtonian field. Also, this field corresponds to the only field possessing the property of the Newtonian one: the closing of orbits at any negative value of energy, i.e. the coincidence of the period of variation of the value of the radius vector with that of its revolution by (resonance principle) . Consequences: cosmological constant as a physical constant Einstein named the cosmological constant as a universal constant, introducing it to define the static cosmological model. From this theorem the cosmological constant emerges as additional constant of gravity along with the Newton’s gravitational constant . Then, the cosmological constant is dimension independent and matter-uncoupled and hence can be considered even more universal than Newton’s gravitational constant. For joining the set of fundamental constants , the gravitational Newton’s constant, the speed of light and the Planck constant, yields and a dimensionless quantity emerges for the 4-consta" https://en.wikipedia.org/wiki/Bracket%20%28mathematics%29,"In mathematics, brackets of various typographical forms, such as parentheses ( ), square brackets [ ], braces { } and angle brackets ⟨ ⟩, are frequently used in mathematical notation. Generally, such bracketing denotes some form of grouping: in evaluating an expression containing a bracketed sub-expression, the operators in the sub-expression take precedence over those surrounding it. Sometimes, for the clarity of reading, different kinds of brackets are used to express the same meaning of precedence in a single expression with deep nesting of sub-expressions. Historically, other notations, such as the vinculum generally, were similarly used for grouping. In present-day use, these notations all have specific meanings. The earliest use of brackets to indicate aggregation (i.e. grouping) was suggested in 1608 by Christopher Clavius, and in 1629 by Albert Girard. Symbols for representing angle brackets A variety of different symbols are used to represent angle brackets. 
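The Gurzadyan-theorem excerpt above lost its formulas in extraction ("the force derived in has the form where and are constants"). Based solely on the surrounding description, an inverse-square (Newtonian) term plus a term corresponding to the cosmological constant, a plausible reconstruction of the force law is the following; this is an editorial reconstruction, not text recovered from the source.

```latex
% Plausible reconstruction of the stripped formula (not verbatim from the source):
% an inverse-square Newtonian term plus a linear term tied to the cosmological constant.
F(r) = -\frac{A}{r^{2}} + \Lambda\, r, \qquad A,\ \Lambda \ \text{constants.}
```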
In e-mail and other ASCII text, it is common to use the less-than (<) and greater-than (>) signs to represent angle brackets, because ASCII does not include angle brackets. Unicode has pairs of dedicated characters; other than less-than and greater-than symbols, these include: and and and and and , which are deprecated In LaTeX the markup is \langle and \rangle: . Non-mathematical angled brackets include: and , used in East-Asian text quotation and , which are dingbats There are additional dingbats with increased line thickness, and some angle quotation marks and deprecated characters. Algebra In elementary algebra, parentheses ( ) are used to specify the order of operations. Terms inside the bracket are evaluated first; hence 2×(3 + 4) is 14, is 2 and (2×3) + 4 is 10. This notation is extended to cover more general algebra involving variables: for example . Square brackets are also often used in place of a second set of parentheses when they are nested—so as to provide a v" https://en.wikipedia.org/wiki/Frenetic%20%28programming%20language%29,"Frenetic is a domain-specific language for programming software-defined networking (SDN). This domain-specific programming language allows network operators, rather than manually configuring each connected network device, to program the network as a whole. Frenetic is designed to solve major OpenFlow/NOX programming problems. In particular, Frenetic introduces a set of purely functional abstractions that enable modular program development, defines high-level, programmer-centric packet-processing operators, and eliminates many of the difficulties of the two-tier programming model by introducing a see-every-packet programming paradigm. Hence Frenetic is a functional reactive programming language operating at a packet level of abstraction." https://en.wikipedia.org/wiki/Unit%20of%20work,"A unit of work is a behavioral pattern in software development. Martin Fowler has defined it as everything one does during a business transaction which can affect the database. When the unit of work is finished it will provide everything that needs to be done to change the database as a result of the work. A unit of work encapsulates one or more code repositories[de] and a list of actions to be performed which are necessary for the successful implementation of self-contained and consistent data change. A unit of work is also responsible for handling concurrency issues, and can be used for transactions and stability patterns.[de] See also ACID (atomicity, consistency, isolation, durability), a set of properties of database transactions Database transaction, a unit of work within a database management system Equi-join, a type of join where only equal signs are used in the join predicate Lossless join decomposition, decomposition of a relation such that a natural join of the resulting relations yields back the original relation" https://en.wikipedia.org/wiki/Journal%20of%20Bioscience%20and%20Bioengineering,"The Journal of Bioscience and Bioengineering is a monthly peer-reviewed scientific journal. The editor-in-chief is Noriho Kamiya (Kyushu University). It is published by The Society for Biotechnology, Japan and distributed outside Japan by Elsevier. It was founded in 1923 as a Japanese-language journal and took its current title in 1999. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.0.15." 
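The unit-of-work pattern summarised above can be sketched minimally: changes made during a business transaction are recorded and then applied to the backing store in one commit. The in-memory "database" and entity names below are invented stand-ins, not any particular framework's API.

```python
# Minimal Unit of Work sketch: changes made during a "business transaction"
# are collected and applied to the backing store in a single commit.

class UnitOfWork:
    def __init__(self, database: dict):
        self._db = database
        self._new, self._dirty, self._removed = [], [], []

    def register_new(self, key, obj):    self._new.append((key, obj))
    def register_dirty(self, key, obj):  self._dirty.append((key, obj))
    def register_removed(self, key):     self._removed.append(key)

    def commit(self):
        """Apply all recorded changes at once."""
        for key, obj in self._new + self._dirty:
            self._db[key] = obj
        for key in self._removed:
            self._db.pop(key, None)
        self._new, self._dirty, self._removed = [], [], []

db = {"order:1": {"status": "open"}}
uow = UnitOfWork(db)
uow.register_dirty("order:1", {"status": "shipped"})
uow.register_new("order:2", {"status": "open"})
uow.commit()
print(db)
```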
https://en.wikipedia.org/wiki/Big%20Jake%20%28horse%29,"Big Jake (March 2001 – June 2021) was a red flaxen Belgian gelding horse noted for his extreme height. He stood at tall and weighed . According to the Guinness World Records, Big Jake broke the record for the world's tallest living horse when he was measured in 2010, and he held that record for the remainder of his life. After Sampson at (foaled 1846, in Toddington Mills, Bedfordshire, England), he is the second-tallest horse on record. Big Jake was born in 2001 in the U.S. state of Nebraska, weighing approximately , which is about heavier than is typical for his breed. His parents were normal-sized, and he was tall as a foal, but not exceptionally so. Big Jake was purchased by a relative of his eventual owner Jerry Gilbert, who took ownership when it became apparent that the horse would become very large and require special accommodation. Gilbert kept Big Jake at Smokey Hollow Farm, near Poynette, Wisconsin, feeding him two to three buckets of grain and a whole bale of hay daily. His stall was almost twice the size of that for a regular horse and he was transported in semi-trailers due to his size. Big Jake competed in draft horse showing competitions before retiring in 2013, and made regular appearances at the Wisconsin State Fair. Visitors to the farm were offered barn tours, which included meeting Big Jake. Big Jake's death was announced by Smokey Hollow Farm on June 27, 2021, with Gilbert's wife stating that the death had taken place approximately two weeks prior but declining to give the media an exact date. Jerry Gilbert hailed Big Jake as a ""gentle giant"", and stated that he intended to keep his stall empty as a memorial. Explanatory notes" https://en.wikipedia.org/wiki/Nichols%20plot,"The Nichols plot is a plot used in signal processing and control design, named after American engineer Nathaniel B. Nichols. Use in control design Given a transfer function, with the closed-loop transfer function defined as, the Nichols plots displays versus . Loci of constant and are overlaid to allow the designer to obtain the closed loop transfer function directly from the open loop transfer function. Thus, the frequency is the parameter along the curve. This plot may be compared to the Bode plot in which the two inter-related graphs - versus and versus ) - are plotted. In feedback control design, the plot is useful for assessing the stability and robustness of a linear system. This application of the Nichols plot is central to the quantitative feedback theory (QFT) of Horowitz and Sidi, which is a well known method for robust control system design. In most cases, refers to the phase of the system's response. Although similar to a Nyquist plot, a Nichols plot is plotted in a Cartesian coordinate system while a Nyquist plot is plotted in a Polar coordinate system. See also Hall circles Bode plot Nyquist plot Transfer function" https://en.wikipedia.org/wiki/Context-aware%20pervasive%20systems,"Context-aware computing refers to a general class of mobile systems that can sense their physical environment, and adapt their behavior accordingly. Three important aspects of context are: where you are; who you are with; and what resources are nearby. Although location is a primary capability, location-aware does not necessarily capture things of interest that are mobile or changing. 
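The curve a Nichols plot displays, open-loop gain in decibels against open-loop phase with frequency as the parameter, can be tabulated directly for a given transfer function. The open-loop system below, L(s) = 4 / (s(s + 1)) with unity feedback, is an assumed example chosen only to generate data; plotting libraries are omitted.

```python
# Tabulate the Nichols-plot curve for an assumed open loop L(s) = 4 / (s (s + 1)).
import numpy as np

w = np.logspace(-1, 1, 9)            # frequency points in rad/s (assumed range)
s = 1j * w
L = 4.0 / (s * (s + 1.0))            # open-loop frequency response
T = L / (1.0 + L)                    # closed-loop response (unity feedback)

gain_db  = 20 * np.log10(np.abs(L))
phase_dg = np.degrees(np.unwrap(np.angle(L)))

for wi, g, p, t in zip(w, gain_db, phase_dg, 20 * np.log10(np.abs(T))):
    print(f"w={wi:6.2f}  |L|={g:7.2f} dB  angle(L)={p:8.2f} deg  |T|={t:6.2f} dB")
```

Overlaying loci of constant closed-loop magnitude and phase on these axes is what lets the designer read the closed-loop behaviour straight off the open-loop curve.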
Context-aware in contrast is used more generally to include nearby people, devices, lighting, noise level, network availability, and even the social situation, e.g., whether you are with your family or a friend from school. History The concept emerged from ubiquitous computing research at Xerox PARC and elsewhere in the early 1990s. The term 'context-aware' was first used by Schilit and Theimer in their 1994 paper Disseminating Active Map Information to Mobile Hosts where they describe a model of computing in which users interact with many different mobile and stationary computers and classify a context-aware systems as one that can adapt according to its location of use, the collection of nearby people and objects, as well as the changes to those objects over time over the course of the day. See also Ambient intelligence Context awareness Differentiated service (design pattern) Locative Media" https://en.wikipedia.org/wiki/Invariant%20%28mathematics%29,"In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases ""invariant under"" and ""invariant to"" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class. Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects. Examples A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting. An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change. The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication. Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios a" https://en.wikipedia.org/wiki/Recursive%20filter,"In signal processing, a recursive filter is a type of filter which reuses one or more of its outputs as an input. This feedback typically results in an unending impulse response (commonly referred to as infinite impulse response (IIR)), characterised by either exponentially growing, decaying, or sinusoidal signal output components. However, a recursive filter does not always have an infinite impulse response. Some implementations of moving average filter are recursive filters but with a finite impulse response. 
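Two of the invariances mentioned above, distance on the number line under adding the same quantity to both numbers (but not under multiplication), and angles under scaling, can be checked numerically in a few lines; the specific numbers are arbitrary.

```python
# Quick numerical check of the invariances described above.
import math

a, b, shift, scale = 2.0, 7.0, 13.0, 3.0

# Distance on the number line is invariant under translation...
assert abs((a + shift) - (b + shift)) == abs(a - b)
# ...but not under multiplication (distance scales with the factor).
assert abs(a * scale - b * scale) != abs(a - b)

# The angle at the origin between two points is invariant under scaling.
def angle(p, q):
    return math.atan2(q[1], q[0]) - math.atan2(p[1], p[0])

p, q = (1.0, 0.0), (1.0, 1.0)
assert math.isclose(angle(p, q),
                    angle((scale * p[0], scale * p[1]), (scale * q[0], scale * q[1])))
print("invariance checks passed")
```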
Non-recursive Filter Example: y[n] = 0.5x[n − 1] + 0.5x[n]. Recursive Filter Example: y[n] = 0.5y[n − 1] + 0.5x[n]. Examples of recursive filters Kalman filter Signal processing External links IIR Filter Design on the Google Play Store" https://en.wikipedia.org/wiki/Pregnancy%20over%20age%2050,"Pregnancy over the age of 50 has become possible for more women due to advances in assisted reproductive technology, in particular egg donation. Typically, a woman's fecundity ends with menopause, which, by definition, is 12 consecutive months without having had any menstrual flow at all. During perimenopause, the menstrual cycle and the periods become irregular and eventually stop altogether. The female biological clock can vary greatly from woman to woman. A woman's individual level of fertility can be tested through a variety of methods. In the United States, between 1997 and 1999, 539 births were reported among mothers over age 50 (four per 100,000 births), with 194 being over 55. The oldest recorded mother to date conceived at 73 years of age. According to statistics from the Human Fertilisation and Embryology Authority, in the UK more than 20 babies are born to women over age 50 per year through in vitro fertilization with the use of donor oocytes (eggs). Maria del Carmen Bousada de Lara formerly held the record of the oldest verified mother; she was aged 66 years 358 days when she gave birth to twins; she was 130 days older than Adriana Iliescu, who gave birth in 2005 to a baby girl. In both cases, the children were conceived through IVF with donor eggs. The oldest verified mother to conceive naturally (listed currently in the Guinness Records) is Dawn Brooke (Guernsey); she conceived a son at the age of 59 in 1997. Erramatti Mangamma currently holds the record for being the oldest living mother who gave birth at the age of 73 through in-vitro fertilisation via caesarean section in the city of Hyderabad, India. She delivered twin baby girls, making her also the oldest mother to give birth to twins. The previous record for being the oldest living mother was held by Daljinder Kaur Gill from Amritsar, India, who gave birth to a baby boy at age 72 through in-vitro fertilisation. Age considerations Menopause typically occurs between 44 and 58 years of age. DNA testi" https://en.wikipedia.org/wiki/List%20of%20limits,"This is a list of limits for common functions such as elementary functions. In this article, the terms a, b and c are constants with respect to x. Limits for general functions Definitions of limits and related concepts if and only if This is the (ε, δ)-definition of limit. The limit superior and limit inferior of a sequence are defined as and . A function, , is said to be continuous at a point, c, if Operations on a single known limit If then: if L is not equal to 0. if n is a positive integer if n is a positive integer, and if n is even, then L > 0. In general, if g(x) is continuous at L and then Operations on two known limits If and then: Limits involving derivatives or infinitesimal changes In these limits, the infinitesimal change is often denoted or . If is differentiable at , . This is the definition of the derivative. All differentiation rules can also be reframed as rules involving limits. For example, if g(x) is differentiable at x, . This is the chain rule. . This is the product rule. 
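The two example difference equations given above make the FIR/IIR distinction easy to see in code: driven by a unit impulse, the non-recursive filter's output dies after two samples, while the recursive one decays forever without ever reaching exactly zero.

```python
# The two example filters from the text, driven by a unit impulse.
# y_fir[n] = 0.5*x[n-1] + 0.5*x[n]   (non-recursive, finite impulse response)
# y_iir[n] = 0.5*y[n-1] + 0.5*x[n]   (recursive, infinite decaying response)

N = 8
x = [1.0] + [0.0] * (N - 1)          # unit impulse

y_fir, y_iir = [], []
for n in range(N):
    x_prev = x[n - 1] if n > 0 else 0.0
    y_fir.append(0.5 * x_prev + 0.5 * x[n])
    y_prev = y_iir[n - 1] if n > 0 else 0.0
    y_iir.append(0.5 * y_prev + 0.5 * x[n])

print("FIR impulse response:", [round(v, 4) for v in y_fir])
print("IIR impulse response:", [round(v, 4) for v in y_iir])
# FIR: 0.5, 0.5, 0, 0, ...    IIR: 0.5, 0.25, 0.125, ... (halving indefinitely)
```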
If and are differentiable on an open interval containing c, except possibly c itself, and , L'Hôpital's rule can be used: Inequalities If for all x in an interval that contains c, except possibly c itself, and the limit of and both exist at c, then If and for all x in an open interval that contains c, except possibly c itself, This is known as the squeeze theorem. This applies even in the cases that f(x) and g(x) take on different values at c, or are discontinuous at c. Polynomials and functions of the form xa Polynomials in x if n is a positive integer In general, if is a polynomial then, by the continuity of polynomials, This is also true for rational functions, as they are continuous on their domains. Functions of the form xa In particular, . In particular, Exponential functions Functions of the form ag(x) , due to the continuity of Functions of the form xg(x) Functions of the form f(x)g(x) . This limit can be deriv" https://en.wikipedia.org/wiki/ClearSpeed,"ClearSpeed Technology Ltd was a semiconductor company, formed in 2002 to develop enhanced SIMD processors for use in high-performance computing and embedded systems. Based in Bristol, UK, the company has been selling its processors since 2005. Its current 192-core CSX700 processor was released in 2008, but a lack of sales has forced the company to downsize and it has since delisted from the London stock exchange. Products The CSX700 processor consists of two processing arrays, each with 96 processing elements. The processing elements each contain a 32/64-bit floating point multiplier, a 32/64-bit floating point adder, 6 KB of SRAM, an integer arithmetic logic unit, and a 16-bit integer multiply–accumulate unit. It currently sells its CSX700 processor on a PCI Express expansion card with 2 GB of memory, called the Advance e710. The card is supplied with the ClearSpeed Software Development Kit and application libraries. Related multi-core architectures include Ambric, PicoChip, Cell BE, Texas Memory Systems, and GPGPU stream processors such as AMD FireStream and Nvidia Tesla. ClearSpeed competes with AMD and Nvidia in the hardware acceleration market, where computationally intensive applications offload tasks to the accelerator. As of 2009, only the ClearSpeed e710 performs 64-bit arithmetic at its peak computational rate. History In November 2003 ClearSpeed demonstrated the CS301, with 64 processing elements running at 200 MHz, and peak 25.6 FP32 GFLOPS. In June 2005 ClearSpeed demonstrated the CSX600, with 96 processing elements running at 210 MHz, capable of 40 GFLOPS. In September 2005 John Gustafson joined ClearSpeed as CTO of high performance computing. In November 2005 ClearSpeed made its first significant sale of CSX600 processors to the Tokyo Institute of Technology using X620 Advance cards. In November 2006 ClearSpeed X620 Advance cards helped place the Tsubame cluster 7th in the TOP500 list of supercomputers. The cards continue to be used in 2009. " https://en.wikipedia.org/wiki/Girsanov%20theorem,"In probability theory, the Girsanov theorem tells how stochastic processes change under changes in measure. The theorem is especially important in the theory of financial mathematics as it tells how to convert from the physical measure, which describes the probability that an underlying instrument (such as a share price or interest rate) will take a particular value or values, to the risk-neutral measure which is a very useful tool for evaluating the value of derivatives on the underlying. 
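The limit-definition of the derivative referred to above can be illustrated numerically: the difference quotient of sin at 0 approaches cos(0) = 1, which is also the classic limit of sin(x)/x treated by the squeeze theorem (or L'Hôpital's rule). This is only a numerical illustration, not a proof.

```python
# Numerical illustration: the difference quotient of sin at 0 tends to 1,
# i.e. lim_{h->0} sin(h)/h = 1.
import math

for h in [1e-1, 1e-3, 1e-5, 1e-7]:
    quotient = (math.sin(0 + h) - math.sin(0)) / h
    print(f"h={h:.0e}  sin(h)/h = {quotient:.12f}")
# The values approach 1 as h shrinks, matching the limit.
```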
History Results of this type were first proved by Cameron-Martin in the 1940s and by Igor Girsanov in 1960. They have been subsequently extended to more general classes of process culminating in the general form of Lenglart (1977). Significance Girsanov's theorem is important in the general theory of stochastic processes since it enables the key result that if Q is a measure that is absolutely continuous with respect to P then every P-semimartingale is a Q-semimartingale. Statement of theorem We state the theorem first for the special case when the underlying stochastic process is a Wiener process. This special case is sufficient for risk-neutral pricing in the Black–Scholes model. Let be a Wiener process on the Wiener probability space . Let be a measurable process adapted to the natural filtration of the Wiener process ; we assume that the usual conditions have been satisfied. Given an adapted process define where is the stochastic exponential of X with respect to W, i.e. and denotes the quadratic variation of the process X. If is a martingale then a probability measure Q can be defined on such that Radon–Nikodym derivative Then for each t the measure Q restricted to the unaugmented sigma fields is equivalent to P restricted to Furthermore if is a local martingale under P then the process is a Q local martingale on the filtered probability space . Corollary If X is a continuous process and W is Brownian motion under measure P then is Brownian motion" https://en.wikipedia.org/wiki/List%20of%20dualities,"– Mathematics In mathematics, a duality, generally speaking, translates concepts, theorems or mathematical structures into other concepts, theorems or structures, in a one-to-one fashion, often (but not always) by means of an involution operation: if the dual of A is B, then the dual of B is A. 
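For the constant-drift special case of the statement above, the Radon–Nikodym density is the stochastic exponential Z_T = exp(−θW_T − θ²T/2), and reweighting paths by Z makes W_t + θt behave like a driftless Brownian motion under Q. The Monte Carlo sketch below checks the time-T mean; θ, T, and the sample size are assumed example values.

```python
# Monte Carlo check of Girsanov's theorem for a constant drift theta:
# under Q with dQ/dP = exp(-theta*W_T - 0.5*theta^2*T), the shifted value
# W_T + theta*T has mean zero, as a driftless Brownian motion would at time T.
import numpy as np

rng = np.random.default_rng(0)
theta, T, n_paths = 0.8, 1.0, 200_000      # assumed example values

W_T = rng.normal(0.0, np.sqrt(T), n_paths)          # Brownian motion at T under P
Z   = np.exp(-theta * W_T - 0.5 * theta**2 * T)     # Radon-Nikodym density

mean_under_P = np.mean(W_T + theta * T)             # has drift theta*T under P
mean_under_Q = np.mean(Z * (W_T + theta * T))       # importance-weighted: ~0

print(f"E_P[W_T + theta*T] = {mean_under_P:+.4f}  (about {theta * T})")
print(f"E_Q[W_T + theta*T] = {mean_under_Q:+.4f}  (about 0)")
```

This reweighting is exactly the mechanism used to pass from the physical measure to the risk-neutral measure in the Black–Scholes setting mentioned above.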
Alexander duality Alvis–Curtis duality Artin–Verdier duality Beta-dual space Coherent duality De Groot dual Dual abelian variety Dual basis in a field extension Dual bundle Dual curve Dual (category theory) Dual graph Dual group Dual object Dual pair Dual polygon Dual polyhedron Dual problem Dual representation Dual q-Hahn polynomials Dual q-Krawtchouk polynomials Dual space Dual topology Dual wavelet Duality (optimization) Duality (order theory) Duality of stereotype spaces Duality (projective geometry) Duality theory for distributive lattices Dualizing complex Dualizing sheaf Eckmann–Hilton duality Esakia duality Fenchel's duality theorem Hodge dual Jónsson–Tarski duality Lagrange duality Langlands dual Lefschetz duality Local Tate duality Opposite category Poincaré duality Twisted Poincaré duality Poitou–Tate duality Pontryagin duality S-duality (homotopy theory) Schur–Weyl duality Series-parallel duality Serre duality Spanier–Whitehead duality Stone's duality Tannaka–Krein duality Verdier duality Grothendieck local duality Philosophy and religion Dualism (philosophy of mind) Epistemological dualism Dualistic cosmology Soul dualism Yin and yang Engineering Duality (electrical circuits) Duality (mechanical engineering) Observability/Controllability in control theory Physics Complementarity (physics) Dual resonance model Duality (electricity and magnetism) Englert–Greenberger duality relation Holographic duality Kramers–Wannier duality Mirror symmetry 3D mirror symmetry Montonen–Olive duality Mysterious duality (M-theory) Seiberg duality String duality S-duality T-duality U-duality Wave–par" https://en.wikipedia.org/wiki/Outline%20of%20logic,"Logic is the formal science of using reason and is considered a branch of both philosophy and mathematics and to a lesser extent computer science. Logic investigates and classifies the structure of statements and arguments, both through the study of formal systems of inference and the study of arguments in natural language. The scope of logic can therefore be very large, ranging from core topics such as the study of fallacies and paradoxes, to specialized analyses of reasoning such as probability, correct reasoning, and arguments involving causality. One of the aims of logic is to identify the correct (or valid) and incorrect (or fallacious) inferences. Logicians study the criteria for the evaluation of arguments. 
Foundations of logic Philosophy of logic Analytic-synthetic distinction Antinomy A priori and a posteriori Definition Description Entailment Identity (philosophy) Inference Logical form Logical implication Logical truth Logical consequence Name Necessity Material conditional Meaning (linguistic) Meaning (non-linguistic) Paradox  (list) Possible world Presupposition Probability Quantification Reason Reasoning Reference Semantics Strict conditional Syntax (logic) Truth Truth value Validity Branches of logic Affine logic Alethic logic Aristotelian logic Boolean logic Buddhist logic Bunched logic Categorical logic Classical logic Computability logic Deontic logic Dependence logic Description logic Deviant logic Doxastic logic Epistemic logic First-order logic Formal logic Free logic Fuzzy logic Higher-order logic Infinitary logic Informal logic Intensional logic Intermediate logic Interpretability logic Intuitionistic logic Linear logic Many-valued logic Mathematical logic Metalogic Minimal logic Modal logic Non-Aristotelian logic Non-classical logic Noncommutative logic Non-monotonic logic Ordered logic Paraconsistent logic Philosophical logic Predicate logic Propositional logic P" https://en.wikipedia.org/wiki/List%20of%20formulas%20in%20elementary%20geometry,"This is a short list of some common mathematical shapes and figures and the formulas that describe them. Two-dimensional shapes Sources: Three-dimensional shapes This is a list of volume formulas of basic shapes: Cone – , where is the base's radius Cube – , where is the side's length; Cuboid – , where , , and are the sides' length; Cylinder – , where is the base's radius and is the cone's height; Ellipsoid – , where , , and are the semi-major and semi-minor axes' length; Sphere – , where is the radius; Parallelepiped – , where , , and are the sides' length,, and , , and are angles between the two sides; Prism – , where is the base's area and is the prism's height; Pyramid – , where is the base's area and is the pyramid's height; Tetrahedron – , where is the side's length. Sphere The basic quantities describing a sphere (meaning a 2-sphere, a 2-dimensional surface inside 3-dimensional space) will be denoted by the following variables is the radius, is the circumference (the length of any one of its great circles), is the surface area, is the volume. Surface area: Volume: Radius: Circumference: See also" https://en.wikipedia.org/wiki/Honeywell%20JetWave,"Honeywell's JetWave is a piece of satellite communications hardware produced by Honeywell that enables global in-flight internet connectivity. Its connectivity is provided using Inmarsat’s GX Aviation network. The JetWave platform is used in business and general aviation, as well as defense and commercial airline users. History In 2012, Honeywell announced it would provide Inmarsat with the hardware for its GX Ka-band in-flight connectivity network. The Ka-band (pronounced either ""kay-ay band"" or ""ka band"") is a portion of the microwave part of the electromagnetic spectrum defined as frequencies in the range 27.5 to 31 gigahertz (GHz). In satellite communications, the Ka-band allows higher bandwidth communication. In 2017, after five years and more than 180 flight hours and testing, JetWave was launched as part of GX Aviation with Lufthansa Group. Honeywell’s JetWave was the exclusive terminal hardware option for the Inmarsat GX Aviation network; however, the exclusivity clause in that contract has expired. 
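The right-hand sides of the volume formulas listed earlier in this excerpt were also lost; the standard formulas they refer to are given below. Note that for the cylinder the relevant height is the cylinder's own height, and the ellipsoid formula uses its three semi-axes.

```latex
% Volumes for the shapes listed above (r radius, h height, a,b,c side or semi-axis
% lengths, B base area, alpha/beta/gamma the angles between parallelepiped edges):
V_{\text{cone}}=\tfrac13\pi r^{2}h,\quad
V_{\text{cube}}=a^{3},\quad
V_{\text{cuboid}}=abc,\quad
V_{\text{cylinder}}=\pi r^{2}h,\quad
V_{\text{ellipsoid}}=\tfrac43\pi abc,
V_{\text{sphere}}=\tfrac43\pi r^{3},\quad
V_{\text{parallelepiped}}=abc\sqrt{1+2\cos\alpha\cos\beta\cos\gamma-\cos^{2}\alpha-\cos^{2}\beta-\cos^{2}\gamma},
V_{\text{prism}}=Bh,\quad
V_{\text{pyramid}}=\tfrac13 Bh,\quad
V_{\text{tetrahedron}}=\frac{a^{3}}{6\sqrt{2}}.
% Sphere of radius r:
A=4\pi r^{2},\qquad V=\tfrac43\pi r^{3},\qquad
r=\sqrt{\frac{A}{4\pi}}=\left(\frac{3V}{4\pi}\right)^{1/3},\qquad C=2\pi r.
```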
In July 2019, the United States Air Force selected Honeywell’s JetWave satcom system for 70 of its C-17 Globemaster III cargo planes. In December 2019, it was reported that six AirAsia aircraft had been fitted with Inmarsat’s GX Aviation Ka-band connectivity system and is slated to be implemented fleetwide across AirAsia’s Airbus A320 and A330 models in 2020, requiring installation of JetWave atop AirAsia’s fuselages. Today, Honeywell’s JetWave hardware is installed on over 1,000 aircraft worldwide. In August 2021, the Civil Aviation Administration of China approved a validation of Honeywell’s MCS-8420 JetWave satellite connectivity system for Airbus 320 aircraft. In December 2021, Honeywell, SES, and Hughes Network Systems demonstrated multi-orbit high-speed airborne connectivity for military customers using Honeywell’s JetWave MCX terminal with a Hughes HM-series modem, and SES satellites in both medium Earth orbit (MEO) and geostationary orbit (GEO). T" https://en.wikipedia.org/wiki/Spectrogram,"A spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. When applied to an audio signal, spectrograms are sometimes called sonographs, voiceprints, or voicegrams. When the data are represented in a 3D plot they may be called waterfall displays. Spectrograms are used extensively in the fields of music, linguistics, sonar, radar, speech processing, seismology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals. A spectrogram can be generated by an optical spectrometer, a bank of band-pass filters, by Fourier transform or by a wavelet transform (in which case it is also known as a scaleogram or scalogram). A spectrogram is usually depicted as a heat map, i.e., as an image with the intensity shown by varying the colour or brightness. Format A common format is a graph with two geometric dimensions: one axis represents time, and the other axis represents frequency; a third dimension indicating the amplitude of a particular frequency at a particular time is represented by the intensity or color of each point in the image. There are many variations of format: sometimes the vertical and horizontal axes are switched, so time runs up and down; sometimes as a waterfall plot where the amplitude is represented by height of a 3D surface instead of color or intensity. The frequency and amplitude axes can be either linear or logarithmic, depending on what the graph is being used for. Audio would usually be represented with a logarithmic amplitude axis (probably in decibels, or dB), and frequency would be linear to emphasize harmonic relationships, or logarithmic to emphasize musical, tonal relationships. Generation Spectrograms of light may be created directly using an optical spectrometer over time. Spectrograms may be created from a time-domain signal in one of two ways: approximated as a filterbank that results from a series of band-pass filter" https://en.wikipedia.org/wiki/List%20of%20microorganisms%20used%20in%20food%20and%20beverage%20preparation," List of Useful Microorganisms Used In preparation Of Food And Beverage See also Fermentation (food) Food microbiology" https://en.wikipedia.org/wiki/Measurement%20Studio,"NI Measurement Studio is a set of test and measurement components built by National Instruments, that integrates into the Microsoft Visual Studio environment. It includes extensive support for accessing instrumentation hardware. 
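As a concrete illustration of the short-time Fourier approach to spectrogram generation described above, here is a minimal NumPy sketch; the Hann window, window length and hop size are arbitrary choices rather than values from the text:

```python
import numpy as np

def spectrogram(signal, sample_rate, window_len=1024, hop=256):
    """Magnitude spectrogram via a short-time Fourier transform.

    Returns (times, freqs, mags) where mags[f, t] is the amplitude of
    frequency bin f in the window starting at times[t].
    """
    window = np.hanning(window_len)
    starts = range(0, len(signal) - window_len + 1, hop)
    frames = np.stack([signal[s:s + window_len] * window for s in starts])
    mags = np.abs(np.fft.rfft(frames, axis=1)).T              # freq x time
    times = np.array(list(starts)) / sample_rate              # left edge of each window
    freqs = np.fft.rfftfreq(window_len, d=1.0 / sample_rate)
    return times, freqs, mags

# Example: a 440 Hz tone lights up the frequency bin nearest 440 Hz.
fs = 8000
t = np.arange(fs) / fs
times, freqs, mags = spectrogram(np.sin(2 * np.pi * 440 * t), fs)
print(freqs[mags[:, 0].argmax()])   # 437.5 here: the FFT bin closest to 440 Hz
```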
Drivers and abstraction layers for many different types of instruments and buses are included or are available for inclusion. Measurement Studio includes a suite of analysis functions, including curve fitting, spectral analysis, fast Fourier transforms (FFT) and digital filters, and visualization. It also includes the ability to share variables and pass data over the internet with network shared variables. History Measurement Studio was introduced in February 2000 by National Instruments to combine its text-based programming tools, specifically: LabWindows/CVI, Component Works ++, Component Works. Measurement Studio 7.0 adopted support for .NET and allowed native .NET controls and classes to integrate into Visual Studio. As of Measurement Studio 8.0.1, support for Visual Studio 2005 and the .NET 2.0 framework has been included, with support for Windows Vista first adopted in version 8.1.1. The current version of Measurement Studio drops support for multiple versions of Visual Studio, including 2008, 2005, .NET 2003 and 6.0. Measurement Studio includes a variety of examples to illustrate how common GPIB, VISA, DAQmx, analysis, and DataSocket applications can be accessed. Related software National Instruments also offers a product called LabVIEW, which offers many of the test, measurement and control capabilities of Measurement Studio. National Instruments also offers LabWindows/CVI as an alternative for ANSI C programmers. See also Dataflow programming Virtual instrumentation Comparison of numerical analysis software Fourth-generation programming language" https://en.wikipedia.org/wiki/Census%20of%20Antarctic%20Marine%20Life,"The Census of Antarctic Marine Life (CAML) is a field project of the Census of Marine Life that researches the marine biodiversity of Antarctica, how it is affected by climate change, and how this change is altering the ecosystem of the Southern Ocean. The program started in 2005 as a 5-year initiative with the scientific goal of studying the evolution of life in Antarctic waters, determining how this has influenced the diversity of the present biota, and using these observations to predict how it might respond to future change. Advances in technology have since made it possible to observe the reproduction and development of this biodiversity in far greater detail, giving further insight into the characteristics that allow it to flourish in the polar environment of the Antarctic. CAML collected its data from 18 Antarctic research vessels during the International Polar Year; the data are freely accessible at the Scientific Committee on Antarctic Research Marine Biodiversity Information Network (SCAR-MarBIN). The Register of Antarctic Marine Species has 9,350 verified species (16,500 taxa) in 17 phyla, from microbes to whales. DNA barcodes are available for 1,500 species. The information from CAML is a robust baseline against which future change may be measured." https://en.wikipedia.org/wiki/Shridhar%20Chillal,"Shridhar Chillal (born 29 January 1937) is an Indian man from the city of Pune, who held the world record for the longest fingernails ever measured on a single hand, with a combined length of 909.6 centimeters (358.1 inches). Chillal's longest single nail is his thumbnail, measuring 197.8 centimeters (77.87 inches). He stopped cutting his nails in 1952. 
Although proud of his record-breaking nails, Chillal has faced increasing difficulties due to the weight of his finger nails, including disfigurement of his fingers and loss of function in his left hand. He claims that nerve damage to his left arm from the nails' immense weight has also caused deafness in his left ear. Chillal has appeared in films and television displaying his nails, such as Jackass 2.5. On 11 July 2018, Chillal had his fingernails cut with a power tool at the Ripley's Believe It or Not! museum in New York City, where the nails will be put on display. A technician wearing protective gear cut the nails during a ""nail clipping ceremony"". See also Lee Redmond, who held the record for the longest fingernails on both hands." https://en.wikipedia.org/wiki/Carbon%20nanotubes%20in%20interconnects,"In nanotechnology, carbon nanotube interconnects refer to the proposed use of carbon nanotubes in the interconnects between the elements of an integrated circuit. Carbon nanotubes (CNTs) can be thought of as single atomic layer graphite sheets rolled up to form seamless cylinders. Depending on the direction on which they are rolled, CNTs can be semiconducting or metallic. Metallic carbon nanotubes have been identified as a possible interconnect material for the future technology generations and to replace copper interconnects. Electron transport can go over long nanotube lengths, 1 μm, enabling CNTs to carry very high currents (i.e. up to a current density of 109 A∙cm−2) with essentially no heating due to nearly one dimensional electronic structure. Despite the current saturation in CNTs at high fields, the mitigation of such effects is possible due to encapsulated nanowires. Carbon nanotubes for interconnects application in Integrated chips have been studied since 2001, however the extremely attractive performances of individual tubes are difficult to reach when they are assembled in large bundles necessary to make real via or lines in integrated chips. Two proposed approaches to overcome the to date limitations are either to make very tiny local connections that will be needed in future advanced chips or to make carbon metal composite structure that will be compatible with existing microelectronic processes. Hybrid interconnects that employ CNT vias in tandem with copper interconnects may offer advantages in reliability and thermal-management. In 2016, the European Union has funded a four million euro project over three years to evaluate manufacturability and performance of composite interconnects employing both CNT and copper interconnects. The project named CONNECT (CarbON Nanotube compositE InterconneCTs) involves the joint efforts of seven European research and industry partners on fabrication techniques and processes to enable reliable carbon nanotubes for " https://en.wikipedia.org/wiki/Additive%20combinatorics,"Additive combinatorics is an area of combinatorics in mathematics. One major area of study in additive combinatorics are inverse problems: given the size of the sumset A + B is small, what can we say about the structures of and ? In the case of the integers, the classical Freiman's theorem provides a partial answer to this question in terms of multi-dimensional arithmetic progressions. Another typical problem is to find a lower bound for in terms of and . 
This can be viewed as an inverse problem with the given information that is sufficiently small and the structural conclusion is then of the form that either or is the empty set; however, in literature, such problems are sometimes considered to be direct problems as well. Examples of this type include the Erdős–Heilbronn Conjecture (for a restricted sumset) and the Cauchy–Davenport Theorem. The methods used for tackling such questions often come from many different fields of mathematics, including combinatorics, ergodic theory, analysis, graph theory, group theory, and linear algebraic and polynomial methods. History of additive combinatorics Although additive combinatorics is a fairly new branch of combinatorics (in fact the term additive combinatorics was coined by Terence Tao and Van H. Vu in their book in 2000's), an extremely old problem Cauchy–Davenport theorem is one of the most fundamental results in this field. Cauchy–Davenport theorem Suppose that A and B are finite subsets of the cyclic group for a prime , then the following inequality holds. Vosper's theorem Now we have the inequality for the cardinality of the sum set , it is natural to ask the inverse problem, namely under what conditions on and does the equality hold? Vosper's theorem answers this question. Suppose that (that is, barring edge cases) and then and are arithmetic progressions with the same difference. This illustrates the structures that are often studied in additive combinatorics: the combinatorial structure of a" https://en.wikipedia.org/wiki/Barcode%20reader,"A barcode reader or barcode scanner is an optical scanner that can read printed barcodes, decode the data contained in the barcode to a computer. Like a flatbed scanner, it consists of a light source, a lens and a light sensor for translating optical impulses into electrical signals. Additionally, nearly all barcode readers contain decoder circuitry that can analyse the barcode's image data provided by the sensor and send the barcode's content to the scanner's output port. Types of barcode scanners Technology Barcode readers can be differentiated by technologies as follows: Pen-type readers Pen-type readers consist of a light source and photodiode that are placed next to each other in the tip of a pen. To read a barcode, the person holding the pen must move the tip of it across the bars at a relatively uniform speed. The photodiode measures the intensity of the light reflected back from the light source as the tip crosses each bar and space in the printed code. The photodiode generates a waveform that is used to measure the widths of the bars and spaces in the barcode. Dark bars in the barcode absorb light and white spaces reflect light so that the voltage waveform generated by the photodiode is a representation of the bar and space pattern in the barcode. This waveform is decoded by the scanner in a manner similar to the way Morse code dots and dashes are decoded. Laser scanners Laser scanners direct the laser beam back and forth across the barcode. As with the pen-type reader, a photo-diode is used to measure the intensity of the light reflected back from the barcode. In both pen readers and laser scanners, the light emitted by the reader is rapidly varied in brightness with a data pattern and the photo-diode receive circuitry is designed to detect only signals with the same modulated pattern. 
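The inequalities referred to in the Cauchy–Davenport and Vosper statements above were lost in extraction; writing A + B = {a + b : a in A, b in B}, the usual statements are:

```latex
% Cauchy--Davenport: for nonempty A,B\subseteq\mathbb{Z}/p\mathbb{Z}, p prime,
|A+B|\;\ge\;\min\{\,p,\;|A|+|B|-1\,\}.
% Vosper (non-degenerate case): if |A|,|B|\ge 2 and |A+B|=|A|+|B|-1\le p-2,
% then A and B are arithmetic progressions with the same common difference.
```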
CCD readers (also known as LED scanners) Charge-coupled device (CCD) readers use an array of hundreds of tiny light sensors lined up in a row in the head of the r" https://en.wikipedia.org/wiki/Generic%20Array%20Logic,"The Generic Array Logic (also known as GAL and sometimes as gate array logic) device was an innovation of the PAL and was invented by Lattice Semiconductor. The GAL was an improvement on the PAL because one device type was able to take the place of many PAL device types or could even have functionality not covered by the original range of PAL devices. Its primary benefit, however, was that it was erasable and re-programmable, making prototyping and design changes easier for engineers. A similar device called a PEEL (programmable electrically erasable logic) was introduced by the International CMOS Technology (ICT) corporation. See also Programmable logic device (PLD) Complex programmable logic device (CPLD) Erasable programmable logic device (EPLD) GAL22V10" https://en.wikipedia.org/wiki/Modified%20Wigner%20distribution%20function,"Note: the Wigner distribution function is abbreviated here as WD rather than WDF as used at Wigner distribution function A Modified Wigner distribution function is a variation of the Wigner distribution function (WD) with reduced or removed cross-terms. The Wigner distribution (WD) was first proposed for corrections to classical statistical mechanics in 1932 by Eugene Wigner. The Wigner distribution function, or Wigner–Ville distribution (WVD) for analytic signals, also has applications in time–frequency analysis. The Wigner distribution gives better auto-term localisation than the smeared-out spectrogram (SP). However, when applied to a signal with multiple frequency components, cross terms appear due to its quadratic nature. Several methods have been proposed to reduce the cross terms. For example, in 1994 L. Stankovic proposed a novel technique, now mostly referred to as the S-method, resulting in the reduction or removal of cross terms. The concept of the S-method is a combination of the spectrogram and the pseudo Wigner distribution (PWD), the windowed version of the WD. The original WD, the spectrogram, and the modified WDs all belong to Cohen's class of bilinear time–frequency representations: where is Cohen's kernel function, which is often a low-pass function, and normally serves to mask out the interference in the original Wigner representation. Mathematical definition Wigner distribution Cohen's kernel function : Spectrogram where is the short-time Fourier transform of . Cohen's kernel function : which is the WD of the window function itself. This can be verified by applying the convolution property of the Wigner distribution function. The spectrogram cannot produce interference since it is a positive-valued quadratic distribution. Modified form I This form cannot solve the cross-term problem, but it can handle two components whose time difference is larger than the window size B. Modified form II Modified form III (Pseudo L-Wign" https://en.wikipedia.org/wiki/VyOS,"VyOS is an open source network operating system based on Debian. VyOS provides a free routing platform that competes directly with other commercially available solutions from well-known network providers. Because VyOS is run on standard amd64 systems, it can be used as a router and firewall platform for cloud deployments. 
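Returning to the barcode readers described earlier: the pen-type reader's conversion of a photodiode waveform into bar and space widths can be sketched as a simple run-length pass. The sample waveform and threshold below are made up for illustration.

```python
def bar_space_widths(samples, threshold=0.5):
    """Convert a reflectance waveform into alternating bar/space widths.

    High reflectance (above threshold) is a white space, low is a dark bar,
    mirroring the photodiode behaviour described above. Returns a list of
    (is_bar, width_in_samples) run-length pairs for the decoder to interpret.
    """
    runs = []
    current_is_bar = samples[0] < threshold
    width = 0
    for s in samples:
        is_bar = s < threshold
        if is_bar == current_is_bar:
            width += 1
        else:
            runs.append((current_is_bar, width))
            current_is_bar, width = is_bar, 1
    runs.append((current_is_bar, width))
    return runs

# Hypothetical waveform: wide space, narrow bar, narrow space, wide bar.
waveform = [0.9] * 6 + [0.1] * 2 + [0.8] * 2 + [0.2] * 4
print(bar_space_widths(waveform))
# [(False, 6), (True, 2), (False, 2), (True, 4)]
```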
History After Brocade Communications stopped development of the Vyatta Core Edition of the Vyatta Routing software, a small group of enthusiasts in 2013 took the last Community Edition, and worked on building an Open Source fork to live on in place of the end of life VC. Features BGP (IPv4 and IPv6), OSPF (v2 and v3), RIP and RIPng, policy-based routing. IPsec, VTI, VXLAN, L2TPv3, L2TP/IPsec and PPTP servers, tunnel interfaces (GRE, IPIP, SIT), OpenVPN in client, server, or site-to-site modes, WireGuard. Stateful firewall, zone-based firewall, all types of source and destination NAT (one to one, one to many, many to many). DHCP and DHCPv6 server and relay, IPv6 RA, DNS forwarding, TFTP server, web proxy, PPPoE access concentrator, NetFlow/sFlow sensor, QoS. VRRP for IPv4 and IPv6, ability to execute custom health checks and transition scripts; ECMP, stateful load balancing. Built-in versioning. Releases VyOS version 1.0.0 (Hydrogen) was released on December 22, 2013. On October 9, 2014, version 1.1.0 (Helium) was released. All versions released thus far have been based on Debian 6.0 (Squeeze), and are available as a 32-bit images and 64-bit images for both physical and virtual machines. On January 28, 2019, version 1.2.0 (Crux) was released. Version 1.2.0 is based on Debian 8 (Jessie). While version 1.0 and 1.1 were named after elements, a new naming scheme based on constellations is used from version 1.2. Release History VMware Support The VyOS OVA image for VMware was released with the February 3, 2014 maintenance release. It allows a convenient setup of VyOS on a VMware platform and includes all of the VMware tools and paravirtual " https://en.wikipedia.org/wiki/Lobachevsky%20%28song%29,"""Lobachevsky"" is a humorous song by Tom Lehrer, referring to the mathematician Nikolai Ivanovich Lobachevsky. According to Lehrer, the song is ""not intended as a slur on [Lobachevsky's] character"" and the name was chosen ""solely for prosodic reasons"". In the introduction, Lehrer describes the song as an adaptation of a routine that Danny Kaye did to honor the Russian actor Constantin Stanislavski. (The Danny Kaye routine is sung from the perspective of a famous Russian actor who learns and applies Stanislavski's secret to method acting: ""Suffer."") Lehrer sings the song from the point of view of an eminent Russian mathematician who learns from Lobachevsky that plagiarism is the secret of success in mathematics (""only be sure always to call it please 'research'""). The narrator later uses this strategy to get a paper published ahead of a rival, then to write a book and earn a fortune selling the movie rights. Lehrer wrote that he did not know Russian. In the song he quotes two ""book reviews"" in Russian; the first is a long sentence that he then translates succinctly as ""It stinks"". The second, a different but equally long sentence, is also translated as ""It stinks."" The actual text of these sentences bear no relation to academics: the first phrase quotes Mussorgsky's ""Song of the Flea"": The second references a Russian joke: [the bathroom]. The song was first performed as part of The Physical Revue, a 1951–1952 musical revue by Lehrer and a few other professors. It is track 6 on Songs by Tom Lehrer, which was re-released as part of Songs & More Songs by Tom Lehrer and The Remains of Tom Lehrer. In this early version, Ingrid Bergman is named to star in the role of ""the Hypotenuse"" in The Eternal Triangle, a film purportedly based on the narrator's book. 
It was recorded again for Revisited (Tom Lehrer album), with Brigitte Bardot as the Hypotenuse. A third recording is included in Tom Lehrer Discovers Australia (And Vice Versa), a live album recorded in Australia, f" https://en.wikipedia.org/wiki/DOPIPE,"DOPIPE parallelism is a method to perform loop-level parallelism by pipelining the statements in a loop. Pipelined parallelism may exist at different levels of abstraction like loops, functions and algorithmic stages. The extent of parallelism depends upon the programmers' ability to make best use of this concept. It also depends upon factors like identifying and separating the independent tasks and executing them parallelly. Background The main purpose of employing loop-level parallelism is to search and split sequential tasks of a program and convert them into parallel tasks without any prior information about the algorithm. Parts of data that are recurring and consume significant amount of execution time are good candidates for loop-level parallelism. Some common applications of loop-level parallelism are found in mathematical analysis that uses multiple-dimension matrices which are iterated in nested loops. There are different kind of parallelization techniques which are used on the basis of data storage overhead, degree of parallelization and data dependencies. Some of the known techniques are: DOALL, DOACROSS and DOPIPE. DOALL: This technique is used where we can parallelize each iteration of the loop without any interaction between the iterations. Hence, the overall run-time gets reduced from N * T (for a serial processor, where T is the execution time for each iteration) to only T (since all the N iterations are executed in parallel). DOACROSS: This technique is used wherever there is a possibility for data dependencies. Hence, we parallelize tasks in such a manner that all the data independent tasks are executed in parallel, but the dependent ones are executed sequentially. There is a degree of synchronization used to sync the dependent tasks across parallel processors. Description DOPIPE is a pipelined parallelization technique that is used in programs where each element produced during each iteration is consumed in the later iteration. The followin" https://en.wikipedia.org/wiki/Shriek%20map,"In category theory, a branch of mathematics, certain unusual functors are denoted and with the exclamation mark used to indicate that they are exceptional in some way. They are thus accordingly sometimes called shriek maps, with ""shriek"" being slang for an exclamation mark, though other terms are used, depending on context. Usage Shriek notation is used in two senses: To distinguish a functor from a more usual functor or accordingly as it is covariant or contravariant. To indicate a map that goes ""the wrong way"" – a functor that has the same objects as a more familiar functor, but behaves differently on maps and has the opposite variance. For example, it has a pull-back where one expects a push-forward. Examples In algebraic geometry, these arise in image functors for sheaves, particularly Verdier duality, where is a ""less usual"" functor. In algebraic topology, these arise particularly in fiber bundles, where they yield maps that have the opposite of the usual variance. They are thus called wrong way maps, Gysin maps, as they originated in the Gysin sequence, or transfer maps. 
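The DOALL/DOACROSS/DOPIPE comparison above can be made concrete with a small sketch. Here one thread runs the producing statement of every iteration and a second thread runs the consuming statement, with a queue carrying each iteration's value forward; the stage bodies are invented for illustration, and in CPython the GIL limits real speedup, so the sketch shows the structure rather than the performance.

```python
import threading
import queue

def dopipe(n_iterations):
    """DOPIPE-style pipelining of a loop whose body has two stages:
    stage 1 produces a value per iteration; stage 2 consumes it."""
    channel = queue.Queue()
    results = []

    def stage1():                      # runs statement 1 of every iteration
        for i in range(n_iterations):
            channel.put(i * i)         # "produce" this iteration's value
        channel.put(None)              # sentinel: no more iterations

    def stage2():                      # runs statement 2 of every iteration
        while True:
            value = channel.get()
            if value is None:
                break
            results.append(value + 1)  # "consume" the value stage 1 produced

    t1, t2 = threading.Thread(target=stage1), threading.Thread(target=stage2)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

print(dopipe(5))   # [1, 2, 5, 10, 17]
```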
A fiber bundle with base space B, fiber F, and total space E, has, like any other continuous map of topological spaces, a covariant map on homology and a contravariant map on cohomology However, it also has a covariant map on cohomology, corresponding in de Rham cohomology to ""integration along the fiber"", and a contravariant map on homology, corresponding in de Rham cohomology to ""pointwise product with the fiber"". The composition of the ""wrong way"" map with the usual map gives a map from the homology of the base to itself, analogous to a unit/counit of an adjunction; compare also Galois connection. These can be used in understanding and proving the product property for the Euler characteristic of a fiber bundle. Notes Mathematical notation Algebraic geometry Algebraic topology" https://en.wikipedia.org/wiki/MAVLink,"MAVLink or Micro Air Vehicle Link is a protocol for communicating with small unmanned vehicle. It is designed as a header-only message marshaling library. MAVLink was first released early 2009 by Lorenz Meier under the LGPL license. Applications It is used mostly for communication between a Ground Control Station (GCS) and Unmanned vehicles, and in the inter-communication of the subsystem of the vehicle. It can be used to transmit the orientation of the vehicle, its GPS location and speed. Packet Structure In version 1.0 the packet structure is the following: After Version 2, the packet structure was expanded into the following: CRC field To ensure message integrity a cyclic redundancy check (CRC) is calculated to every message into the last two bytes. Another function of the CRC field is to ensure the sender and receiver both agree in the message that is being transferred. It is computed using an ITU X.25/SAE AS-4 hash of the bytes in the packet, excluding the Start-of-Frame indicator (so 6+n+1 bytes are evaluated, the extra +1 is the seed value). Additionally a seed value is appended to the end of the data when computing the CRC. The seed is generated with every new message set of the protocol, and it is hashed in a similar way as the packets from each message specifications. Systems using the MAVLink protocol can use a precomputed array to this purpose. The CRC algorithm of MAVLink has been implemented in many languages, like Python and Java. Messages The payload from the packets described above are MAVLink messages. Every message is identifiable by the ID field on the packet, and the payload contains the data from the message. An XML document in the MAVlink source has the definition of the data stored in this payload. Below is the message with ID 24 extracted from the XML document. The global position, as returned by the Global Positioning System (GPS). This is NOT the global position estimate of" https://en.wikipedia.org/wiki/Cyphal,"Cyphal is a lightweight protocol designed for reliable intra-vehicle communications using various communications transports, originally destined for CAN bus, but targeting various network types in subsequent revisions. OpenCyphal is an open-source project that aims to provide MIT-licensed implementations of the Cyphal protocol. The project was known as UAVCAN (Uncomplicated Application-level Vehicular Computing and Networking) prior to rebranding in March 2022. History The first RFC broadly outlining the general ideas that would later form the core design principles of Cyphal (branded UAVCAN at the time) was published in early 2014. 
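The MAVLink CRC description above (an ITU X.25 hash of the packet bytes after the start-of-frame indicator, with a per-message seed byte folded in) corresponds to the widely used X.25 CRC-16 accumulation; a sketch under that assumption, with made-up packet bytes and seed value:

```python
def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/X.25 accumulation as commonly used by MAVLink implementations."""
    for byte in data:
        tmp = byte ^ (crc & 0xFF)
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc

# Per the text: hash the packet bytes following the start-of-frame indicator,
# then fold in the per-message seed byte before emitting the two CRC bytes.
packet_after_sof = bytes([0x09, 0xAB, 0x00, 0x01, 0x01, 0x00])  # hypothetical header+payload
crc_extra = 0x32                                                 # hypothetical seed byte
crc = x25_crc(packet_after_sof)
crc = x25_crc(bytes([crc_extra]), crc)
print(hex(crc))
```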
It was a response to the perceived lack of adequate technology that could facilitate robust real-time intra-vehicular data exchange between distributed components of modern intelligent vehicles (primarily unmanned aircraft). Since the original RFC, the protocol has been through three major design iterations, which culminated in the release of the first long-term stable revision in 2020 (6 years later) labelled UAVCAN v1.0. In the meantime, the protocol has been deployed in numerous diverse systems including unmanned aerial vehicles, spacecraft, underwater robots, racing cars, general robotic systems, and micromobility vehicles. In 2022, the protocol was rebranded as Cyphal. Cyphal is positioned by its developers as a highly deterministic, safety-oriented alternative to high-level publish-subscribe frameworks such as DDS or the computation graph of ROS, which is sufficiently compact and simple to be usable in deeply embedded high-integrity applications. Cyphal has been shown to be usable with bare metal microcontrollers equipped with as little as 32K ROM and 8K RAM. The protocol is open and can be reused freely without approval or licensing fees. The development of the core standard and its reference implementations is conducted in an open manner, coordinated via the public discussion forum. As of 2020, the project is supported by sev" https://en.wikipedia.org/wiki/Fibre%20Channel%20frame,"In computer networking, a Fibre Channel frame is the frame of the Fibre Channel protocol. The basic building blocks of an FC connection are the frames. They contain the information to be transmitted (payload), the address of the source and destination ports and link control information. Frames are broadly categorized as Data frames Link_control frames Data frames may be used as Link_Data frames and Device_Data frames, link control frames are classified as Acknowledge (ACK) and Link_Response (Busy and Reject) frames. The primary function of the Fabric is to receive the frames from the source port and route them to the destination port. It is the FC-2 layer's responsibility to break the data to be transmitted into frame size, and reassemble the frames. Each frame begins and ends with a frame delimiter. The frame header immediately follows the Start of Frame (SOF) delimiter. The frame header is used to control link applications, control device protocol transfers, and detect missing or out of order frames. Optional headers may contain further link control information. A maximum 2048 byte long field (payload) contains the information to be transferred from a source N_Port to a destination N_Port. The 4 byte Cyclic Redundancy Check (CRC) precedes the End of Frame (EOF) delimiter. The CRC is used to detect transmission errors. The maximum total frame length is 2148 bytes. Between successive frames a sequence of (at least) six primitives must be transmitted, sometimes called interframe gap." https://en.wikipedia.org/wiki/N-topological%20space,"In mathematics, an N-topological space is a set equipped with N arbitrary topologies. If τ1, τ2, ..., τN are N topologies defined on a nonempty set X, then the N-topological space is denoted by (X,τ1,τ2,...,τN). For N = 1, the structure is simply a topological space. For N = 2, the structure becomes a bitopological space introduced by J. C. Kelly. Example Let X = {x1, x2, ...., xn} be any finite set. Suppose Ar = {x1, x2, ..., xr}. Then the collection τ1 = {φ, A1, A2, ..., An = X} will be a topology on X. 
If τ1, τ2, ..., τm are m such topologies (chain topologies) defined on X, then the structure (X, τ1, τ2, ..., τm) is an m-topological space." https://en.wikipedia.org/wiki/Network%20search%20engine,"Computer networks are connected together to form larger networks such as campus networks, corporate networks, or the Internet. Routers are network devices that may be used to connect these networks (e.g., a home network connected to the network of an Internet service provider). When a router interconnects many networks or handles much network traffic, it may become a bottleneck and cause network congestion (i.e., traffic loss). A number of techniques have been developed to prevent such problems. One of them is the network search engine (NSE), also known as a network search element. This special-purpose device helps a router perform one of its core and repeated functions very fast: address lookup. Besides routing, NSE-based address lookup is also used to keep track of network service usage for billing purposes, or to look up patterns of information in the data passing through the network for security reasons. Network search engines are often available as ASIC chips to be interfaced with the network processor of the router. Content-addressable memory and tries are two techniques commonly used when implementing NSEs." https://en.wikipedia.org/wiki/Service%20account,"A service account or application account is a digital identity used by application software or a service to interact with other applications or the operating system. They are often used for machine-to-machine communication (M2M), for example for application programming interfaces (APIs). The service account may be a privileged identity within the context of the application. Updating passwords Local service accounts can interact with various components of the operating system, which makes coordination of password changes difficult. In practice this causes passwords for service accounts to rarely be changed, which poses a considerable security risk for an organization. Some types of service accounts do not have a password. Wide access Service accounts are often used by applications for access to databases, running batch jobs or scripts, or for accessing other applications. Such privileged identities often have extensive access to an organization's underlying data stores residing in applications or databases. Passwords for such accounts are often stored in plain text files, a vulnerability that may be replicated across several servers to provide fault tolerance for applications. This vulnerability poses a significant risk for an organization since the application often hosts the type of data that is of interest to advanced persistent threats. Service accounts are non-personal digital identities and can be shared. 
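The network-search-engine passage above names the trie as a common lookup structure; a minimal sketch of a binary trie performing the router's core operation, longest-prefix match on IPv4 routes, follows (the routes and next hops are invented):

```python
class PrefixTrie:
    """Binary trie for IPv4 longest-prefix-match lookups."""
    def __init__(self):
        self.root = {}

    @staticmethod
    def _bits(addr: str, length: int = 32):
        value = 0
        for part in addr.split('.'):
            value = (value << 8) | int(part)
        return [(value >> (31 - i)) & 1 for i in range(length)]

    def insert(self, prefix: str, next_hop: str):
        addr, length = prefix.split('/')
        node = self.root
        for bit in self._bits(addr, int(length)):
            node = node.setdefault(bit, {})
        node['next_hop'] = next_hop

    def lookup(self, addr: str):
        node, best = self.root, None
        for bit in self._bits(addr):
            if 'next_hop' in node:
                best = node['next_hop']      # remember the longest match so far
            if bit not in node:
                break
            node = node[bit]
        else:
            best = node.get('next_hop', best)
        return best

routes = PrefixTrie()
routes.insert('10.0.0.0/8', 'if0')           # hypothetical routes
routes.insert('10.1.0.0/16', 'if1')
print(routes.lookup('10.1.2.3'))             # 'if1': the longer matching prefix wins
print(routes.lookup('10.9.9.9'))             # 'if0'
```

A content-addressable memory, the other technique named above, performs the same match in hardware by comparing the address against all stored prefixes at once.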
Misuse Google Cloud lists several possibilities for misuse of service accounts: Privilege escalation: Someone impersonates the service account Spoofing: Someone impersonates the service account to hide their identity Non-repudiation: Performing actions on their behalf with a service account in cases where it is not possible to trace the actions of the abuser Information disclosure: Unauthorized persons extract information about infrastructure, applications or processes See also Kerberos Service Account, a service account in Ker" https://en.wikipedia.org/wiki/Degranulation,"Degranulation is a cellular process that releases antimicrobial cytotoxic or other molecules from secretory vesicles called granules found inside some cells. It is used by several different cells involved in the immune system, including granulocytes (neutrophils, basophils, and eosinophils) and mast cells. It is also used by certain lymphocytes such as natural killer (NK) cells and cytotoxic T cells, whose main purpose is to destroy invading microorganisms. Mast cells Degranulation in mast cells is part of an inflammatory response, and substances such as histamine are released. Granules from mast cells mediate processes such as ""vasodilation, vascular homeostasis, innate and adaptive immune responses, angiogenesis, and venom detoxification."" Antigens interact with IgE molecules already bound to high affinity Fc receptors on the surface of mast cells to induce degranulation, via the activation of tyrosine kinases within the cell. The mast cell releases a mixture of compounds, including histamine, proteoglycans, serotonin, and serine proteases from its cytoplasmic granules. Eosinophils In a similar mechanism, activated eosinophils release preformed mediators such as major basic protein, and enzymes such as peroxidase, following interaction between their Fc receptors and IgE molecules that are bound to large parasites like helminths. Neutrophils Degranulation in neutrophils can occur in response to infection, and the resulting granules are released in order to protect against tissue damage. Excessive degranulation of neutrophils, sometimes triggered by bacteria, is associated with certain inflammatory disorders, such as asthma and septic shock. Four kinds of granules exist in neutrophils that display differences in content and regulation. Secretory vesicles are the most likely to release their contents by degranulation, followed by gelatinase granules, specific granules, and azurophil granules. Cytotoxic T cells and NK cells Cytotoxic T cells and NK cells rele" https://en.wikipedia.org/wiki/Atomic%20and%20molecular%20astrophysics,"Atomic astrophysics is concerned with performing atomic physics calculations that will be useful to astronomers and using atomic data to interpret astronomical observations. Atomic physics plays a key role in astrophysics as astronomers' only information about a particular object comes through the light that it emits, and this light arises through atomic transitions. Molecular astrophysics, developed into a rigorous field of investigation by theoretical astrochemist Alexander Dalgarno beginning in 1967, concerns the study of emission from molecules in space. There are 110 currently known interstellar molecules. These molecules have large numbers of observable transitions. Lines may also be observed in absorption—for example the highly redshifted lines seen against the gravitationally lensed quasar PKS1830-211. 
High energy radiation, such as ultraviolet light, can break the molecular bonds which hold atoms in molecules. In general then, molecules are found in cool astrophysical environments. The most massive objects in our galaxy are giant clouds of molecules and dust known as giant molecular clouds. In these clouds, and smaller versions of them, stars and planets are formed. One of the primary fields of study of molecular astrophysics is star and planet formation. Molecules may be found in many environments, however, from stellar atmospheres to those of planetary satellites. Most of these locations are relatively cool, and molecular emission is most easily studied via photons emitted when the molecules make transitions between low rotational energy states. One molecule, composed of the abundant carbon and oxygen atoms, and very stable against dissociation into atoms, is carbon monoxide (CO). The wavelength of the photon emitted when the CO molecule falls from its lowest excited state to its zero energy, or ground, state is 2.6mm, or 115 gigahertz. This frequency is a thousand times higher than typical FM radio frequencies. At these high frequencies, molecules in th" https://en.wikipedia.org/wiki/Trillium%20Digital%20Systems,"Trillium Digital Systems, Inc. developed and licensed standards-based communications source code software to telecommunications equipment manufacturers for the wireless, broadband, Internet and telephone network infrastructure. Trillium was an early company to license source code. The Trillium Digital Systems business entity no longer exists, but the Trillium communications software is still developed and licensed. Trillium software is used in the network infrastructure as well as associated service platforms, clients and devices. Company history Trillium Trillium was founded in February 1988 in Los Angeles, California. The co-founders were Jeff Lawrence and Larisa Chistyakov. Giorgio Propersi joined in September 1989. The initial capitalization of Trillium when it was incorporated was $1,000. The name Trillium came about because of a mistake. Jeff and Larisa asked for company name suggestions from family and friends. Someone suggested a character named Trillian from the book Hitchhiker's Guide to the Galaxy by Douglas Adams. They thought the suggestion was supposed to be trillium, a flower in the lily family. They liked the sound and symbolism of the name Trillium so they used it. Trillium was started as a consulting company. Its first consulting jobs were to develop communications software for bisynchronous, asynchronous and multiprotocol PAD products. Consulting continued through the end of 1990. While consulting, the co-founders decided there was an opportunity to develop and license portable source code software for communications protocols. Towards the end of 1990 Trillium became focused on developing its own products. Source code is a symbolic language (e.g., the C programming language) which is run through a compiler to generate binary code which can run on a particular microprocessor. Communications systems have a variety of hardware and software architectures, use a variety of microprocessors and use a variety of software development environments. It " https://en.wikipedia.org/wiki/The%20Bridges%20Organization,"The Bridges Organization is an organization that was founded in Kansas, United States, in 1998 with the goal of promoting interdisciplinary work in mathematics and art. 
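The carbon monoxide figures quoted above are consistent with the basic wave relation; as a quick check:

```latex
\lambda=\frac{c}{\nu}
=\frac{3.00\times 10^{8}\ \mathrm{m\,s^{-1}}}{115\times 10^{9}\ \mathrm{Hz}}
\approx 2.6\times 10^{-3}\ \mathrm{m}=2.6\ \mathrm{mm}.
```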
The Bridges Conference is an annual conference on connections between art and mathematics. The conference features papers, educational workshops, an art exhibition, a mathematical poetry reading, and a short movie festival. List of Bridges conferences" https://en.wikipedia.org/wiki/Bauer%20maximum%20principle,"Bauer's maximum principle is the following theorem in mathematical optimization: Any function that is convex and continuous, and defined on a set that is convex and compact, attains its maximum at some extreme point of that set. It is attributed to the German mathematician Heinz Bauer. Bauer's maximum principle immediately implies the analogue minimum principle: Any function that is concave and continuous, and defined on a set that is convex and compact, attains its minimum at some extreme point of that set. Since a linear function is simultaneously convex and concave, it satisfies both principles, i.e., it attains both its maximum and its minimum at extreme points. Bauer's maximization principle has applications in various fields, for example, differential equations and economics." https://en.wikipedia.org/wiki/Alpha%20strike%20%28engineering%29,"Alpha strike is a term referring to the event when an alpha particle, a composite charged particle composed of two protons and two neutrons, enters a computer and modifies the data or operation of a component in the computer. Alpha strikes can disturb the silicon substrate of the transistors in a computer through their electronic stopping power, causing the transistor to flip states if the charge imparted by the strike crosses a critical threshold (QCrit). This, in turn, can corrupt the information stored by that transistor and create a cascading effect on the operation of the component that encases it. History The first widely recognized radiation-generated error in a computer was the appearance of random errors in the Intel 4k 2107 DRAM in the late 1970s. This problem was investigated by Timothy C. Mays and Murray H. Woods, who (in 1979) reported that the errors were caused by alpha decay from trace amounts of uranium and thorium induced in the seminal paper surrounding the chip. Since then, there have been multiple incidents of computer errors due to radiation, including error reports from computers onboard spacecraft, corrupted data from voting machines, and crashes on computers onboard aircraft. According to a study from Hughes Aircraft Company, anomalies in satellite communication attributed to galactic cosmic radiation is on the order of (3.1×10−3) transistors per year. This rate is an estimate of the number of noticeable cascading errors in communication between satellites per satellite. Modern Impact Alpha strikes are limiting the computing capabilities of computers onboard high-altitude vehicles as the energy an alpha particle imparts on the transistors of a computer is far more consequential for smaller transistors. As a result, computers with smaller transistors and higher computing capability are more prone to errors and crashes than computers with larger transistors. 
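In symbols, Bauer's maximum principle as stated above (writing ext(K) for the set of extreme points of K) reads:

```latex
% K convex and compact, f : K -> R convex and continuous:
\max_{x\in K} f(x)\;=\;\max_{x\in\operatorname{ext}(K)} f(x),
% and, dually, a concave continuous f attains its minimum at some point of ext(K).
```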
One potential solution for optimizing the performance of computers onboard sp" https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Bacon%20number,"A person's Erdős–Bacon number is the sum of one's Erdős number—which measures the ""collaborative distance"" in authoring academic papers between that person and Hungarian mathematician Paul Erdős—and one's Bacon number—which represents the number of links, through roles in films, by which the person is separated from American actor Kevin Bacon. The lower the number, the closer a person is to Erdős and Bacon, which reflects a small world phenomenon in academia and entertainment. To have a defined Erdős–Bacon number, it is necessary to have both appeared in a film and co-authored an academic paper, although this in and of itself is not sufficient as one's co-authors must have a known chain leading to Paul Erdős, and one's film must have actors eventually leading to Kevin Bacon. Academic scientists Mathematician Daniel Kleitman has an Erdős–Bacon number of 3. He co-authored papers with Erdős and has a Bacon number of 2 via Minnie Driver in Good Will Hunting; Driver and Bacon appeared together in Sleepers. Like Kleitman, mathematician Bruce Reznick has co-authored a paper with Erdős and has a Bacon number of 2, via Roddy McDowall in the film Pretty Maids All in a Row, giving him an Erdős–Bacon number of 3 as well. Physicist Nicholas Metropolis has an Erdős number of 2, and also a Bacon number of 2, giving him an Erdős–Bacon number of 4. Metropolis and Richard Feynman both worked on the Manhattan Project at Los Alamos Laboratory. Via Metropolis, Feynman has an Erdős number of 3 and, from having appeared in the film Anti-Clock alongside Tony Tang, Feynman also has a Bacon number of 3. Richard Feynman thus has an Erdős–Bacon number of 6. Theoretical physicist Stephen Hawking has an Erdős–Bacon number of 6: his Bacon number of 2 (via his appearance alongside John Cleese in Monty Python Live (Mostly), who acted alongside Kevin Bacon in The Big Picture) is lower than his Erdős number of 4. Similarly to Stephen Hawking, scientist Carl Sagan has an Erdős–Bacon number of 6" https://en.wikipedia.org/wiki/Wavelet,"A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed a ""brief oscillation"". A taxonomy of wavelets has been established, based on the number and direction of its pulses. Wavelets are imbued with specific properties that make them useful for signal processing. For example, a wavelet could be created to have a frequency of Middle C and a short duration of roughly one tenth of a second. If this wavelet were to be convolved with a signal created from the recording of a melody, then the resulting signal would be useful for determining when the Middle C note appeared in the song. Mathematically, a wavelet correlates with a signal if a portion of the signal is similar. Correlation is at the core of many practical wavelet applications. As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including but not limited to audio signals and images. Sets of wavelets are needed to analyze data fully. ""Complementary"" wavelets decompose a signal without gaps or overlaps so that the decomposition process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms, where it is desirable to recover the original information with minimal loss. 
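Since an Erdős–Bacon number is the sum of two shortest-path distances in collaboration graphs, the computation can be sketched with breadth-first search; the toy graphs below encode only the Kleitman example from the text, not real collaboration data.

```python
from collections import deque

def distance(graph, start, goal):
    """Shortest path length (number of edges) between two nodes via BFS."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == goal:
            return dist
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, dist + 1))
    return None   # no known chain

# Toy data for Kleitman's case from the text: Erdős number 1,
# Bacon number 2 via Minnie Driver, so Erdős–Bacon number 3.
papers = {'Kleitman': {'Erdős'}, 'Erdős': {'Kleitman'}}
films = {'Kleitman': {'Driver'}, 'Driver': {'Kleitman', 'Bacon'}, 'Bacon': {'Driver'}}
print(distance(papers, 'Kleitman', 'Erdős') + distance(films, 'Kleitman', 'Bacon'))  # 3
```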
In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions. This is accomplished through coherent states. In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a lase" https://en.wikipedia.org/wiki/List%20of%20mathematical%20societies,"This article provides a list of mathematical societies. International African Mathematical Union Association for Women in Mathematics Circolo Matematico di Palermo European Mathematical Society European Women in Mathematics Foundations of Computational Mathematics International Association for Cryptologic Research International Association of Mathematical Physics International Linear Algebra Society International Mathematical Union International Statistical Institute International Society for Analysis, its Applications and Computation International Society for Mathematical Sciences Kurt Gödel Society Mathematical Council of the Americas (MCofA) Mathematical Society of South Eastern Europe (MASSEE) Mathematical Optimization Society Maths Society Ramanujan Mathematical Society Quaternion Society Society for Industrial and Applied Mathematics Southeast Asian Mathematical Society (SEAMS) Spectra (mathematical association) Unión Matemática de América Latina y el Caribe (UMALCA) Young Mathematicians Network Honor societies Kappa Mu Epsilon Mu Alpha Theta Pi Mu Epsilon National and subnational Arranged as follows: Society name in English (Society name in home-language; Abbreviation if used), Country and/or subregion/city if not specified in name. This list is sorted by continent. Africa Algeria Mathematical Society Gabon Mathematical Society South African Mathematical Society Asia Bangladesh Mathematical Society Calcutta Mathematical Society (CalMathSoc), Kolkata, India Chinese Mathematical Society Indian Mathematical Society Iranian Mathematical Society Israel Mathematical Union Jadavpur University Mathematical Society (JMS), Jadavpur, India Kerala Mathematical Association, Kerala State, India Korean Mathematical Society, South Korea Mathematical Society of Japan Mathematical Society of the Philippines Pakistan Mathematical Society Turkish Mathematical Society Europe Albanian Mathematical Association Armenian Mathematic" https://en.wikipedia.org/wiki/List%20of%20factorial%20and%20binomial%20topics,"This is a list of factorial and binomial topics in mathematics. See also binomial (disambiguation). 
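The Middle C example in the wavelet passage above can be sketched directly: build a short, zero-mean oscillation near 261.6 Hz lasting roughly a tenth of a second, slide it along a recording, and look for where the correlation is large. The Gaussian envelope and the test signal below are arbitrary choices, not taken from the text.

```python
import numpy as np

fs = 8000                                    # sample rate, Hz
t = np.arange(int(0.1 * fs)) / fs            # ~0.1 s of support
envelope = np.exp(-0.5 * ((t - 0.05) / 0.02) ** 2)
wavelet = envelope * np.sin(2 * np.pi * 261.6 * t)
wavelet -= wavelet.mean()                    # starts and ends near zero, zero mean overall

# Test signal: 1 s of a 330 Hz tone with a burst of Middle C from 0.4 s to 0.6 s.
time = np.arange(fs) / fs
signal = 0.3 * np.sin(2 * np.pi * 330 * time)
burst = (time >= 0.4) & (time < 0.6)
signal[burst] += np.sin(2 * np.pi * 261.6 * time[burst])

correlation = np.abs(np.correlate(signal, wavelet, mode='valid'))
print(correlation.argmax() / fs)             # offset lands inside the burst (~0.35-0.55 s)
```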
Abel's binomial theorem Alternating factorial Antichain Beta function Bhargava factorial Binomial coefficient Pascal's triangle Binomial distribution Binomial proportion confidence interval Binomial-QMF (Daubechies wavelet filters) Binomial series Binomial theorem Binomial transform Binomial type Carlson's theorem Catalan number Fuss–Catalan number Central binomial coefficient Combination Combinatorial number system De Polignac's formula Difference operator Difference polynomials Digamma function Egorychev method Erdős–Ko–Rado theorem Euler–Mascheroni constant Faà di Bruno's formula Factorial Factorial moment Factorial number system Factorial prime Gamma distribution Gamma function Gaussian binomial coefficient Gould's sequence Hyperfactorial Hypergeometric distribution Hypergeometric function identities Hypergeometric series Incomplete beta function Incomplete gamma function Jordan–Pólya number Kempner function Lah number Lanczos approximation Lozanić's triangle Macaulay representation of an integer Mahler's theorem Multinomial distribution Multinomial coefficient, Multinomial formula, Multinomial theorem Multiplicities of entries in Pascal's triangle Multiset Multivariate gamma function Narayana numbers Negative binomial distribution Nörlund–Rice integral Pascal matrix Pascal's pyramid Pascal's simplex Pascal's triangle Permutation List of permutation topics Pochhammer symbol (also falling, lower, rising, upper factorials) Poisson distribution Polygamma function Primorial Proof of Bertrand's postulate Sierpinski triangle Star of David theorem Stirling number Stirling transform Stirling's approximation Subfactorial Table of Newtonian series Taylor series Trinomial expansion Vandermonde's identity Wilson prime Wilson's theorem Wolstenholme prime Factorial and binomial topics" https://en.wikipedia.org/wiki/Food%20psychology,"Food psychology is the psychological study of how people choose the food they eat (food choice), along with food and eating behaviors. Food psychology is an applied psychology, using existing psychological methods and findings to understand food choice and eating behaviors. Factors studied by food psychology include food cravings, sensory experiences of food, perceptions of food security and food safety, price, available product information such as nutrition labeling and the purchasing environment (which may be physical or online). Food psychology also encompasses broader sociocultural factors such as cultural perspectives on food, public awareness of ""what constitutes a sustainable diet"", and food marketing including ""food fraud"" where ingredients are intentionally motivated for economic gain as opposed to nutritional value. These factors are considered to interact with each other along with an individual's history of food choices to form new food choices and eating behaviors. The development of food choice is considered to fall into three main categories: properties of the food, individual differences and sociocultural influences. Food psychology studies psychological aspects of individual differences, although due to the interaction between factors and the variance in definitions, food psychology is often studied alongside other aspects of food choice including nutrition psychology. , there are no specific journals for food psychology, with research being published in both nutrition and psychology journals. 
Eating behaviors which are analysed by food psychology include disordered eating, behavior associated with food neophobia, and the public broadcasting/streaming of eating (mukbang). Food psychology has been studied extensively using theories of cognitive dissonance and fallacious reasoning. COVID-19 Food psychology has been used to examine how eating behaviors have been globally affected by the COVID-19 pandemic. Changed food preferences due to COVID-19 " https://en.wikipedia.org/wiki/NE1000,"The NE1000 and NE2000 are members of an early line of low cost Ethernet network cards introduced by Novell in 1987. Its popularity had a significant impact on the pervasiveness of networks in computing. They are based on a National Semiconductor prototype design using their 8390 Ethernet chip. History In the late 1980s, Novell was looking to shed its hardware server business and transform its flagship NetWare product into a PC-based server operating system that was agnostic and independent of the physical network implementation and topology (Novell even referred to NetWare as a NOS, or network operating system). To do this, Novell needed networking technology in general — and networking cards in particular — to become a commodity, so that the server operating system and protocols would become the differentiating technology. Most of the key pieces of this strategy were already in place: Ethernet and Token Ring (among others) had been codified by the IEEE 802 standards committee — the draft was not formally adopted until 1990, but was already in widespread use, and cards from one vendor were, on the whole, wire-compatible with cards complying with the same 802 working group. However, networking hardware vendors in general, and industry leaders 3Com and IBM in particular, were charging high prices for their hardware. To combat this, Novell decided to develop its own line of cards. In order to create these at minimal R&D, engineering and production costs, Novell based their board on DP839EB, a reference design created by National Semiconductor using the 8390 Ethernet chip. Compared to the reference design, Novell used Programmed I/O instead of the slower ISA DMA. Novell’s design also didn’t map the card’s internal buffer RAM into the host’s address space. The original card, the NE1000 (8-bit ISA; announced as ""E-Net adapter"" in February 1987 for ) The ""NE"" prefix stood for ""Novell Ethernet"". NE2000 The NE2000, using the 16-bit ISA bus of the PC AT followed in 19" https://en.wikipedia.org/wiki/Hardware%20acceleration,"Hardware acceleration is the use of computer hardware designed to perform specific functions more efficiently when compared to software running on a general-purpose central processing unit (CPU). Any transformation of data that can be calculated in software running on a generic CPU can also be calculated in custom-made hardware, or in some mix of both. To perform computing tasks more efficiently, generally one can invest time and money in improving the software, improving the hardware, or both. There are various approaches with advantages and disadvantages in terms of decreased latency, increased throughput and reduced energy consumption. Typical advantages of focusing on software may include greater versatility, more rapid development, lower non-recurring engineering costs, heightened portability, and ease of updating features or patching bugs, at the cost of overhead to compute general operations. 
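As a rough software-side analogy for this tradeoff (not an example of actual hardware acceleration), the following Python sketch compares a fully general interpreted loop with a call into a specialized, pre-optimized kernel for the same reduction; the array size and the measured times are illustrative only:

```python
import time
import numpy as np

data = np.random.rand(2_000_000)

def general_sum(xs) -> float:
    # Fully general: one interpreter step per element; maximum flexibility, maximum overhead.
    total = 0.0
    for x in xs:
        total += x
    return total

t0 = time.perf_counter()
s1 = general_sum(data)
t1 = time.perf_counter()
s2 = float(np.sum(data))          # specialized: a fixed, vectorized kernel for this one task
t2 = time.perf_counter()

print(f"general loop: {t1 - t0:.3f} s, specialized kernel: {t2 - t1:.3f} s")
assert abs(s1 - s2) / s2 < 1e-6   # same result, very different cost
```

The specialized path gives up generality (it only sums arrays) in exchange for throughput, which is the same exchange the hardware hierarchy described next makes at a much larger scale.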
Advantages of focusing on hardware may include speedup, reduced power consumption, lower latency, increased parallelism and bandwidth, and better utilization of area and functional components available on an integrated circuit; at the cost of lower ability to update designs once etched onto silicon and higher costs of functional verification, times to market, and need for more parts. In the hierarchy of digital computing systems ranging from general-purpose processors to fully customized hardware, there is a tradeoff between flexibility and efficiency, with efficiency increasing by orders of magnitude when any given application is implemented higher up that hierarchy. This hierarchy includes general-purpose processors such as CPUs, more specialized processors such as programmable shaders in a GPU, fixed-function implemented on field-programmable gate arrays (FPGAs), and fixed-function implemented on application-specific integrated circuits (ASICs). Hardware acceleration is advantageous for performance, and practical when the functions are fixed so updates are not as ne" https://en.wikipedia.org/wiki/Photobiology,"Photobiology is the scientific study of the beneficial and harmful interactions of light (technically, non-ionizing radiation) in living organisms. The field includes the study of photophysics, photochemistry, photosynthesis, photomorphogenesis, visual processing, circadian rhythms, photomovement, bioluminescence, and ultraviolet radiation effects. The division between ionizing radiation and non-ionizing radiation is typically considered to be a photon energy greater than 10 eV, which approximately corresponds to both the first ionization energy of oxygen, and the ionization energy of hydrogen at about 14 eV. When photons come into contact with molecules, these molecules can absorb the energy in photons and become excited. Then they can react with molecules around them and stimulate ""photochemical"" and ""photophysical"" changes of molecular structures. Photophysics This area of Photobiology focuses on the physical interactions of light and matter. When molecules absorb photons that matches their energy requirements they promote a valence electron from a ground state to an excited state and they become a lot more reactive. This is an extremely fast process, but very important for different processes. Photochemistry This area of Photobiology studies the reactivity of a molecule when it absorbs energy that comes from light. It also studies what happens with this energy, it could be given off as heat or fluorescence so the molecule goes back to ground state. There are 3 basic laws of photochemistry: 1) First Law of Photochemistry: This law explains that in order for photochemistry to happen, light has to be absorbed. 2) Second Law of Photochemistry: This law explains that only one molecule will be activated by each photon that is absorbed. 3) Bunsen-Roscoe Law of Reciprosity: This law explains that the energy in the final products of a photochemical reaction will be directly proportional to the total energy that was initially absorbed by the system. Plant Photo" https://en.wikipedia.org/wiki/North%20American%20Network%20Operators%27%20Group,"The North American Network Operators' Group (NANOG) is an educational and operational forum for the coordination and dissemination of technical information related to backbone/enterprise networking technologies and operational practices. It runs meetings, talks, surveys, and an influential mailing list for Internet service providers. 
The main method of communication is the NANOG mailing list (known informally as nanog-l), a free mailing list to which anyone may subscribe or post. Meetings NANOG meetings are held three times each year, and include presentations, tutorials, and BOFs (Birds of a Feather meetings). There are also 'lightning talks', where speakers can submit brief presentations (no longer than 10 minutes), on a very short term. The meetings are informal, and membership is open. Conference participants typically include senior engineering staff from tier 1 and tier 2 ISPs. Participating researchers present short summaries of their work for operator feedback. In addition to the conferences, NANOG On the Road events offer single-day professional development and networking events touching on current NANOG discussion topics. Organization NANOG meetings are organized by NewNOG, Inc., a Delaware non-profit organization, which took over responsibility for NANOG from the Merit Network in February 2011. Meetings are hosted by NewNOG and other organizations from the U.S. and Canada. Overall leadership is provided by the NANOG Steering Committee, established in 2005, and a Program Committee. History NANOG evolved from the NSFNET ""Regional-Techs"" meetings, where technical staff from the regional networks met to discuss operational issues of common concern with each other and with the Merit engineering staff. At the February 1994 regional techs meeting in San Diego, the group revised its charter to include a broader base of network service providers, and subsequently adopted NANOG as its new name. NANOG was organized by Merit Network, a non-profit Michigan org" https://en.wikipedia.org/wiki/4D%20vector,"In computer science, a 4D vector is a 4-component vector data type. Uses include homogeneous coordinates for 3-dimensional space in computer graphics, and red green blue alpha (RGBA) values for bitmap images with a color and alpha channel (as such they are widely used in computer graphics). They may also represent quaternions (useful for rotations) although the algebra they define is different. Computer hardware support Some microprocessors have hardware support for 4D vectors with instructions dealing with 4 lane single instruction, multiple data (SIMD) instructions, usually with a 128-bit data path and 32-bit floating point fields. Specific instructions (e.g., 4 element dot product) may facilitate the use of one 128-bit register to represent a 4D vector. For example, in chronological order: Hitachi SH4, PowerPC VMX128 extension, and Intel x86 SSE4. Some 4-element vector engines (e.g., the PS2 vector units) went further with the ability to broadcast components as multiply sources, and cross product support. Earlier generations of graphics processing unit (GPU) shader pipelines used very long instruction word (VLIW) instruction sets tailored for similar operations. Software support SIMD use for 4D vectors can be conveniently wrapped in a vector maths library (commonly implemented in C or C++) commonly used in video game development, along with 4×4 matrix support. These are distinct from more general linear algebra libraries in other domains focussing on matrices of arbitrary size. Such libraries sometimes support 3D vectors padded to 4D or loading 3D data into 4D registers, with arithmetic mapped efficiently to SIMD operations by per platform intrinsic function implementations. 
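A minimal sketch of the kind of 4-component vector such a library exposes, written here in Python with NumPy rather than in C or C++ over intrinsics; the helper names `vec4` and `dot4` are illustrative:

```python
import numpy as np

def vec4(x, y, z, w=1.0):
    """A 4D vector as a length-4 float32 array (w=1 for points, w=0 for directions)."""
    return np.array([x, y, z, w], dtype=np.float32)

def dot4(a, b):
    """4-element dot product, the operation SIMD units often provide directly."""
    return float(np.dot(a, b))

# Homogeneous coordinates: a 4x4 matrix applies translation to points (w=1) but not to directions (w=0).
translate = np.eye(4, dtype=np.float32)
translate[:3, 3] = [10.0, 0.0, 0.0]

point = vec4(1.0, 2.0, 3.0, 1.0)
direction = vec4(1.0, 2.0, 3.0, 0.0)
print(translate @ point)        # the point is shifted along x
print(translate @ direction)    # the direction is unchanged

rgba = vec4(0.2, 0.4, 0.6, 0.5)  # the same 4-lane layout doubles as an RGBA color
print(dot4(rgba, rgba))          # squared length of the 4-component value
```

The same four-lane layout serves both homogeneous points and directions (distinguished by w) and RGBA colors, which is part of what makes a single 4D type convenient to map onto 128-bit SIMD registers.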
There is choice between AOS and SOA approaches given the availability of 4 element registers, versus SIMD instructions that are usually tailored toward homogenous data. Shading languages for graphics processing unit (GPU) programming usually have a 4D datatypes (along with 2D, 3D) w" https://en.wikipedia.org/wiki/Latin%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering,"Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning. Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted. Aa A represents: the first point of a triangle the digit ""10"" in hexadecimal and other positional numeral systems with a radix of 11 or greater the unit ampere for electric current in physics the area of a figure the mass number or nucleon number of an element in chemistry the Helmholtz free energy of a closed thermodynamic system of constant pressure and temperature a vector potential, in electromagnetics it can refer to the magnetic vector potential an Abelian group in abstract algebra the Glaisher–Kinkelin constant atomic weight, denoted by Ar work in classical mechanics the pre-exponential factor in the Arrhenius Equation electron affinity represents the algebraic numbers or affine space in algebraic geometry. A blood type A spectral type a represents: the first side of a triangle (opposite point A) the scale factor of the expanding universe in cosmology the acceleration in mechanics equations the first constant in a linear equation a constant in a polynomial the unit are for area (100 m2) the unit prefix atto (10−18) the first term in a sequence or series Reflectance Bb B represents: the digit ""11"" in hexadecimal and other positional numeral systems with a radix of 12 or greater the second point of a triangle a ball (also denoted by ℬ () or ) a basis of a vector space or of a filter (both also denoted by ℬ ()) in econometrics and time-series statistics it is often used for the backshift or lag operator, the formal parameter of the lag polynomial the magnetic field, denoted " https://en.wikipedia.org/wiki/Significand,"The significand (also mantissa or coefficient, sometimes also argument, or ambiguously fraction or characteristic) is part of a number in scientific notation or in floating-point representation, consisting of its significant digits. Depending on the interpretation of the exponent, the significand may represent an integer or a fraction. Example The number 123.45 can be represented as a decimal floating-point number with the integer 12345 as the significand and a 10−2 power term, also called characteristics, where −2 is the exponent (and 10 is the base). Its value is given by the following arithmetic: 123.45 = 12345 × 10−2. The same value can also be represented in normalized form with 1.2345 as the fractional coefficient, and +2 as the exponent (and 10 as the base): 123.45 = 1.2345 × 10+2. Schmid, however, called this representation with a significand ranging between 1.0 and 10 a modified normalized form. For base 2, this 1.xxxx form is also called a normalized significand. 
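The different decompositions of the running example can be reproduced directly; the short Python sketch below is illustrative, using the standard `decimal` module for the decimal forms and `math.frexp` for the binary one:

```python
import math
from decimal import Decimal

x = 123.45

# Decimal scientific-notation forms of the same value:
print(Decimal("12345") * Decimal(10) ** -2)   # integer significand 12345, exponent -2
print(Decimal("1.2345") * Decimal(10) ** 2)   # normalized form: significand in [1, 10)

# Binary floating point: math.frexp returns (m, e) with x == m * 2**e and 0.5 <= m < 1.
m, e = math.frexp(x)
print(m, e, m * 2 ** e)           # recombining m and e gives back 123.45

# The stored IEEE 754 double: a 1.xxxx-form hexadecimal significand and a power of two.
print(x.hex())                    # 0x1.edccccccccccdp+6
```

Note that `math.frexp` uses a significand in [0.5, 1), i.e. the binary analogue of the 0.xxxx convention described next.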
Finally, the value can be represented in the format given by the Language Independent Arithmetic standard and several programming language standards, including Ada, C, Fortran and Modula-2, as 123.45 = 0.12345 × 10+3. Schmid called this representation with a significand ranging between 0.1 and 1.0 the true normalized form. For base 2, this 0.xxxx form is also called a normed significand. Significands and the hidden bit For a normalized number, the most significant digit is always non-zero. When working in binary, this constraint uniquely determines this digit to always be 1; as such, it does not need to be explicitly stored, being called the hidden bit. The significand is characterized by its width in (binary) digits, and depending on the context, the hidden bit may or may not be counted towards the width of the significand. For example, the same IEEE 754 double-precision format is commonly described as having either a 53-bit significand, including the hidden bit, or a 52-bit s" https://en.wikipedia.org/wiki/Twelfth%20root%20of%20two,"The twelfth root of two or (or equivalently ) is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone () in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and perfect fifth is 7 semitones). A semitone itself is divided into 100 cents (1 cent = ). Numerical value The twelfth root of two to 20 significant figures is . Fraction approximations in increasing order of accuracy include , , , , and . , its numerical value has been computed to at least twenty billion decimal digits. The equal-tempered chromatic scale A musical interval is a ratio of frequencies and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note has a frequency that is 2 times that of the one below it. Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces the following sequence of pitches: The final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher. Other tuning scales Other tuning scales use slightly different interval ratios: The just or Pythagorean perfect fifth is 3/2, and the difference between the equal tempered perfect fifth and the just is a grad, the twelfth root of the Pythagorean comma (). The equal tempered Bohlen–Pierce scale uses the interval of the thirteenth root of three (). Stockhausen's Studie II (1954) makes use of the twenty-fifth root of five (), a compound major third divided into 5×5 parts. The" https://en.wikipedia.org/wiki/Limosilactobacillus,"Limosilactobacillus is a thermophilic and heterofermentative genus of lactic acid bacteria created in 2020 by splitting from Lactobacillus. The name is derived from the Latin ""slimy"", referring to the property of most strains in the genus to produce exopolysaccharides from sucrose. The genus currently includes 31 species or subspecies, most of these were isolated from the intestinal tract of humans or animals. 
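(Referring back to the twelfth root of two discussed above: the equal-tempered pitch ladder and the cent measure can be reproduced in a few lines. The sketch below is illustrative, with A4 fixed at 440 Hz as in that passage.)

```python
import math

semitone = 2 ** (1 / 12)          # about 1.0594631
cent = 2 ** (1 / 1200)            # one hundredth of a semitone

A4 = 440.0
names = ["A4", "A#4", "B4", "C5", "C#5", "D5", "D#5", "E5", "F5", "F#5", "G5", "G#5", "A5"]
for i, name in enumerate(names):
    print(f"{name:>3}: {A4 * semitone ** i:8.2f} Hz")
# The final A5 comes out at 880 Hz: twelve equal semitones make exactly one octave.

# A just perfect fifth (3/2) measured in cents, versus the 700-cent equal-tempered fifth:
fifth_in_cents = 1200 * math.log2(3 / 2)
print(round(fifth_in_cents, 2))   # about 701.96
```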
Limosilactobacillus reuteri has been used as a model organism to evaluate the host-adaptation of lactobacilli to the human and animal intestine and for the recruitment of intestinal lactobacilli for food fermentations. Limosilactobacilli are heterofermentative and produce lactate, CO2, and acetate or ethanol from glucose; several limosilactobacilli, particularly strains of Lm. reuteri convert glycerol or 1,2-propanediol to 1,3 propanediol or propanol, respectively. Most strains do not grow in presence of oxygen, or in de Man, Rogosa Sharpe (MRS) medium, the standard medium for cultivation of lactobacilli. Addition of maltose, cysteine and fructose to MRS is usually sufficient for cultivation of limosilactobacilli. Species The genus Limosilactobacillus comprises the following species: Limosilactobacillus agrestis Li et al. 2021 Limosilactobacillus albertensis Li et al. 2021 Limosilactobacillus alvi Zheng et al. 2020 Limosilactobacillus antri (Roos et al. 2005) Zheng et al. 2020 Limosilactobacillus balticus Li et al. 2021 Limosilactobacillus caviae (Killer et al. 2017) Zheng et al. 2020 Limosilactobacillus coleohominis (Nikolaitchouk et al. 2001) Zheng et al. 2020 Limosilactobacillus equigenerosi (Endo et al. 2008) Zheng et al. 2020 Limosilactobacillus fastidiosus Li et al. 2021 Limosilactobacillus fermentum (Beijerinck 1901) Zheng et al. 2020 Limosilactobacillus frumenti (Müller et al. 2000) Zheng et al. 2020 Limosilactobacillus gastricus (Roos et al. 2005) Zheng et al. 2020 Limosilactobacillus gorillae (Tsuchida et al. 2014) Zheng et al. 2020 " https://en.wikipedia.org/wiki/Sensor%20node,"A sensor node (also known as a mote in North America), consists of an individual node from a sensor network that is capable of performing a desired action such as gathering, processing or communicating information with other connected nodes in a network. History Although wireless sensor networks have existed for decades and used for diverse applications such as earthquake measurements or warfare, the modern development of small sensor nodes dates back to the 1998 Smartdust project and the NASA. Sensor Web One of the objectives of the Smartdust project was to create autonomous sensing and communication within a cubic millimeter of space, though this project ended early on, it led to many more research projects and major research centres such as The Berkeley NEST and CENS. The researchers involved in these projects coined the term mote to refer to a sensor node. The equivalent term in the NASA Sensor Webs Project for a physical sensor node is pod, although the sensor node in a Sensor Web can be another Sensor Web itself. Physical sensor nodes have been able to increase their effectiveness and its capability in conjunction with Moore's Law. The chip footprint contains more complex and lower powered microcontrollers. Thus, for the same node footprint, more silicon capability can be packed into it. Nowadays, motes focus on providing the longest wireless range (dozens of km), the lowest energy consumption (a few uA) and the easiest development process for the user. Components The main components of a sensor node usually involve a microcontroller, transceiver, external memory, power source and one or more sensors. Sensors Sensors are used by wireless sensor nodes to capture data from their environment. They are hardware devices that produce a measurable response to a change in a physical condition like temperature or pressure. 
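A toy sketch of how these components divide the work on a mote; the names (`SensorReading`, `sample_and_send`) and the values are hypothetical, and a real node would run compiled firmware on its microcontroller rather than Python:

```python
from dataclasses import dataclass
import random
import time

@dataclass
class SensorReading:
    node_id: int
    quantity: str
    value: float
    timestamp: float

def read_temperature() -> float:
    """Stand-in for a hardware sensor driver returning degrees Celsius."""
    return 20.0 + random.uniform(-0.5, 0.5)

def transmit(reading: SensorReading) -> None:
    """Stand-in for the transceiver; a real mote would radio this to a neighbour or sink node."""
    print(f"TX node={reading.node_id} {reading.quantity}={reading.value:.2f} at {reading.timestamp:.0f}")

def sample_and_send(node_id: int) -> None:
    # The microcontroller's job in miniature: read, package, and hand off to the radio.
    reading = SensorReading(node_id, "temperature_C", read_temperature(), time.time())
    transmit(reading)

sample_and_send(node_id=7)
```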
Sensors measure physical data of the parameter to be monitored and have specific characteristics such as accuracy, sensitivity etc. The cont" https://en.wikipedia.org/wiki/Compass%20%28drawing%20tool%29,"A compass, more accurately known as a pair of compasses, is a technical drawing instrument that can be used for inscribing circles or arcs. As dividers, it can also be used as a tool to mark out distances, in particular, on maps. Compasses can be used for mathematics, drafting, navigation and other purposes. Prior to computerization, compasses and other tools for manual drafting were often packaged as a set with interchangeable parts. By the mid-twentieth century, circle templates supplemented the use of compasses. Today those facilities are more often provided by computer-aided design programs, so the physical tools serve mainly a didactic purpose in teaching geometry, technical drawing, etc. Construction and parts Compasses are usually made of metal or plastic, and consist of two ""legs"" connected by a hinge which can be adjusted to allow changing of the radius of the circle drawn. Typically one leg has a spike at its end for anchoring, and the other leg holds a drawing tool, such as a pencil, a short length of just pencil lead or sometimes a pen. Handle The handle, a small knurled rod above the hinge, is usually about half an inch long. Users can grip it between their pointer finger and thumb. Legs There are two types of leg in a pair of compasses: the straight or the steady leg and the adjustable one. Each has a separate purpose; the steady leg serves as the basis or support for the needle point, while the adjustable leg can be altered in order to draw different sizes of circles. Hinge The screw through the hinge holds the two legs in position. The hinge can be adjusted, depending on desired stiffness; the tighter the hinge-screw, the more accurate the compass's performance. The better quality compass, made of plated metal, is able to be finely adjusted via a small, serrated wheel usually set between the legs (see the ""using a compass"" animation shown above) and it has a (dangerously powerful) spring encompassing the hinge. This sort of compass is often kno" https://en.wikipedia.org/wiki/Otis%20Boykin,"Otis Frank Boykin (August 29, 1920March 26, 1982) was an American inventor and engineer. His inventions include electrical resistors used in computing, missile guidance, and pacemakers. Early life and education Otis Boykin was born on August 29, 1920, in Dallas, Texas. His father, Walter B. Boykin, was a carpenter, and later became a preacher. His mother, Sarah, was a maid, who died of heart failure when Otis was a year old. This inspired him to help improve the pacemaker. Boykin attended Booker T. Washington High School in Dallas, where he was the valedictorian, graduating in 1938. He attended Fisk University on a scholarship, worked as a laboratory assistant at the university's nearby aerospace laboratory, and left in 1941. Boykin then moved to Chicago, where he found work as a clerk at Electro Manufacturing Company. He was subsequently hired as a laboratory assistant for the Majestic Radio and Television Corporation; at that company, he rose to become foreman of their factory. By 1944, he was working for the P.J. Nilsen Research Labs. 
In 1946–1947, he studied at Illinois Institute of Technology, but dropped out after two years; some sources say it was because he could not afford his tuition, but he later stated he left for an employment opportunity and did not have time to return to finish his degree. One of his mentors was Dr. Denton Deere, an engineer and inventor with his own laboratory. Another mentor was Dr. Hal F. Fruth, with whom he collaborated on several experiments, including a more effective way to test automatic pilot control units in airplanes. The two men later went into business, opening an electronics research lab in the late 1940s. In the 1950s, Boykin and Fruth worked together at the Monson Manufacturing Corporation; Boykin was the company's chief engineer. In the early 1960s, Boykin was a senior project engineer at the Chicago Telephone Supply Corporation, later known as CTS Labs. It was here that he did much of his pacemaker research. But" https://en.wikipedia.org/wiki/Pyrrolizidine%20alkaloid%20sequestration,"Pyrrolizidine alkaloid sequestration by insects is a strategy to facilitate defense and mating. Various species of insects have been known to use molecular compounds from plants for their own defense and even as their pheromones or precursors to their pheromones. A few Lepidoptera have been found to sequester chemicals from plants which they retain throughout their life and some members of Erebidae are examples of this phenomenon. Starting in the mid-twentieth century researchers investigated various members of Arctiidae, and how these insects sequester pyrrolizidine alkaloids (PAs) during their life stages, and use these chemicals as adults for pheromones or pheromone precursors. PAs are also used by members of the Arctiidae for defense against predators throughout the life of the insect. Overview Pyrrolizidine alkaloids are a group of chemicals produced by plants as secondary metabolites, all of which contain a pyrrolizidine nucleus. This nucleus is made up of two pyrrole rings bonded by one carbon and one nitrogen. There are two forms in which PAs can exist and will readily interchange between: a pro-toxic free base form, also called a tertiary amine, or in a non-toxic form of N-oxide. Researchers have collected data that strongly suggests that PAs can be registered by taste receptors of predators, acting as a deterrent from being ingested. Taste receptors are also used by the various moth species that sequester PAs, which often stimulates them to feed. As of 2005, all of the PA sequestering insects that have been studied have all evolved a system to keep concentrations of the PA pro-toxic form low within the insect's tissues. Researchers have found a number of Arctiidae that use PAs for protection and for male pheromones or precursors of the male pheromones, and some studies have found evidence suggesting PAs have behavioral and developmental effects. Estigmene acrea, Cosmosoma myrodora, Utetheisa ornatrix, Creatonotos gangis and Creatonotos transiens are all" https://en.wikipedia.org/wiki/Immunofluorescence,"Immunofluorescence is a technique used for light microscopy with a fluorescence microscope and is used primarily on biological samples. This technique uses the specificity of antibodies to their antigen to target fluorescent dyes to specific biomolecule targets within a cell, and therefore allows visualization of the distribution of the target molecule through the sample. The specific region an antibody recognizes on an antigen is called an epitope. 
There have been efforts in epitope mapping since many antibodies can bind the same epitope and levels of binding between antibodies that recognize the same epitope can vary. Additionally, the binding of the fluorophore to the antibody itself cannot interfere with the immunological specificity of the antibody or the binding capacity of its antigen. Immunofluorescence is a widely used example of immunostaining (using antibodies to stain proteins) and is a specific example of immunohistochemistry (the use of the antibody-antigen relationship in tissues). This technique primarily makes use of fluorophores to visualise the location of the antibodies. Immunofluorescence can be used on tissue sections, cultured cell lines, or individual cells, and may be used to analyze the distribution of proteins, glycans, and small biological and non-biological molecules. This technique can even be used to visualize structures such as intermediate-sized filaments. If the topology of a cell membrane has yet to be determined, epitope insertion into proteins can be used in conjunction with immunofluorescence to determine structures. Immunofluorescence can also be used as a ""semi-quantitative"" method to gain insight into the levels and localization patterns of DNA methylation since it is a more time-consuming method than true quantitative methods and there is some subjectivity in the analysis of the levels of methylation. Immunofluorescence can be used in combination with other, non-antibody methods of fluorescent staining, for example, use of " https://en.wikipedia.org/wiki/Bob%20Pease,"Robert Allen Pease (August 22, 1940 – June 18, 2011) was an electronics engineer known for analog integrated circuit (IC) design, and as the author of technical books and articles about electronic design. He designed several very successful ""best-seller"" ICs, many of them in continuous production for multiple decades.These include LM331 voltage-to-frequency converter, and the LM337 adjustable negative voltage regulator (complement to the LM317). Life and career Pease was born on August 22, 1940, in Rockville, Connecticut. He attended Northfield Mount Hermon School in Massachusetts, and subsequently obtained a Bachelor of Science in Electrical Engineering (BSEE) degree from Massachusetts Institute of Technology in 1961. He started work in the early 1960s at George A. Philbrick Researches (GAP-R). GAP-R pioneered the first reasonable-cost, mass-produced operational amplifier (op-amp), the K2-W. At GAP-R, Pease developed many high-performance op-amps, built with discrete solid-state components. In 1976, Pease moved to National Semiconductor Corporation (NSC) as a Design and Applications Engineer, where he began designing analog monolithic ICs, as well as design reference circuits using these devices. He had advanced to Staff Engineer by the time of his departure in 2009. During his tenure at NSC, he began writing a popular continuing monthly column called ""Pease Porridge"" in Electronic Design about his experiences in the world of electronic design and application. The last project Pease worked on was the THOR-LVX (photo-nuclear) microtron Advanced Explosives contraband Detection System: ""A Dual-Purpose Ion-Accelerator for Nuclear-Reaction-Based Explosives-and SNM-Detection in Massive Cargo"". Pease was the author of eight books, including Troubleshooting Analog Circuits, and he held 21 patents. Although his name was listed as ""Robert A. 
Pease"" in formal documents, he preferred to be called ""Bob Pease"" or to use his initials ""RAP"" in his magazine columns. His other" https://en.wikipedia.org/wiki/List%20of%20knot%20theory%20topics,"Knot theory is the study of mathematical knots. While inspired by knots which appear in daily life in shoelaces and rope, a mathematician's knot differs in that the ends are joined so that it cannot be undone. In precise mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, R3. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of R3 upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting the string or passing the string through itself. History Knots, links, braids Knot (mathematics) gives a general introduction to the concept of a knot. Two classes of knots: torus knots and pretzel knots Cinquefoil knot also known as a (5, 2) torus knot. Figure-eight knot (mathematics) the only 4-crossing knot Granny knot (mathematics) and Square knot (mathematics) are a connected sum of two Trefoil knots Perko pair, two entries in a knot table that were later shown to be identical. Stevedore knot (mathematics), a prime knot with crossing number 6 Three-twist knot is the twist knot with three-half twists, also known as the 52 knot. Trefoil knot A knot with crossing number 3 Unknot Knot complement, a compact 3 manifold obtained by removing an open neighborhood of a proper embedding of a tame knot from the 3-sphere. Knots and graphs general introduction to knots with mention of Reidemeister moves Notation used in knot theory: Conway notation Dowker–Thistlethwaite notation (DT notation) Gauss code (see also Gauss diagrams) continued fraction regular form General knot types 2-bridge knot Alternating knot; a knot that can be represented by an alternating diagram (i.e. the crossing alternate over and under as one traverses the knot). Berge knot a class of knots related to Lens space surgeries and defined in terms of their properties with respect to a genus 2 Heegaard surface. Cable knot, see Sate" https://en.wikipedia.org/wiki/Through-silicon%20via,"In electronic engineering, a through-silicon via (TSV) or through-chip via is a vertical electrical connection (via) that passes completely through a silicon wafer or die. TSVs are high-performance interconnect techniques used as an alternative to wire-bond and flip chips to create 3D packages and 3D integrated circuits. Compared to alternatives such as package-on-package, the interconnect and device density is substantially higher, and the length of the connections becomes shorter. Classification Dictated by the manufacturing process, there exist three different types of TSVs: via-first TSVs are fabricated before the individual component (transistors, capacitors, resistors, etc.) are patterned (front end of line, FEOL), via-middle TSVs are fabricated after the individual component are patterned but before the metal layers (back-end-of-line, BEOL), and via-last TSVs are fabricated after (or during) the BEOL process. Via-middle TSVs are currently a popular option for advanced 3D ICs as well as for interposer stacks. TSVs through the front end of line (FEOL) have to be carefully accounted for during the EDA and manufacturing phases. That is because TSVs induce thermo-mechanical stress in the FEOL layer, thereby impacting the transistor behaviour. 
Applications Image sensors CMOS image sensors (CIS) were among the first applications to adopt TSV(s) in volume manufacturing. In initial CIS applications, TSVs were formed on the backside of the image sensor wafer to form interconnects, eliminate wire bonds, and allow for reduced form factor and higher-density interconnects. Chip stacking came about only with the advent of backside illuminated (BSI) CIS, and involved reversing the order of the lens, circuitry, and photodiode from traditional front-side illumination so that the light coming through the lens first hits the photodiode and then the circuitry. This was accomplished by flipping the photodiode wafer, thinning the backside, and then bonding it on top of the re" https://en.wikipedia.org/wiki/Wavelet%20transform,"In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. Definition A function is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space of square integrable functions. The Hilbert basis is constructed as the family of functions by means of dyadic translations and dilations of , for integers . If under the standard inner product on , this family is orthonormal, it is an orthonormal system: where is the Kronecker delta. Completeness is satisfied if every function may be expanded in the basis as with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual. The integral wavelet transform is the integral transform defined as The wavelet coefficients are then given by Here, is called the binary dilation or dyadic dilation, and is the binary or dyadic position. Principle The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape. This is achieved by choosing suitable basis functions that allow for this. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing, where represents time and angular frequency (, where is ordinary frequency). The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis windows is chosen, the larger is the value of . When is large, Bad time resolution Good frequency resolution Low frequency, large scaling factor When is small Good time" https://en.wikipedia.org/wiki/Icophone,"The icophone is an instrument of speech synthesis conceived by Émile Leipp in 1964 and used for synthesizing the French language. The two first icophones were made in the laboratory of physical mechanics of Saint-Cyr-l'École. The principle of the icophone is the representation of the sound by a spectrograph. The spectrogram analyzes a word, a phrase, or more generally a sound, and shows the distribution of the different frequencies with their relative intensities. The first machines to synthesize words were made by displaying the form of the spectrogram on a transparent tape, which controls a series of oscillators following the presence or absence of a black mark on the tape. 
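A toy numerical sketch of that mechanism (a binary time-frequency "tape" gating a small bank of sine oscillators); the band frequencies, frame length and on/off pattern are invented for illustration and are not Leipp's actual parameters:

```python
import numpy as np

sr = 8000                       # sample rate in Hz
frame = 0.05                    # each "tape" column lasts 50 ms
bands = [200, 400, 800, 1600]   # one oscillator per spectrogram band (toy values)

# Rows = oscillators, columns = time frames; 1 = black mark = oscillator switched on.
tape = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
])

t = np.arange(int(sr * frame)) / sr
signal = np.concatenate([
    sum(tape[b, col] * np.sin(2 * np.pi * bands[b] * t) for b in range(len(bands)))
    for col in range(tape.shape[1])
])
print(signal.shape)             # (1600,) -- four 50 ms frames of 400 samples each
```

Writing the synthesized samples to a WAV file (for example with the standard `wave` module) would make the gated bands audible.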
Leipp succeeded in decomposing the segments of a spoken sound phenomenon, and in synthesizing them from a very simplified display." https://en.wikipedia.org/wiki/List%20of%20algebraic%20topology%20topics,"This is a list of algebraic topology topics. Homology (mathematics) Simplex Simplicial complex Polytope Triangulation Barycentric subdivision Simplicial approximation theorem Abstract simplicial complex Simplicial set Simplicial category Chain (algebraic topology) Betti number Euler characteristic Genus Riemann–Hurwitz formula Singular homology Cellular homology Relative homology Mayer–Vietoris sequence Excision theorem Universal coefficient theorem Cohomology List of cohomology theories Cocycle class Cup product Cohomology ring De Rham cohomology Čech cohomology Alexander–Spanier cohomology Intersection cohomology Lusternik–Schnirelmann category Poincaré duality Fundamental class Applications Jordan curve theorem Brouwer fixed point theorem Invariance of domain Lefschetz fixed-point theorem Hairy ball theorem Degree of a continuous mapping Borsuk–Ulam theorem Ham sandwich theorem Homology sphere Homotopy theory Homotopy Path (topology) Fundamental group Homotopy group Seifert–van Kampen theorem Pointed space Winding number Simply connected Universal cover Monodromy Homotopy lifting property Mapping cylinder Mapping cone (topology) Wedge sum Smash product Adjunction space Cohomotopy Cohomotopy group Brown's representability theorem Eilenberg–MacLane space Fibre bundle Möbius strip Line bundle Canonical line bundle Vector bundle Associated bundle Fibration Hopf bundle Classifying space Cofibration Homotopy groups of spheres Plus construction Whitehead theorem Weak equivalence Hurewicz theorem H-space Further developments Künneth theorem De Rham cohomology Obstruction theory Characteristic class Chern class Chern–Simons form Pontryagin class Pontryagin number Stiefel–Whitney class Poincaré conjecture Cohomology operation Steenrod algebra Bott periodicity theorem K-theory Topological K-theory Adams operation Algebraic K-theory Whitehead torsion Twisted K-theory Cobordism Thom space Suspension functor Stable homotopy theory Spectrum (homotopy theory) Morava K-the" https://en.wikipedia.org/wiki/Reference%20model,"A reference model—in systems, enterprise, and software engineering—is an abstract framework or domain-specific ontology consisting of an interlinked set of clearly defined concepts produced by an expert or body of experts to encourage clear communication. A reference model can represent the component parts of any consistent idea, from business functions to system components, as long as it represents a complete set. This frame of reference can then be used to communicate ideas clearly among members of the same community. Reference models are often illustrated as a set of concepts with some indication of the relationships between the concepts. Overview According to OASIS (Organization for the Advancement of Structured Information Standards) a reference model is ""an abstract framework for understanding significant relationships among the entities of some environment, and for the development of consistent standards or specifications supporting that environment. A reference model is based on a small number of unifying concepts and may be used as a basis for education and explaining standards to a non-specialist. 
A reference model is not directly tied to any standards, technologies or other concrete implementation details, but it does seek to provide a common semantics that can be used unambiguously across and between different implementations."" There are a number of concepts rolled up into that of a 'reference model.' Each of these concepts is important: Abstract: a reference model is abstract. It provides information about environments of a certain kind. A reference model describes the type or kind of entities that may occur in such an environment, not the particular entities that actually do occur in a specific environment. For example, when describing the architecture of a particular house (which is a specific environment of a certain kind), an actual exterior wall may have dimensions and materials, but the concept of a wall (type of entity) is part of the" https://en.wikipedia.org/wiki/Bare%20machine%20computing,"Bare Machine Computing (BMC) is a computer architecture based on bare machines. In the BMC paradigm, applications run without the support of any operating system (OS) or centralized kernel i.e., no intermediary software is loaded on the bare machine prior to running applications. The applications, which are called bare machine applications or simply BMC applications, do not use any persistent storage or a hard disk, and instead are stored on detachable mass storage such as a USB flash drive. A BMC program consists of a single application or a small set of applications (application suite) that runs as a single executable within one address space. BMC applications have direct access to the necessary hardware resources. They are self-contained, self-managed and self-controlled entities that boot, load and run without using any other software components or external software. BMC applications have inherent security due to their design. There are no OS-related vulnerabilities, and each application only contains the necessary (minimal) functionality. There is no privileged mode in a BMC system since applications only run in user mode. Also, application code is statically compiled-there is no means to dynamically alter BMC program flow during execution. History In the early days of computing, computer applications directly communicated to the hardware and there was no operating system. As applications grew larger encompassing various domains, OSes were invented. They served as middleware providing hardware abstractions to applications. OSes have grown immensely in their size and complexity resulting in attempts to reduce OS overhead and improve performance including Microkernel, Exokernel, Tiny-OS, OS-Kit, Palacios and Kitten, IO_Lite, bare-metal Linux, IBM-Libra and other lean kernels. In addition to the above approaches, in embedded systems such as smart phones, a small and dedicated portion of an OS and a given set of applications are closely integrated with the hardw" https://en.wikipedia.org/wiki/National%20Law%20Enforcement%20System,"The National Law Enforcement System, better known as the Wanganui Computer, was a database set up in 1976 by the State Services Commission in Wanganui, New Zealand. It held information which could be accessed by New Zealand Police, Land Transport Safety Authority and the justice department. The Wanganui computer was a Sperry mainframe computer built to hold records such as criminal convictions and car and gun licences. 
At the time it was deemed ground-breaking, with Minister of Police, Allan McCready, describing it as ""probably the most significant crime-fighting weapon ever brought to bear against lawlessness in this country"". Seen by many as a Big Brother initiative, the database was controversial, attracting numerous protests from libertarians with concerns over privacy. The most notable event was in 1982, when self-described anarchist punk Neil Roberts, aged 22, detonated a home-made gelignite bomb upon his person at the gates of the centre, making him New Zealand's highest-profile suicide bomber. The blast was large enough to be heard around Wanganui, and Roberts was killed instantly, being later identified by his unique chest tattoo bearing the words ""This punk won't see 23. No future."" The centre survived this and other protests until the 1990s when the operation was transferred to Auckland, although this new system has retained its Wanganui moniker. The original database, having lasted 30 years and growing increasingly outdated, was finally shut down in June 2005, with the responsibility being successfully handed over to Auckland at the National Intelligence Application (also known as NIA). The building, known as 'Wairere House' was later occupied by the National Library of New Zealand and contained newspaper archives. See also INCIS" https://en.wikipedia.org/wiki/Tillie%20the%20All-Time%20Teller,"Tillie the All-Time Teller was one of the first ATMs, run by the First National Bank of Atlanta and considered to be one of the most successful ATMs in the banking industry. Tillie the All-Time Teller had a picture of a smiling blonde girl on the front of the machine to suggest it was user-friendly, had an apparent personality, and could greet people by name. Many banks hired women dressed as this person to show their customers how to use Tillie the All-Time Teller. History It was introduced by the First National Bank of Atlanta on May 15, 1974. It started out at only eleven locations. They were in commerce starting May 20, 1974. Starting 1977, other banks purchased rights to use Tillie the All-Time Teller as their ATM system. By March 21, 1981, they were available at 70 locations, including on a college campus. On October 15, 2013, Susan Bennett revealed that she played the voice for Tillie the All-Time Teller, noting that she ""started [her] life as a machine quite young."" Appearance Tillie the All-Time Teller machines were red and gold to make them look more attractive. On the bottom left was the place to enter an ""access card,"" which featured a cartoon character. Above that was a place to enter a ""secret code"" that the customer chose. On the bottom center was a picture of a cartoon blonde girl with china-blue eyes and a red hat. Above that was the place it handed out cash and coins. On the top right was the place to enter a desired amount of money. How it worked Customers could use Tillie the All-Time Teller by following these steps: Inserting an ""Alltime Tellercard"" Following instructions presented on its TV screen Entering a ""secret code"" and entering a desired amount of money on the ""money keyboard"" ($200 was the limit) The machine would automatically hand out the desired amount of money. 
Entering a transaction envelope into the deposit slot Advertising There were a variety of advertisements made by the First National Bank of Atlanta in order to pr" https://en.wikipedia.org/wiki/Content%20delivery%20platform,"A content delivery platform (CDP) is a software as a service (SaaS) content service, similar to a content management system (CMS), that utilizes embedded software code to deliver web content. Instead of the installation of software on client servers, a CDP feeds content through embedded code snippets, typically via JavaScript widget, Flash widget or server-side Ajax. Content delivery platforms are not content delivery networks, which are utilized for large web media and do not depend on embedded software code. A CDP is utilized for all types of web content, even text-based content. Alternatively, a content delivery platform can be utilized to import a variety of syndicated content into one central location and then re-purposed for web syndication. The term content delivery platform was coined by Feed.Us software architect John Welborn during a presentation to the Chicago Web Developers Association. In late 2007, two blog comment services launched utilizing CDP-based services. Intense Debate and Disqus both employ JavaScript widgets to display and collect blog comments on websites. See also Web content management system Viddler, YouTube, Ustream embeddable streaming video" https://en.wikipedia.org/wiki/Hardware%20Trojan,"A Hardware Trojan (HT) is a malicious modification of the circuitry of an integrated circuit. A hardware Trojan is completely characterized by its physical representation and its behavior. The payload of an HT is the entire activity that the Trojan executes when it is triggered. In general, Trojans try to bypass or disable the security fence of a system: for example, leaking confidential information by radio emission. HTs also could disable, damage or destroy the entire chip or components of it. Hardware Trojans may be introduced as hidden ""Front-doors"" that are inserted while designing a computer chip, by using a pre-made application-specific integrated circuit (ASIC) semiconductor intellectual property core (IP Core) that have been purchased from a non-reputable source, or inserted internally by a rogue employee, either acting on their own, or on behalf of rogue special interest groups, or state sponsored spying and espionage. One paper published by IEEE in 2015 explains how a hardware design containing a Trojan could leak a cryptographic key leaked over an antenna or network connection, provided that the correct ""easter egg"" trigger is applied to activate the data leak. In high security governmental IT departments, hardware Trojans are a well known problem when buying hardware such as: a KVM switch, keyboards, mice, network cards, or other network equipment. This is especially the case when purchasing such equipment from non-reputable sources that could have placed hardware Trojans to leak keyboard passwords, or provide remote unauthorized entry. Background In a diverse global economy, outsourcing of production tasks is a common way to lower a product's cost. Embedded hardware devices are not always produced by the firms that design and/or sell them, nor in the same country where they will be used. 
Outsourced manufacturing can raise doubt about the evidence for the integrity of the manufactured product (i.e., one's certainty that the end-product has no desig" https://en.wikipedia.org/wiki/List%20of%20scientific%20constants%20named%20after%20people,"This is a list of physical and mathematical constants named after people. Eponymous constants and their influence on scientific citations have been discussed in the literature. Apéry's constant – Roger Apéry Archimedes' constant (, pi) – Archimedes Avogadro constant – Amedeo Avogadro Balmer's constant – Johann Jakob Balmer Belphegor's prime – Belphegor (demon) Bohr magneton – Niels Bohr Bohr radius – Niels Bohr Boltzmann constant – Ludwig Boltzmann Brun's constant – Viggo Brun Cabibbo angle – Nicola Cabibbo Chaitin's constant – Gregory Chaitin Champernowne constant – D. G. Champernowne Chandrasekhar limit – Subrahmanyan Chandrasekhar Copeland–Erdős constant – Paul Erdős and Peter Borwein Coulomb constant (electric force constant, electrostatic constant, ) – Charles-Augustin de Coulomb Eddington number – Arthur Stanley Eddington Dunbar's number – Robin Dunbar Embree–Trefethen constant Erdős–Borwein constant Euler–Mascheroni constant () – Leonhard Euler and Lorenzo Mascheroni Euler's number () – Leonhard Euler Faraday constant – Michael Faraday Feigenbaum constants – Mitchell Feigenbaum Fermi coupling constant – Enrico Fermi Gauss's constant – Carl Friedrich Gauss Graham's number – Ronald Graham Hartree energy – Douglas Hartree Hubble constant – Edwin Hubble Josephson constant – Brian David Josephson Kaprekar's constant – D. R. Kaprekar Kerr constant – John Kerr Khinchin's constant – Aleksandr Khinchin Landau–Ramanujan constant – Edmund Landau and Srinivasa Ramanujan Legendre's constant (one, 1) – Adrien-Marie Legendre Loschmidt constant – Johann Josef Loschmidt Ludolphsche Zahl – Ludolph van Ceulen Mean of Phidias (golden ratio, , phi) – Phidias Meissel–Mertens constant Moser's number Newtonian constant of gravitation (gravitational constant, ) – Sir Isaac Newton Planck constant () – Max Planck Reduced Planck constant or Dirac constant (-bar, ) – Max Planck, Paul Dirac Ramanujan–Soldner constant – Srinivasa Ramanujan and Jo" https://en.wikipedia.org/wiki/Mathematical%20knowledge%20management,"Mathematical knowledge management (MKM) is the study of how society can effectively make use of the vast and growing literature on mathematics. It studies approaches such as databases of mathematical knowledge, automated processing of formulae and the use of semantic information, and artificial intelligence. Mathematics is particularly suited to a systematic study of automated knowledge processing due to the high degree of interconnectedness between different areas of mathematics. See also OMDoc QED manifesto Areas of mathematics MathML External links www.nist.gov/mathematical-knowledge-management, NIST's MKM page The MKM Interest Group (archived) 9th International Conference on MKM, Paris, France, 2010 Big Proof Conference , a programme at the Isaac Newton Institute directed at the challenges of bringing proof technology into mainstream mathematical practice. Big Proof Two Mathematics and culture Information science" https://en.wikipedia.org/wiki/Networking%20hardware,"Networking hardware, also known as network equipment or computer networking devices, are electronic devices that are required for communication and interaction between devices on a computer network. Specifically, they mediate data transmission in a computer network. 
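As a minimal illustration of the "traffic directing" role of routers described under Specific devices below, here is a toy forwarding-table lookup in Python; the prefixes and next-hop addresses are illustrative only:

```python
import ipaddress

# A toy forwarding table: destination prefix -> next hop.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"): "192.0.2.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.0.2.2",
    ipaddress.ip_network("0.0.0.0/0"): "192.0.2.254",   # default route
}

def next_hop(destination: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.3"))   # 192.0.2.2   (the /16 beats the /8)
print(next_hop("8.8.8.8"))    # 192.0.2.254 (falls through to the default route)
```

Real routers implement this longest-prefix match in specialised hardware, but the decision rule is the same.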
Units which are the last receiver or generate data are called hosts, end systems or data terminal equipment. Range Networking devices includes a broad range of equipment which can be classified as core network components which interconnect other network components, hybrid components which can be found in the core or border of a network and hardware or software components which typically sit on the connection point of different networks. The most common kind of networking hardware today is a copper-based Ethernet adapter which is a standard inclusion on most modern computer systems. Wireless networking has become increasingly popular, especially for portable and handheld devices. Other networking hardware used in computers includes data center equipment (such as file servers, database servers and storage areas), network services (such as DNS, DHCP, email, etc.) as well as devices which assure content delivery. Taking a wider view, mobile phones, tablet computers and devices associated with the internet of things may also be considered networking hardware. As technology advances and IP-based networks are integrated into building infrastructure and household utilities, network hardware will become an ambiguous term owing to the vastly increasing number of network-capable endpoints. Specific devices Network hardware can be classified by its location and role in the network. Core Core network components interconnect other network components. Gateway: an interface providing a compatibility between networks by converting transmission speeds, protocols, codes, or security measures. Router: a networking device that forwards data packets between computer networks. Routers perform the ""traffic directing"" fun" https://en.wikipedia.org/wiki/CodeSynthesis%20XSD/e,"CodeSynthesis XSD/e is a validating XML parser/serializer and C++ XML Data Binding generator for Mobile and Embedded systems. It is developed by Code Synthesis and dual-licensed under the GNU GPL and a proprietary license. Given an XML instance specification (XML Schema), XSD/e can produce three kinds of C++ mappings: Embedded C++/Parser for event-driven XML parsing, Embedded C++/Serializer for event-driven XML serialization, and Embedded C++/Hybrid which provides a light-weight, in-memory object model on top of the other two mappings. The C++/Hybrid mapping generates C++ classes for types defined in XML Schema as well as parsing and serialization code. The C++ classes represent the data stored in XML as a statically-typed, tree-like object model and support fully in-memory as well as partially in-memory/partially event-driven XML processing. The C++/Parser mapping generates validating C++ parser skeletons for data types defined in XML Schema. One can then implement these parser skeletons to build a custom in-memory representation or perform immediate processing as parts of the XML documents become available. Similarly, the Embedded C++/Serializer mapping generates validating C++ serializer skeletons for types defined in XML Schema which can be used to serialize application data to XML. CodeSynthesis XSD/e itself is written in C++ and supports a number of embedded targets include Embedded Linux, VxWorks, QNX, LynxOS, iPhone OS and Windows CE." https://en.wikipedia.org/wiki/List%20of%20particles,"This is a list of known and hypothesized particles. Standard Model elementary particles Elementary particles are particles with no measurable internal structure; that is, it is unknown whether they are composed of other particles. 
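(As a small aside on the spin-based classification elaborated in the following paragraphs, the toy sketch below tags a few Standard Model particles as fermions or bosons; the class name `Particle` and the particular particles chosen are illustrative.)

```python
from dataclasses import dataclass
from fractions import Fraction

@dataclass
class Particle:
    name: str
    spin: Fraction      # in units of the reduced Planck constant
    charge: Fraction    # in units of the elementary charge

    @property
    def statistics(self) -> str:
        # Half-integer spin -> fermion, integer spin -> boson.
        return "fermion" if self.spin % 1 == Fraction(1, 2) else "boson"

particles = [
    Particle("electron", Fraction(1, 2), Fraction(-1)),
    Particle("up quark", Fraction(1, 2), Fraction(2, 3)),
    Particle("photon", Fraction(1), Fraction(0)),
    Particle("Higgs boson", Fraction(0), Fraction(0)),
]
for p in particles:
    print(f"{p.name:>11}: spin {p.spin}, charge {p.charge}, {p.statistics}")
```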
They are the fundamental objects of quantum field theory. Many families and sub-families of elementary particles exist. Elementary particles are classified according to their spin. Fermions have half-integer spin while bosons have integer spin. All the particles of the Standard Model have been experimentally observed, including the Higgs boson in 2012. Many other hypothetical elementary particles, such as the graviton, have been proposed, but not observed experimentally. Fermions Fermions are one of the two fundamental classes of particles, the other being bosons. Fermion particles are described by Fermi–Dirac statistics and have quantum numbers described by the Pauli exclusion principle. They include the quarks and leptons, as well as any composite particles consisting of an odd number of these, such as all baryons and many atoms and nuclei. Fermions have half-integer spin; for all known elementary fermions this is . All known fermions except neutrinos, are also Dirac fermions; that is, each known fermion has its own distinct antiparticle. It is not known whether the neutrino is a Dirac fermion or a Majorana fermion. Fermions are the basic building blocks of all matter. They are classified according to whether they interact via the strong interaction or not. In the Standard Model, there are 12 types of elementary fermions: six quarks and six leptons. Quarks Quarks are the fundamental constituents of hadrons and interact via the strong force. Quarks are the only known carriers of fractional charge, but because they combine in groups of three quarks (baryons) or in pairs of one quark and one antiquark (mesons), only integer charge is observed in nature. Their respective antiparticles are the antiquarks, which are identical except th" https://en.wikipedia.org/wiki/Mathematical%20object,"A mathematical object is an abstract concept arising in mathematics. In the usual language of mathematics, an object is anything that has been (or could be) formally defined, and with which one may do deductive reasoning and mathematical proofs. Typically, a mathematical object can be a value that can be assigned to a variable, and therefore can be involved in formulas. Commonly encountered mathematical objects include numbers, sets, functions, expressions, geometric objects, transformations of other mathematical objects, and spaces. Mathematical objects can be very complex; for example, theorems, proofs, and even theories are considered as mathematical objects in proof theory. The ontological status of mathematical objects has been the subject of much investigation and debate by philosophers of mathematics. List of mathematical objects by branch Number theory numbers, operations Combinatorics permutations, derangements, combinations Set theory sets, set partitions functions, and relations Geometry points, lines, line segments, polygons (triangles, squares, pentagons, hexagons, ...), circles, ellipses, parabolas, hyperbolas, polyhedra (tetrahedrons, cubes, octahedrons, dodecahedrons, icosahedrons), spheres, ellipsoids, paraboloids, hyperboloids, cylinders, cones. Graph theory graphs, trees, nodes, edges Topology topological spaces and manifolds. Linear algebra scalars, vectors, matrices, tensors. Abstract algebra groups, rings, modules, fields, vector spaces, group-theoretic lattices, and order-theoretic lattices. Categories are simultaneously homes to mathematical objects and mathematical objects in their own right. In proof theory, proofs and theorems are also mathematical objects. 
See also Abstract object Mathematical structure" https://en.wikipedia.org/wiki/Back-and-forth%20method,"In mathematical logic, especially set theory and model theory, the back-and-forth method is a method for showing isomorphism between countably infinite structures satisfying specified conditions. In particular it can be used to prove that any two countably infinite densely ordered sets (i.e., linearly ordered in such a way that between any two members there is another) without endpoints are isomorphic. An isomorphism between linear orders is simply a strictly increasing bijection. This result implies, for example, that there exists a strictly increasing bijection between the set of all rational numbers and the set of all real algebraic numbers. any two countably infinite atomless Boolean algebras are isomorphic to each other. any two equivalent countable atomic models of a theory are isomorphic. the Erdős–Rényi model of random graphs, when applied to countably infinite graphs, almost surely produces a unique graph, the Rado graph. any two many-complete recursively enumerable sets are recursively isomorphic. Application to densely ordered sets As an example, the back-and-forth method can be used to prove Cantor's isomorphism theorem, although this was not Georg Cantor's original proof. This theorem states that two unbounded countable dense linear orders are isomorphic. Suppose that (A, ≤A) and (B, ≤B) are linearly ordered sets; They are both unbounded, in other words neither A nor B has either a maximum or a minimum; They are densely ordered, i.e. between any two members there is another; They are countably infinite. Fix enumerations (without repetition) of the underlying sets: A = { a1, a2, a3, ... }, B = { b1, b2, b3, ... }. Now we construct a one-to-one correspondence between A and B that is strictly increasing. Initially no member of A is paired with any member of B. (1) Let i be the smallest index such that ai is not yet paired with any member of B. Let j be some index such that bj is not yet paired with any member of A and ai can be paired wi" https://en.wikipedia.org/wiki/Correlation%20coefficient,"A correlation coefficient is a numerical measure of some type of correlation, meaning a statistical relationship between two variables. The variables may be two columns of a given data set of observations, often called a sample, or two components of a multivariate random variable with a known distribution. Several types of correlation coefficient exist, each with their own definition and own range of usability and characteristics. They all assume values in the range from −1 to +1, where ±1 indicates the strongest possible agreement and 0 the strongest possible disagreement. As tools of analysis, correlation coefficients present certain problems, including the propensity of some types to be distorted by outliers and the possibility of incorrectly being used to infer a causal relationship between the variables (for more, see Correlation does not imply causation). Types There are several different measures for the degree of correlation in data, depending on the kind of data: principally whether the data is a measurement, ordinal, or categorical. Pearson The Pearson product-moment correlation coefficient, also known as , , or Pearson's , is a measure of the strength and direction of the linear relationship between two variables that is defined as the covariance of the variables divided by the product of their standard deviations. 
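A minimal Python sketch of this definition, computing the sample coefficient directly as the covariance divided by the product of the standard deviations; the two data lists are made-up illustrative values, not taken from any particular dataset, and no statistics library is assumed.

# Sample Pearson correlation coefficient, computed from scratch:
# r = cov(x, y) / (std(x) * std(y)). The data below are illustrative only.
import math

def pearson_r(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / n
    std_x = math.sqrt(sum((a - mean_x) ** 2 for a in x) / n)
    std_y = math.sqrt(sum((b - mean_y) ** 2 for b in y) / n)
    return cov / (std_x * std_y)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
print(pearson_r(x, y))  # close to +1 for this nearly linear pair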
This is the best-known and most commonly used type of correlation coefficient. When the term ""correlation coefficient"" is used without further qualification, it usually refers to the Pearson product-moment correlation coefficient. Intra-class Intraclass correlation (ICC) is a descriptive statistic that can be used, when quantitative measurements are made on units that are organized into groups; it describes how strongly units in the same group resemble each other. Rank Rank correlation is a measure of the relationship between the rankings of two variables, or two rankings of the same variable: Spearman's rank correlation coefficient is " https://en.wikipedia.org/wiki/Motronic,"Motronic is the trade name given to a range of digital engine control units developed by Robert Bosch GmbH (commonly known as Bosch) which combined control of fuel injection and ignition in a single unit. By controlling both major systems in a single unit, many aspects of the engine's characteristics (such as power, fuel economy, drivability, and emissions) can be improved. Motronic 1.x Motronic M1.x is powered by various i8051 derivatives made by Siemens, usually SAB80C515 or SAB80C535. Code/data is stored in DIL or PLCC EPROM and ranges from 32k to 128k. 1.0 Often known as ""Motronic basic"", Motronic ML1.x was one of the first digital engine-management systems developed by Bosch. These early Motronic systems integrated the spark timing element with then-existing Jetronic fuel injection technology. It was originally developed and first used in the BMW 7 Series, before being implemented on several Volvo and Porsche engines throughout the 1980s. The components of the Motronic ML1.x systems for the most part remained unchanged during production, although there are some differences in certain situations. The engine control module (ECM) receives information regarding engine speed, crankshaft angle, coolant temperature and throttle position. An air flow meter also measures the volume of air entering the induction system. If the engine is naturally aspirated, an air temperature sensor is located in the air flow meter to work out the air mass. However, if the engine is turbocharged, an additional charge air temperature sensor is used to monitor the temperature of the inducted air after it has passed through the turbocharger and intercooler, in order to accurately and dynamically calculate the overall air mass. Main system characteristics Fuel delivery, ignition timing, and dwell angle incorporated into the same control unit. Crank position and engine speed is determined by a pair of sensors reading from the flywheel. Separate constant idle speed system monitors and re" https://en.wikipedia.org/wiki/Timeout%20%28computing%29,"In telecommunications and related engineering (including computer networking and programming), the term timeout or time-out has several meanings, including: A network parameter related to an enforced event designed to occur at the conclusion of a predetermined elapsed time. A specified period of time that will be allowed to elapse in a system before a specified event is to take place, unless another specified event occurs first; in either case, the period is terminated when either event takes place. Note: A timeout condition can be canceled by the receipt of an appropriate time-out cancellation signal. An event that occurs at the end of a predetermined period of time that began at the occurrence of another specified event. The timeout can be prevented by an appropriate signal. 
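A minimal Python sketch of this idea using the standard socket module; the host, port and five-second limit are arbitrary illustrative choices. If the peer stalls, the blocking call is abandoned once the timeout period elapses instead of waiting indefinitely.

# Bounding a blocking network operation with a timeout.
import socket

try:
    with socket.create_connection(("example.com", 80), timeout=5.0) as sock:
        sock.settimeout(5.0)          # also bound each subsequent send/recv
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        data = sock.recv(1024)        # raises socket.timeout if the peer stalls
        print(data[:60])
except socket.timeout:
    print("operation aborted after the timeout period elapsed")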
Timeouts allow for more efficient usage of limited resources without requiring additional interaction from the agent interested in the goods that cause the consumption of these resources. The basic idea is that in situations where a system must wait for something to happen, rather than waiting indefinitely, the waiting will be aborted after the timeout period has elapsed. This is based on the assumption that further waiting is useless, and some other action is necessary. Examples Specific examples include: In the Microsoft Windows and ReactOS command-line interfaces, the timeout command pauses the command processor for the specified number of seconds. In POP connections, the server will usually close a client connection after a certain period of inactivity (the timeout period). This ensures that connections do not persist forever, if the client crashes or the network goes down. Open connections consume resources, and may prevent other clients from accessing the same mailbox. In HTTP persistent connections, the web server saves opened connections (which consume CPU time and memory). The web client does not have to send an ""end of requests series"" signal. Connections are closed" https://en.wikipedia.org/wiki/Resource%20%28biology%29,"In biology and ecology, a resource is a substance or object in the environment required by an organism for normal growth, maintenance, and reproduction. Resources box can be consumed by one organism and, as a result, become unavailable to another organism. For plants key resources are light, nutrients, water, and place to grow. For animals key resources are food, water, and territory. Key resources for plants Terrestrial plants require particular resources for photosynthesis and to complete their life cycle of germination, growth, reproduction, and dispersal: Carbon dioxide Microsite (ecology) Nutrients Pollination Seed dispersal Soil Water Key resources for animals Animals require particular resources for metabolism and to complete their life cycle of gestation, birth, growth, and reproduction: Foraging Territory Water Resources and ecological processes Resource availability plays a central role in ecological processes: Carrying capacity Biological competition Liebig's law of the minimum Niche differentiation See also Abiotic component Biotic component Community ecology Ecology Population ecology Plant ecology size-asymmetric competition" https://en.wikipedia.org/wiki/Memory%20management%20controller%20%28Nintendo%29,"Multi-memory controllers or memory management controllers (MMC) are different kinds of special chips designed by various video game developers for use in Nintendo Entertainment System (NES) cartridges. These chips extend the capabilities of the original console and make it possible to create NES games with features the original console cannot offer alone. The basic NES hardware supports only 40KB of ROM total, up to 32KB PRG and 8KB CHR, thus only a single tile and sprite table are possible. This limit was rapidly reached within the Famicom's first two years on the market and game developers began requesting a way to expand the console's capabilities. In the emulation community these chips are also known as mappers. List of MMC chips CNROM Manufacturer: Nintendo Games: Gradius, Ghostbusters, Gyruss, Arkanoid CNROM is the earliest banking hardware introduced on the Famicom, appearing in early 1986. It consists of a single 7400 series discrete logic chip. CNROM supports a single fixed PRG bank and up to eight CHR banks for 96KB total ROM. 
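A simplified, emulator-style Python sketch of CNROM-style CHR bank switching; the class name and layout are hypothetical, and the model ignores timing and bus details, keeping only the idea that a CPU write to the cartridge address range latches which 8 KB CHR bank the PPU subsequently reads, while the PRG bank stays fixed.

# Illustrative model of CHR bank selection (not a cycle-accurate description
# of the real discrete-logic board).
CHR_BANK_SIZE = 8 * 1024

class CNROMStyleMapper:
    def __init__(self, prg_rom, chr_rom):
        self.prg_rom = prg_rom              # fixed program ROM
        self.chr_rom = chr_rom              # concatenated CHR banks
        self.num_banks = len(chr_rom) // CHR_BANK_SIZE
        self.selected_bank = 0

    def cpu_write(self, address, value):
        if 0x8000 <= address <= 0xFFFF:     # write into ROM area = bank select
            self.selected_bank = value % self.num_banks

    def ppu_read(self, address):            # pattern-table fetch, $0000-$1FFF
        base = self.selected_bank * CHR_BANK_SIZE
        return self.chr_rom[base + (address & 0x1FFF)]

# Toy usage: four 8 KB banks, each filled with its own bank index.
chr_rom = bytes(b for b in range(4) for _ in range(CHR_BANK_SIZE))
mapper = CNROMStyleMapper(prg_rom=bytes(32 * 1024), chr_rom=chr_rom)
mapper.cpu_write(0x8000, 2)
print(mapper.ppu_read(0x0000))  # 2 -> data now comes from the third bank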
Some third party variations supported additional capabilities. Many CNROM games store the game level data in the CHR ROM and blank the screen while reading it. UNROM Manufacturer: Nintendo Games: Pro Wrestling, Ikari Warriors, Mega Man, Contra, Castlevania Early NES mappers are composed of 7400 series discrete logic chips. UNROM appeared in late 1986. It supports a single fixed 16KB PRG bank, the rest of the PRG being switchable. Instead of a dedicated ROM chip to hold graphics data (called CHR by Nintendo), games using UNROM store graphics data on the program ROM and copy it to a RAM on the cartridge at run time. MMC1 Manufacturer: Nintendo Games: The Legend of Zelda, Mega Man 2, Metroid, Godzilla: Monster of Monsters, Teenage Mutant Ninja Turtles, and more. The MMC1 is Nintendo's first custom MMC integrated circuit to incorporate support for saved games and multi-directional scrolling configurations. The chip comes in" https://en.wikipedia.org/wiki/Biology%20by%20Team,"Biology by Team in German Biologie im Team - is the first Austrian biology contest for upper secondary schools. Students at upper secondary schools who are especially interested in biology can deepen their knowledge and broaden their competence in experimental biology within the framework of this contest. Each year, a team of teachers choose modules of key themes on which students work in the form of a voluntary exercise. The evaluation focuses in particular on the practical work, and, since the school year 2004/05, also on teamwork. In April, a two-day closing competition takes place, in which six groups of students from participating schools are given various problems to solve. A jury (persons from the science and corporate communities) evaluate the results and how they are presented. The concept was developed by a team of teachers in co-operation with the AHS (Academic Secondary Schools) - Department of the Pedagogical Institute in Carinthia. Since 2008 it is situated at the Science departement of the University College of Teacher Training Carinthia. The first contest in the school year 2002/03 took place under the motto: Hell is loose in the ground under us. Other themes included Beautiful but dangerous, www-worldwide water 1 and 2, Expedition forest, Relationship boxes, Mole's view, Biological timetravel, Biology at the University, Ecce Homo, Biodiversity, Death in tin cans, Sex sells, Without a trace, Biologists see more, Quo vadis biology? , Biology without limits?, Diversity instead of simplicity, Grid square, Diversity instead of simplicity 0.2, www-worldwide water 3.The theme for the year 2023/24 is I hear something you don't see. Till now the following schools were participating: BG/BRG Mössingerstraße Klagenfurt Ingeborg-Bachmann-Gymnasium, Klagenfurt BG/BRG St. Martinerstraße Villach BG/BRG Peraustraße Villach International school Carinthia, Velden Österreichisches Gymnasium Prag Europagymnasium Klagenfurt BRG Viktring Klagenfurt BORG Wo" https://en.wikipedia.org/wiki/Locus%20suicide%20recombination,Locus suicide recombination (LSR) constitutes a variant form of class switch recombination that eliminates all immunoglobulin heavy chain constant genes. It thus terminates immunoglobulin and B-cell receptor (BCR) expression in B-lymphocytes and results in B-cell death since survival of such cells requires BCR expression. This process is initiated by the enzyme activation-induced deaminase upon B-cell activation. 
LSR is thus one of the pathways that can result into activation-induced cell death in the B-cell lineage. https://en.wikipedia.org/wiki/List%20of%20common%20coordinate%20transformations,"This is a list of some of the most commonly used coordinate transformations. 2-dimensional Let be the standard Cartesian coordinates, and the standard polar coordinates. To Cartesian coordinates From polar coordinates From log-polar coordinates By using complex numbers , the transformation can be written as That is, it is given by the complex exponential function. From bipolar coordinates From 2-center bipolar coordinates From Cesàro equation To polar coordinates From Cartesian coordinates Note: solving for returns the resultant angle in the first quadrant (). To find one must refer to the original Cartesian coordinate, determine the quadrant in which lies (for example, (3,−3) [Cartesian] lies in QIV), then use the following to solve for The value for must be solved for in this manner because for all values of , is only defined for , and is periodic (with period ). This means that the inverse function will only give values in the domain of the function, but restricted to a single period. Hence, the range of the inverse function is only half a full circle. Note that one can also use From 2-center bipolar coordinates Where 2c is the distance between the poles. To log-polar coordinates from Cartesian coordinates Arc-length and curvature In Cartesian coordinates In polar coordinates 3-dimensional Let (x, y, z) be the standard Cartesian coordinates, and (ρ, θ, φ) the spherical coordinates, with θ the angle measured away from the +Z axis (as , see conventions in spherical coordinates). As φ has a range of 360° the same considerations as in polar (2 dimensional) coordinates apply whenever an arctangent of it is taken. θ has a range of 180°, running from 0° to 180°, and does not pose any problem when calculated from an arccosine, but beware for an arctangent. If, in the alternative definition, θ is chosen to run from −90° to +90°, in opposite direction of the earlier definition, it can be found uniquely from an arcsine, but beware of an arccota" https://en.wikipedia.org/wiki/Least-squares%20spectral%20analysis,"Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in the long and gapped records; LSSA mitigates such problems. Unlike in Fourier analysis, data need not be equally spaced to use LSSA. Developed in 1969 and 1971, LSSA is also known as the Vaníček method and the Gauss-Vaniček method after Petr Vaníček, and as the Lomb method or the Lomb–Scargle periodogram, based on the simplifications first by Nicholas R. Lomb and then by Jeffrey D. Scargle. Historical background The close connections between Fourier analysis, the periodogram, and the least-squares fitting of sinusoids have been known for a long time. However, most developments are restricted to complete data sets of equally spaced samples. In 1963, Freek J. M. 
Barning of Mathematisch Centrum, Amsterdam, handled unequally spaced data by similar techniques, including both a periodogram analysis equivalent to what nowadays is called the Lomb method and least-squares fitting of selected frequencies of sinusoids determined from such periodograms — and connected by a procedure known today as the matching pursuit with post-back fitting or the orthogonal matching pursuit. Petr Vaníček, a Canadian geophysicist and geodesist of the University of New Brunswick, proposed in 1969 also the matching-pursuit approach for equally and unequally spaced data, which he called ""successive spectral analysis"" and the result a ""least-squares periodogram"". He generalized this method to account for any systematic components beyond a simple mean, such as a ""predicted linear (quadratic, exponential, ...) secular trend of unknown magnitude"", and applied it to a variety of samples, in 1971. Vaníček's strictly least-squares method was then simplified in 1976 by Nicholas R. Lomb of the University of Sydney, who pointed out i" https://en.wikipedia.org/wiki/Perceptual%20control%20theory,"Perceptual control theory (PCT) is a model of behavior based on the properties of negative feedback control loops. A control loop maintains a sensed variable at or near a reference value by means of the effects of its outputs upon that variable, as mediated by physical properties of the environment. In engineering control theory, reference values are set by a user outside the system. An example is a thermostat. In a living organism, reference values for controlled perceptual variables are endogenously maintained. Biological homeostasis and reflexes are simple, low-level examples. The discovery of mathematical principles of control introduced a way to model a negative feedback loop closed through the environment (circular causation), which spawned perceptual control theory. It differs fundamentally from some models in behavioral and cognitive psychology that model stimuli as causes of behavior (linear causation). PCT research is published in experimental psychology, neuroscience, ethology, anthropology, linguistics, sociology, robotics, developmental psychology, organizational psychology and management, and a number of other fields. PCT has been applied to design and administration of educational systems, and has led to a psychotherapy called the method of levels. Principles and differences from other theories The perceptual control theory is deeply rooted in biological cybernetics, systems biology and control theory and the related concept of feedback loops. Unlike some models in behavioral and cognitive psychology it sets out from the concept of circular causality. It shares, therefore, its theoretical foundation with the concept of plant control, but it is distinct from it by emphasizing the control of the internal representation of the physical world. The plant control theory focuses on neuro-computational processes of movement generation, once a decision for generating the movement has been taken. PCT spotlights the embeddedness of agents in their environment" https://en.wikipedia.org/wiki/Hermite%20constant,"In mathematics, the Hermite constant, named after Charles Hermite, determines how long a shortest element of a lattice in Euclidean space can be. The constant γn for integers n > 0 is defined as follows. For a lattice L in Euclidean space Rn with unit covolume, i.e. vol(Rn/L) = 1, let λ1(L) denote the least length of a nonzero element of L. 
Then is the maximum of λ1(L) over all such lattices L. The square root in the definition of the Hermite constant is a matter of historical convention. Alternatively, the Hermite constant γn can be defined as the square of the maximal systole of a flat n-dimensional torus of unit volume. Example The Hermite constant is known in dimensions 1–8 and 24. For n = 2, one has γ2 = . This value is attained by the hexagonal lattice of the Eisenstein integers. Estimates It is known that A stronger estimate due to Hans Frederick Blichfeldt is where is the gamma function. See also Loewner's torus inequality" https://en.wikipedia.org/wiki/Unsaturated%20fat,"An unsaturated fat is a fat or fatty acid in which there is at least one double bond within the fatty acid chain. A fatty acid chain is monounsaturated if it contains one double bond, and polyunsaturated if it contains more than one double bond. A saturated fat has no carbon to carbon double bonds, so the maximum possible number of hydrogens bonded to the carbons, and is ""saturated"" with hydrogen atoms. To form carbon to carbon double bonds, hydrogen atoms are removed from the carbon chain. In cellular metabolism, unsaturated fat molecules contain less energy (i.e., fewer calories) than an equivalent amount of saturated fat. The greater the degree of unsaturation in a fatty acid (i.e., the more double bonds in the fatty acid) the more vulnerable it is to lipid peroxidation (rancidity). Antioxidants can protect unsaturated fat from lipid peroxidation. Composition of common fats In chemical analysis, fats are broken down to their constituent fatty acids, which can be analyzed in various ways. In one approach, fats undergo transesterification to give fatty acid methyl esters (FAMEs), which are amenable to separation and quantitation using by gas chromatography. Classically, unsaturated isomers were separated and identified by argentation thin-layer chromatography. The saturated fatty acid components are almost exclusively stearic (C18) and palmitic acids (C16). Monounsaturated fats are almost exclusively oleic acid. Linolenic acid comprises most of the triunsaturated fatty acid component. Chemistry and nutrition Although polyunsaturated fats are protective against cardiac arrhythmias, a study of post-menopausal women with a relatively low fat intake showed that polyunsaturated fat is positively associated with progression of coronary atherosclerosis, whereas monounsaturated fat is not. This probably is an indication of the greater vulnerability of polyunsaturated fats to lipid peroxidation, against which vitamin E has been shown to be protective. Examples " https://en.wikipedia.org/wiki/Ultra-processed%20food,"Ultra-processed food (UPF) is an industrially formulated edible substance derived from natural food or synthesized from other organic compounds. The resulting products are designed to be highly profitable, convenient, and hyperpalatable, often through food additives such as preservatives, colourings, and flavourings. The state of research into ultra-processed foods and their effects is evolving rapidly as of 2023. Epidemiological data suggest that consumption of ultra-processed foods is associated with higher risks of certain diseases, including obesity, type 2 diabetes, cardiovascular diseases, and certain types of cancer. Researchers also present ultra-processing as a facet of environmental degradation caused by the food industry. Definitions Concerns around food processing have existed since at least the Industrial Revolution. 
Many critics identified 'processed food' as problematic, and movements such as raw foodism attempted to eschew food processing entirely, but since even basic cookery results in processed food, this concept failed in itself to influence public policy surrounding the epidemiology of obesity. Michael Pollan's influential book The Omnivore's Dilemma referred to highly processed industrial food as 'edible food-like substances'. Carlos Augusto Monteiro cited Pollan as an influence in coining the term 'ultra-processed food' in a 2009 commentary. Monteiro's team developed the Nova classification for grouping unprocessed and processed foods beginning in 2010, whose definition of ultra-processing has become most widely accepted and has gradually become more refined through successive publications. The identification of ultra-processed foods, as well as the category itself, is a subject of debate among nutrition and public health scientists, and other definitions have been proposed. A survey of systems for classifying levels of food processing in 2021 identified four 'defining themes': Extent of change (from natural state); Nature of change (p" https://en.wikipedia.org/wiki/Floor%20and%20ceiling%20functions,"In mathematics and computer science, the floor function is the function that takes as input a real number , and gives as output the greatest integer less than or equal to , denoted or . Similarly, the ceiling function maps to the least integer greater than or equal to , denoted or . For example, for floor: , , and for ceiling: , and . Historically, the floor of has been–and still is–called the integral part or integer part of , often denoted (as well as a variety of other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers. For an integer, . Although and produce graphs that appear exactly alike, they are not the same when the value of x is an exact integer. For example, when =2.0001; . However, if =2, then , while . Notation The integral part or integer part of a number ( in the original) was first defined in 1798 by Adrien-Marie Legendre in his proof of the Legendre's formula. Carl Friedrich Gauss introduced the square bracket notation in his third proof of quadratic reciprocity (1808). This remained the standard in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names ""floor"" and ""ceiling"" and the corresponding notations and . (Iverson used square brackets for a different purpose, the Iverson bracket notation.) Both notations are now used in mathematics, although Iverson's notation will be followed in this article. In some sources, boldface or double brackets are used for floor, and reversed brackets or for ceiling. The fractional part is the sawtooth function, denoted by for real and defined by the formula For all x, . These characters are provided in Unicode: In the LaTeX typesetting system, these symbols can be specified with the and commands in math mode, and extended in size using and as needed. Some authors define as the round-toward-zero function, so and , and call i" https://en.wikipedia.org/wiki/List%20of%20types%20of%20functions,"In mathematics, functions can be identified according to the properties they have. These properties describe the functions' behaviour under certain conditions. A parabola is a specific type of function. Relative to set theory These properties concern the domain, the codomain and the image of functions. 
Injective function: has a distinct value for each distinct input. Also called an injection or, sometimes, one-to-one function. In other words, every element of the function's codomain is the image of at most one element of its domain. Surjective function: has a preimage for every element of the codomain, that is, the codomain equals the image. Also called a surjection or onto function. Bijective function: is both an injection and a surjection, and thus invertible. Identity function: maps any given element to itself. Constant function: has a fixed value regardless of its input. Empty function: whose domain equals the empty set. Set function: whose input is a set. Choice function called also selector or uniformizing function: assigns to each set one of its elements. Relative to an operator (c.q. a group or other structure) These properties concern how the function is affected by arithmetic operations on its argument. The following are special examples of a homomorphism on a binary operation: Additive function: preserves the addition operation: f (x + y) = f (x) + f (y). Multiplicative function: preserves the multiplication operation: f (xy) = f (x)f (y). Relative to negation: Even function: is symmetric with respect to the Y-axis. Formally, for each x: f (x) = f (−x). Odd function: is symmetric with respect to the origin. Formally, for each x: f (−x) = −f (x). Relative to a binary operation and an order: Subadditive function: for which the value of f (x + y) is less than or equal to f (x) + f (y). Superadditive function: for which the value of f (x + y) i" https://en.wikipedia.org/wiki/T.C.%20Mits,"T.C. Mits (acronym for ""the celebrated man in the street""), is a term coined by Lillian Rosanoff Lieber to refer to an everyman. In Lieber's works, T.C. Mits was a character who made scientific topics more approachable to the public audience. The phrase has enjoyed sparse use by authors in fields such as molecular biology, secondary education, and general semantics. The Education of T.C. MITS Dr. Lillian Rosanoff Lieber wrote this treatise on mathematical thinking in twenty chapters. The writing took a form that resembled free-verse poetry, though Lieber included an introduction stating that the form was meant only to facilitate rapid reading, rather than emulate free-verse. Lieber's husband, a fellow professor at Long Island University, Hugh Gray Lieber, provided illustrations for the book. The title of the book was meant to emphasize that mathematics can be understood by anyone, which was further shown when a special ""Overseas edition for the Armed Forces"" was published in 1942, and approved by the Council on Books in Wartime to be sent to American troops fighting in World War II. See also John Doe The man on the Clapham omnibus" https://en.wikipedia.org/wiki/List%20of%20q-analogs,"This is a list of q-analogs in mathematics and related fields. 
Algebra Iwahori–Hecke algebra Quantum affine algebra Quantum enveloping algebra Quantum group Analysis Jackson integral q-derivative q-difference polynomial Quantum calculus Combinatorics LLT polynomial q-binomial coefficient q-Pochhammer symbol q-Vandermonde identity Orthogonal polynomials q-Bessel polynomials q-Charlier polynomials q-Hahn polynomials q-Jacobi polynomials: Big q-Jacobi polynomials Continuous q-Jacobi polynomials Little q-Jacobi polynomials q-Krawtchouk polynomials q-Laguerre polynomials q-Meixner polynomials q-Meixner–Pollaczek polynomials q-Racah polynomials Probability and statistics Gaussian q-distribution q-exponential distribution q-Weibull diribution Tsallis q-Gaussian Tsallis entropy Special functions Basic hypergeometric series Elliptic gamma function Hahn–Exton q-Bessel function Jackson q-Bessel function q-exponential q-gamma function q-theta function See also Lists of mathematics topics Q-analogs" https://en.wikipedia.org/wiki/Signal%20reconstruction,"In signal processing, reconstruction usually means the determination of an original continuous signal from a sequence of equally spaced samples. This article takes a generalized abstract mathematical approach to signal sampling and reconstruction. For a more practical approach based on band-limited signals, see Whittaker–Shannon interpolation formula. General principle Let F be any sampling method, i.e. a linear map from the Hilbert space of square-integrable functions to complex space . In our example, the vector space of sampled signals is n-dimensional complex space. Any proposed inverse R of F (reconstruction formula, in the lingo) would have to map to some subset of . We could choose this subset arbitrarily, but if we're going to want a reconstruction formula R that is also a linear map, then we have to choose an n-dimensional linear subspace of . This fact that the dimensions have to agree is related to the Nyquist–Shannon sampling theorem. The elementary linear algebra approach works here. Let (all entries zero, except for the kth entry, which is a one) or some other basis of . To define an inverse for F, simply choose, for each k, an so that . This uniquely defines the (pseudo-)inverse of F. Of course, one can choose some reconstruction formula first, then either compute some sampling algorithm from the reconstruction formula, or analyze the behavior of a given sampling algorithm with respect to the given formula. Ideally, the reconstruction formula is derived by minimizing the expected error variance. This requires that either the signal statistics is known or a prior probability for the signal can be specified. Information field theory is then an appropriate mathematical formalism to derive an optimal reconstruction formula. Popular reconstruction formulae Perhaps the most widely used reconstruction formula is as follows. Let be a basis of in the Hilbert space sense; for instance, one could use the eikonal , although other choices are" https://en.wikipedia.org/wiki/List%20of%20quasiparticles,This is a list of quasiparticles. https://en.wikipedia.org/wiki/Martingale%20difference%20sequence,"In probability theory, a martingale difference sequence (MDS) is related to the concept of the martingale. A stochastic series X is an MDS if its expectation with respect to the past is zero. Formally, consider an adapted sequence on a probability space . is an MDS if it satisfies the following two conditions: , and , for all . By construction, this implies that if is a martingale, then will be an MDS—hence the name. 
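A small Python simulation sketch of this property for the increments of a simple symmetric random walk, which is a martingale; the sample sizes and the chosen conditioning event (the sign of the past value) are arbitrary. The empirical mean of the next increment is close to zero regardless of the past, as expected for a martingale difference sequence.

# Increments of a symmetric random walk form an MDS: E[X_t | past] = 0.
import random

random.seed(0)
num_paths, t = 200_000, 20
pos, neg = [], []
for _ in range(num_paths):
    y = sum(random.choice((-1, 1)) for _ in range(t - 1))   # Y_{t-1}
    x = random.choice((-1, 1))                               # X_t = Y_t - Y_{t-1}
    (pos if y > 0 else neg).append(x)

print(sum(pos) / len(pos))   # both averages are close to 0
print(sum(neg) / len(neg))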
The MDS is an extremely useful construct in modern probability theory because it implies much milder restrictions on the memory of the sequence than independence, yet most limit theorems that hold for an independent sequence will also hold for an MDS. A special case of MDS, denoted as {Xt,t}0 is known as innovative sequence of Sn; where Sn and are corresponding to random walk and filtration of the random processes . In probability theory innovation series is used to emphasize the generality of Doob representation. In signal processing the innovation series is used to introduce Kalman filter. The main differences of innovation terminologies are in the applications. The later application aims to introduce the nuance of samples to the model by random sampling." https://en.wikipedia.org/wiki/Memory%20ordering,"Memory ordering describes the order of accesses to computer memory by a CPU. The term can refer either to the memory ordering generated by the compiler during compile time, or to the memory ordering generated by a CPU during runtime. In modern microprocessors, memory ordering characterizes the CPU's ability to reorder memory operations – it is a type of out-of-order execution. Memory reordering can be used to fully utilize the bus-bandwidth of different types of memory such as caches and memory banks. On most modern uniprocessors memory operations are not executed in the order specified by the program code. In single threaded programs all operations appear to have been executed in the order specified, with all out-of-order execution hidden to the programmer – however in multi-threaded environments (or when interfacing with other hardware via memory buses) this can lead to problems. To avoid problems, memory barriers can be used in these cases. Compile-time memory ordering Most programming languages have some notion of a thread of execution which executes statements in a defined order. Traditional compilers translate high-level expressions to a sequence of low-level instructions relative to a program counter at the underlying machine level. Execution effects are visible at two levels: within the program code at a high level, and at the machine level as viewed by other threads or processing elements in concurrent programming, or during debugging when using a hardware debugging aid with access to the machine state (some support for this is often built directly into the CPU or microcontroller as functionally independent circuitry apart from the execution core which continues to operate even when the core itself is halted for static inspection of its execution state). Compile-time memory order concerns itself with the former, and does not concern itself with these other views. General issues of program order Program-order effects of expression evaluation Durin" https://en.wikipedia.org/wiki/Free%20convolution,"Free convolution is the free probability analog of the classical notion of convolution of probability measures. Due to the non-commutative nature of free probability theory, one has to talk separately about additive and multiplicative free convolution, which arise from addition and multiplication of free random variables (see below; in the classical case, what would be the analog of free multiplicative convolution can be reduced to additive convolution by passing to logarithms of random variables). These operations have some interpretations in terms of empirical spectral measures of random matrices. The notion of free convolution was introduced by Dan-Virgil Voiculescu. 
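A Python sketch of the random-matrix interpretation mentioned above, under the assumption that Wigner-type symmetric matrices are an acceptable stand-in for rotation-invariant ensembles; the matrix size and random seed are arbitrary. Each matrix has an approximately semicircular spectrum of unit variance, and the spectrum of their sum approximates the free additive convolution, here a semicircle of doubled variance.

# Empirical spectra: semicircle freely convolved with semicircle.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def wigner(n):
    g = rng.normal(size=(n, n))
    return (g + g.T) / np.sqrt(2 * n)    # symmetric, spectrum ~ semicircle on [-2, 2]

a, b = wigner(n), wigner(n)
eig_a = np.linalg.eigvalsh(a)
eig_sum = np.linalg.eigvalsh(a + b)

print(np.std(eig_a) ** 2)     # ~1, variance of the standard semicircle law
print(np.std(eig_sum) ** 2)   # ~2, the freely convolved spectrum has doubled variance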
Free additive convolution Let and be two probability measures on the real line, and assume that is a random variable in a non commutative probability space with law and is a random variable in the same non commutative probability space with law . Assume finally that and are freely independent. Then the free additive convolution is the law of . Random matrices interpretation: if and are some independent by Hermitian (resp. real symmetric) random matrices such that at least one of them is invariant, in law, under conjugation by any unitary (resp. orthogonal) matrix and such that the empirical spectral measures of and tend respectively to and as tends to infinity, then the empirical spectral measure of tends to . In many cases, it is possible to compute the probability measure explicitly by using complex-analytic techniques and the R-transform of the measures and . Rectangular free additive convolution The rectangular free additive convolution (with ratio ) has also been defined in the non commutative probability framework by Benaych-Georges and admits the following random matrices interpretation. For , for and are some independent by complex (resp. real) random matrices such that at least one of them is invariant, in law, under multiplication on the left and on the r" https://en.wikipedia.org/wiki/Biomarker,"In biomedical contexts, a biomarker, or biological marker, is a measurable indicator of some biological state or condition. Biomarkers are often measured and evaluated using blood, urine, or soft tissues to examine normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention. Biomarkers are used in many scientific fields. Medicine Biomarkers used in the medical field, are a part of a relatively new clinical toolset categorized by their clinical applications. The four main classes are molecular, physiologic, histologic and radiographic biomarkers. All four types of biomarkers have a clinical role in narrowing or guiding treatment decisions and follow a sub-categorization of being either predictive, prognostic, or diagnostic. Predictive Predictive molecular, cellular, or imaging biomarkers that pass validation can serve as a method of predicting clinical outcomes. Predictive biomarkers are used to help optimize ideal treatments, and often indicate the likelihood of benefiting from a specific therapy. For example, molecular biomarkers situated at the interface of pathology-specific molecular process architecture and drug mechanism of action promise capturing aspects allowing assessment of an individual treatment response. This offers a dual approach to both seeing trends in retrospective studies and using biomarkers to predict outcomes. For example, in metastatic colorectal cancer predictive biomarkers can serve as a way of evaluating and improving patient survival rates and in the individual case by case scenario, they can serve as a way of sparing patients from needless toxicity that arises from cancer treatment plans. Common examples of predictive biomarkers are genes such as ER, PR and HER2/neu in breast cancer, BCR-ABL fusion protein in chronic myeloid leukaemia, c-KIT mutations in GIST tumours and EGFR1 mutations in NSCLC. 
Diagnostic Diagnostic biomarkers that meet a burden of proof can serve a role in narrowi" https://en.wikipedia.org/wiki/Glaisher%E2%80%93Kinkelin%20constant,"In mathematics, the Glaisher–Kinkelin constant or Glaisher's constant, typically denoted , is a mathematical constant, related to the -function and the Barnes -function. The constant appears in a number of sums and integrals, especially those involving gamma functions and zeta functions. It is named after mathematicians James Whitbread Lee Glaisher and Hermann Kinkelin. Its approximate value is: = ...   . The Glaisher–Kinkelin constant can be given by the limit: where is the hyperfactorial. This formula displays a similarity between and which is perhaps best illustrated by noting Stirling's formula: which shows that just as is obtained from approximation of the factorials, can also be obtained from a similar approximation to the hyperfactorials. An equivalent definition for involving the Barnes -function, given by where is the gamma function is: . The Glaisher–Kinkelin constant also appears in evaluations of the derivatives of the Riemann zeta function, such as: where is the Euler–Mascheroni constant. The latter formula leads directly to the following product found by Glaisher: An alternative product formula, defined over the prime numbers, reads where denotes the th prime number. The following are some integrals that involve this constant: A series representation for this constant follows from a series for the Riemann zeta function given by Helmut Hasse." https://en.wikipedia.org/wiki/Ringing%20artifacts,"In signal processing, particularly digital image processing, ringing artifacts are artifacts that appear as spurious signals near sharp transitions in a signal. Visually, they appear as bands or ""ghosts"" near edges; audibly, they appear as ""echos"" near transients, particularly sounds from percussion instruments; most noticeable are the pre-echos. The term ""ringing"" is because the output signal oscillates at a fading rate around a sharp transition in the input, similar to a bell after being struck. As with other artifacts, their minimization is a criterion in filter design. Introduction The main cause of ringing artifacts is due to a signal being bandlimited (specifically, not having high frequencies) or passed through a low-pass filter; this is the frequency domain description. In terms of the time domain, the cause of this type of ringing is the ripples in the sinc function, which is the impulse response (time domain representation) of a perfect low-pass filter. Mathematically, this is called the Gibbs phenomenon. One may distinguish overshoot (and undershoot), which occurs when transitions are accentuated – the output is higher than the input – from ringing, where after an overshoot, the signal overcorrects and is now below the target value; these phenomena often occur together, and are thus often conflated and jointly referred to as ""ringing"". The term ""ringing"" is most often used for ripples in the time domain, though it is also sometimes used for frequency domain effects: windowing a filter in the time domain by a rectangular function causes ripples in the frequency domain for the same reason as a brick-wall low pass filter (rectangular function in the frequency domain) causes ripples in the time domain, in each case the Fourier transform of the rectangular function being the sinc function. There are related artifacts caused by other frequency domain effects, and similar artifacts due to unrelated causes. 
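A small numerical Python sketch of the low-pass explanation above: an ideal "brick-wall" filter applied to a step produces overshoot and oscillation on both sides of the transition (the Gibbs phenomenon); the signal length and cutoff are arbitrary choices.

# Ringing from an ideal low-pass filter applied to a step.
import numpy as np

n = 512
step = np.zeros(n)
step[n // 2:] = 1.0                       # sharp transition in the middle

spectrum = np.fft.rfft(step)
cutoff = 20                               # keep only the lowest 20 frequency bins
spectrum[cutoff:] = 0.0
filtered = np.fft.irfft(spectrum, n)

print(filtered.max())                     # > 1: overshoot just after the edge
print(filtered[: n // 2].min())           # < 0: ringing before the edge ("pre-echo")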
Causes Description By definition, ringing occu" https://en.wikipedia.org/wiki/Lego%20Mindstorms%20EV3,"LEGO Mindstorms EV3 (stylized: LEGO MINDSTORMS EV3) is the third generation robotics kit in LEGO's Mindstorms line. It is the successor to the second generation LEGO Mindstorms NXT kit. The ""EV"" designation refers to the ""evolution"" of the Mindstorms product line. ""3"" refers to the fact that it is the third generation of computer modules - first was the RCX and the second is the NXT. It was officially announced on January 4, 2013, and was released in stores on September 1, 2013. The education edition was released on August 1, 2013. There are many competitions using this set, including the FIRST LEGO League Challenge and the World Robot Olympiad, sponsored by LEGO. After an announcement in October 2022, The Lego Group officially discontinued Lego Mindstorms at the end of 2022. Overview The biggest change from the LEGO Mindstorms NXT and NXT 2.0 to the EV3 is the technological advances in the programmable brick. The main processor of the NXT was an ARM7 microcontroller, whereas the EV3 has a more powerful ARM9 CPU running Linux. A USB connector and Micro SD slot (up to 32GB) are new to the EV3. It comes with the plans to build 5 different robots: EV3RSTORM, GRIPP3R, R3PTAR, SPIK3R, and TRACK3R. LEGO has also released instructions online to build 12 additional projects: ROBODOZ3R, BANNER PRINT3R, EV3MEG, BOBB3E, MR-B3AM, RAC3 TRUCK, KRAZ3, EV3D4, EL3CTRIC GUITAR, DINOR3X, WACK3M, and EV3GAME. It uses a program called LEGO Mindstorms EV3 Home Edition, which is developed by LabVIEW, to write code using blocks instead of lines. However it can also be programmed on the actual robot and saved. MicroPython support has been recently added. The EV3 Home (31313) set consists of: 1 EV3 programmable brick, 2 Large Motors, 1 Medium Motor, 1 Touch Sensor, 1 Color Sensor, 1 Infrared Sensor, 1 Remote Control, cables, USB cable, and 585 TECHNIC elements. The Education EV3 Core Set (45544) set consists of: 1 EV3 programmable brick, 2 Large Motors, 1 Medium Motor, 2 Touch Sensors, " https://en.wikipedia.org/wiki/Mathematical%20fiction,"Mathematical fiction is a genre of creative fictional work in which mathematics and mathematicians play important roles. The form and the medium of the works are not important. The genre may include poems, short stories, novels or plays; comic books; films, videos, or audios. One of the earliest, and much studied, work of this genre is Flatland: A Romance of Many Dimensions, an 1884 satirical novella by the English schoolmaster Edwin Abbott Abbott. Mathematical fiction may have existed since ancient times, but it was recently rediscovered as a genre of literature; since then there has been a growing body of literature in this genre, and the genre has attracted a growing body of readers. For example, Abbot's Flatland spawned a sequel in the 21st century: a novel titled Flatterland, authored by Ian Stewart and published in 2001. A database of mathematical fiction Alex Kasman, a Professor of Mathematics at College of Charleston, who maintains a database of works that could possibly be included in this genre, has a broader definition for the genre: Any work ""containing mathematics or mathematicians"" has been treated as mathematical fiction. Accordingly, Gulliver's Travels by Jonathan Swift, War and Peace by Lev Tolstoy, Mrs. 
Warren's Profession by George Bernard Shaw, and several similar literary works appear in Kasman's database because these works contain references to mathematics or mathematicians, even though mathematics and mathematicians are not important in their plots. According to this broader approach, the oldest extant work of mathematical fiction is The Birds, a comedy by the Ancient Greek playwright Aristophanes performed in 414 BCE. Kasman's database has a list of more than one thousand items of diverse categories like literature, comic books and films. Some works of mathematical fiction The top ten results turned up by a search of the website of Mathematical Association of America using the keywords ""mathematical fiction"" contained references to the fol" https://en.wikipedia.org/wiki/Potentiostat,"A potentiostat is the electronic hardware required to control a three electrode cell and run most electroanalytical experiments. A Bipotentiostat and polypotentiostat are potentiostats capable of controlling two working electrodes and more than two working electrodes, respectively. The system functions by maintaining the potential of the working electrode at a constant level with respect to the reference electrode by adjusting the current at an auxiliary electrode. The heart of the different potentiostatic electronic circuits is an operational amplifier (op amp). It consists of an electric circuit which is usually described in terms of simple op amps. Primary use This equipment is fundamental to modern electrochemical studies using three electrode systems for investigations of reaction mechanisms related to redox chemistry and other chemical phenomena. The dimensions of the resulting data depend on the experiment. In voltammetry, electric current in amps is plotted against electric potential in voltage. In a bulk electrolysis total coulombs passed (total electric charge) is plotted against time in seconds even though the experiment measures electric current (amperes) over time. This is done to show that the experiment is approaching an expected number of coulombs. Most early potentiostats could function independently, providing data output through a physical data trace. Modern potentiostats are designed to interface with a personal computer and operate through a dedicated software package. The automated software allows the user rapidly to shift between experiments and experimental conditions. The computer allows data to be stored and analyzed more effectively, rapidly, and accurately than the earlier standalone devices. Basic relationships A potentiostat is a control and measuring device. It comprises an electric circuit which controls the potential across the cell by sensing changes in its resistance, varying accordingly the current supplied " https://en.wikipedia.org/wiki/List%20of%20mathematical%20physics%20journals,"This is a list of peer-reviewed scientific journals published in the field of Mathematical Physics. 
Advances in Theoretical and Mathematical Physics Annales Henri Poincaré Communications in Mathematical Physics International Journal of Geometric Methods in Modern Physics Journal of Geometry and Physics Journal of Mathematical Physics Journal of Nonlinear Mathematical Physics Journal of Physics A: Mathematical and Theoretical Journal of Statistical Physics Letters in Mathematical Physics Reports on Mathematical Physics Reviews in Mathematical Physics International Journal of Physics and Mathematics SIGMA (Symmetry, Integrability and Geometry: Methods and Applications) Teoreticheskaya i Matematicheskaya Fizika (Theoretical and Mathematical Physics), Steklov Mathematical Institute" https://en.wikipedia.org/wiki/Random%20pulse-width%20modulation,"Random pulse-width modulation (RPWM) is a modulation technique introduced for mitigating electromagnetic interference (EMI) of power converters by spreading the energy of the noise signal over a wider bandwidth, so that there are no significant peaks of the noise. This is achieved by randomly varying the main parameters of the pulse-width modulation signal. Description Electromagnetic interference (EMI) filters have been widely used for filtering out the conducted emissions generated by power converters since their advent. However, when size is of great concern like in aircraft and automobile applications, one of the practical solutions to suppress conducted emissions is to use random pulse-width modulation (RPWM). In conventional pulse-width modulation (PWM) schemes, the harmonics power is concentrated on the deterministic or known frequencies with a significant magnitude, which leads to mechanical vibration, noise, and EMI. However, by applying randomness to the conventional PWM scheme, the harmonic power will spread out so that no harmonic of significant magnitude exists, and peak harmonics at discrete frequency are significantly reduced. In RPWM, one of the switching parameters of the PWM signal, such as switching frequency, pulse position and duty cycle are varied randomly in order to spread the energy of the PWM signal. Hence, depending on the parameter which is made random, RPWM can be classified as random frequency modulation (RFM), random pulse-position modulation (RPPM) and random duty-cycle modulation (RDCM). The properties of RPWM can be investigated further by looking at the power spectral density (PSD). For conventional PWM, the PSD can be directly determined from the Fourier Series expansion of the PWM signal. However, the PSD of the RPWM signals can be described only by a probabilistic level using the theory of stochastic processes such as wide-sense stationary (WSS) random processes. RFM Among the different RPWM techniques, RFM (random frequen" https://en.wikipedia.org/wiki/Consolidated%20Tape%20Association,"The Consolidated Tape Association (CTA) oversees the Securities Information Processor that disseminates real-time trade and quote information (market data) in New York Stock Exchange (NYSE) and American Stock Exchange (AMEX) listed securities (stocks and bonds). It is currently chaired by Emily Kasparov of the Chicago Stock Exchange, the first woman and the youngest chair elected to the position. CTA manages two Plans to govern the collection, processing and dissemination of trade and quote data: the Consolidated Tape Plan, which governs trades, and the Consolidated Quotation Plan, which governs quotes. 
The Plans were filed with and approved by the Securities and Exchange Commission (SEC) in accordance with Section 11A of the Securities Exchange Act of 1934. Since the late 1970s, all SEC-registered exchanges and market centers that trade NYSE or AMEX-listed securities send their trades and quotes to a central consolidator where the Consolidated Tape System (CTS) and Consolidated Quotation System (CQS) data streams are produced and distributed worldwide. The CTA is the operating authority for CQS and CTS. Participant exchanges The current Participants include: Cboe BZX Exchange (BZX) Cboe BYX Exchange (BYX) Cboe EDGX Exchange (EDGX) Cboe EDGA Exchange (EDGA) Financial Industry Regulatory Authority (FINRA) Nasdaq ISE (ISE) Nasdaq OMX BX (BSE) Nasdaq OMX PHLX (PHLX) Nasdaq Stock Market (NASDAQ) New York Stock Exchange (NYSE) NYSE Arca (ARCA) NYSE American (AMEX) NYSE Chicago (CHX) NYSE National (NSX) Acquisition and distribution of market data The New York Stock Exchange is the Administrator of Network A, which includes NYSE-listed securities, and the American Stock Exchange is the Administrator of Network B, which includes AMEX-listed securities. CTS and CQS receive trade and quote information, respectively from NYSE, AMEX, and the other regional market centers using a standard message format. Each system validates its respective message format, ve" https://en.wikipedia.org/wiki/Cyclomorphosis,"Cyclomorphosis (also known as seasonal polyphenism) is the name given to the occurrence of cyclic or seasonal changes in the phenotype of an organism through successive generations. In species undergoing cyclomorphosis, physiological characteristics and development cycles of individuals being born depend on the time of the year at which they are conceived. It occurs in small aquatic invertebrates that reproduce by parthenogenesis and give rise to several generations annually. It occurs especially in marine planktonic animals, and is thought to be caused by the epigenetic effect of environmental cues on the organism, thereby altering the course of their development." https://en.wikipedia.org/wiki/Corelis,"Corelis, Inc, a subsidiary of Electronic Warfare Associates, is a private American company categorized under Electronic Equipment & Supplies and based in Cerritos, California. History Corelis was incorporated in 1991 and initially provided engineering services primarily to the aerospace and defense industries. Corelis introduced their first JTAG boundary scan products in 1998. In 2006, Electronic Warfare Associates, Inc. (EWA) a global provider of technology and engineering services to the aerospace, defense and commercial industries, announced their acquisition of Corelis, Inc. In 2008, the appointment of George B. La Fever as Corelis President and CEO finalized the transition of Corelis, Inc. into EWA Technologies, Inc., a wholly owned subsidiary of the EWA corporate family of high technology companies. In May 2018, David Mason was appointed as Corelis President and CEO. Products Corelis offers two distinct types of products and services: Standard Products (Boundary Scan Test Systems and Development Tools); and Custom Test Systems and System Integration. Boundary Scan Corelis introduced their first (JTAG boundary scan products in 1998. Corelis offers boundary scan/JTAG software and hardware products. Its ScanExpress boundary scan systems are used for structural testing as well as JTAG functional emulation test and in-system programming of Flash memory, CPLDs, and FPGAs. 
a In 2007, Corelis released ScanExpress JET, a test tool that combines boundary scan and functional test (FCT) technologies for test coverage. Test Systems and Integration Systems are available for design and debugging, manufacturing test, and field service and support. A variety of system options are available including desktop solutions as well as portable solutions for use in the field with laptops. Corelis also provides engineering services, training, and customer support. Projects Between 1991 and 1998, Corelis offered engineering services and licensed HP technologies. Co" https://en.wikipedia.org/wiki/Cyber%20manufacturing,"Cyber manufacturing is a concept derived from cyber-physical systems (CPS) that refers to a modern manufacturing system that offers an information-transparent environment to facilitate asset management, provide reconfigurability, and maintain productivity. Compared with conventional experience-based management systems, cyber manufacturing provides an evidence-based environment to keep equipment users aware of networked asset status, and transfer raw data into possible risks and actionable information. Driving technologies include design of cyber-physical systems, combination of engineering domain knowledge and computer sciences, as well as information technologies. Among them, mobile applications for manufacturing is an area of specific interest to industries and academia. Motivation The idea of cyber manufacturing originates from the fact that Internet-enabled services have added business value in economic sectors such as retail, music, consumer products, transportation, and healthcare; however, compared to existing Internet-enabled sectors, manufacturing assets are less connected and less accessible in real-time. Besides, current manufacturing enterprises make decisions following a top-down approach: from overall equipment effectiveness to assignment of production requirements, without considering the condition of machines. This usually leads to inconsistency in operation management due to lack of linkage between factories, possible overstock in spare part inventory, as well as unexpected machine downtime. Such situation calls for connectivity between machines as a foundation, and analytics on top of that as a necessity to translate raw data into information that actually facilitates user decision making. Expected functionalities of cyber manufacturing systems include machine connectivity and data acquisition, machine health prognostics, fleet-based asset management, and manufacturing reconfigurability. Technology Several technologies are involved in developing" https://en.wikipedia.org/wiki/Woeseian%20revolution,"The Woeseian revolution was the progression of the phylogenetic tree of life concept from two main divisions, known as the Prokarya and Eukarya, into three domains now classified as Bacteria, Archaea, and Eukaryotes. The discovery of the new domain stemmed from the work of biophysicist Carl Woese in 1977 from a principle of evolutionary biology designated as Woese's dogma. It states that the evolution of ribosomal RNA (rRNA) was a necessary precursor to the evolution of modern life forms. Although the three-domain system has been widely accepted, the initial introduction of Woese’s discovery received criticism from the scientific community. 
Phylogenetic implications The basis of phylogenetics was limited by the technology of the time, which led to a greater dependence on phenotypic classification before advances that would allow for molecular organization methods. This was a major reason why the dichotomy of all living things, being either animal or plant in nature, was deemed an acceptable theory. Without truly understanding the genetic implication of each organismal classification in phylogenies via nucleic acid sequencing of shared molecular material, the phylogenetic tree of life and other such phylogenies would no doubt be incorrect. Woese’s advances in molecular sequencing and phylogenetic organization allowed for a better understanding of the three domains of life - the Bacteria, Archaea, and Eukaryotes. Regarding their varying types of shared rRNA, the small subunit rRNA was deemed as the best molecule to sequence to distinguish phylogenetic relationships because of its relatively small size, ease of isolation, and universal distribution. Controversy This reorganization caused an initial pushback: it wasn't accepted until nearly a decade after its publication. Possible factors that led to initial criticisms of his discovery included Woese's oligonucleotide cataloging, of which he was one of ""only two or three people in the world"" to be able to execute th" https://en.wikipedia.org/wiki/List%20of%20second%20moments%20of%20area,"The following is a list of second moments of area of some shapes. The second moment of area, also known as area moment of inertia, is a geometrical property of an area which reflects how its points are distributed with respect to an arbitrary axis. The unit of dimension of the second moment of area is length to fourth power, L4, and should not be confused with the mass moment of inertia. If the piece is thin, however, the mass moment of inertia equals the area density times the area moment of inertia. Second moments of area Please note that for the second moment of area equations in the below table: and Parallel axis theorem The parallel axis theorem can be used to determine the second moment of area of a rigid body about any axis, given the body's second moment of area about a parallel axis through the body's centroid, the area of the cross section, and the perpendicular distance (d) between the axes. See also List of moments of inertia List of centroids Second polar moment of area" https://en.wikipedia.org/wiki/Rapid%20prototyping,"Rapid prototyping is a group of techniques used to quickly fabricate a scale model of a physical part or assembly using three-dimensional computer aided design (CAD) data. Construction of the part or assembly is usually done using 3D printing or ""additive layer manufacturing"" technology. The first methods for rapid prototyping became available in mid 1987 and were used to produce models and prototype parts. Today, they are used for a wide range of applications and are used to manufacture production-quality parts in relatively small numbers if desired without the typical unfavorable short-run economics. This economy has encouraged online service bureaus. Historical surveys of RP technology start with discussions of simulacra production techniques used by 19th-century sculptors. Some modern sculptors use the progeny technology to produce exhibitions and various objects. The ability to reproduce designs from a dataset has given rise to issues of rights, as it is now possible to interpolate volumetric data from 2D images. 
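As a worked illustration of the parallel axis theorem stated in the second-moments entry above, the short sketch below (our own, with illustrative dimensions) computes the second moment of area of a rectangular cross-section about an axis offset from its centroid, I = I_centroid + A·d², and checks the result against a direct numerical integration.

```python
# Parallel axis theorem for a b x h rectangle about an axis a distance d
# from the centroidal axis: I = I_centroid + A * d**2.
b, h, d = 0.04, 0.10, 0.03              # metres (illustrative values)

i_centroid = b * h**3 / 12              # second moment about the centroidal axis
area = b * h
i_parallel_axis = i_centroid + area * d**2

# Independent check: integrate y'^2 dA directly, with y' measured from the
# shifted axis, by summing thin horizontal strips of the section.
n_strips = 200_000
dy = h / n_strips
i_direct = sum(
    (-h / 2 + (k + 0.5) * dy + d) ** 2 * b * dy for k in range(n_strips)
)

print(f"parallel axis theorem: {i_parallel_axis:.6e} m^4")
print(f"direct integration:    {i_direct:.6e} m^4")   # agrees with the theorem
```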
As with CNC subtractive methods, the computer-aided design – computer-aided manufacturing (CAD-CAM) workflow in the traditional rapid prototyping process starts with the creation of geometric data, either as a 3D solid using a CAD workstation, or 2D slices using a scanning device. For rapid prototyping this data must represent a valid geometric model; namely, one whose boundary surfaces enclose a finite volume, contain no holes exposing the interior, and do not fold back on themselves. In other words, the object must have an ""inside"". The model is valid if for each point in 3D space the computer can determine uniquely whether that point lies inside, on, or outside the boundary surface of the model. CAD post-processors will approximate the application vendors' internal CAD geometric forms (e.g., B-splines) with a simplified mathematical form, which in turn is expressed in a specified data format which is a common feature in additive manufacturing: STL " https://en.wikipedia.org/wiki/Home%20network,"A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment. Origin IPv4 address exhaustion has forced most Internet service providers to grant only a single WAN-facing IP address for each residential account. Multiple devices within a residence or small office are provisioned with internet access by establishing a local area network (LAN) for the local devices with IP addresses reserved for private networks. A network router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN by network address translation. Infrastructure devices Certain devices on a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices home-dwellers more directly interact with. Unlike their data center counterparts, these ""networking"" devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible: A gateway establishes physical and data link layer connectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain a cable, DSL, or optical modem bound to a network interface controller for Ethernet. Routers are often incorporated into these devices for additional convenience. A router establishes network layer connectivity between a WAN and the home network. It also performs the key function of network address translation that allows independently add" https://en.wikipedia.org/wiki/Mead%E2%80%93Conway%20VLSI%20chip%20design%20revolution,"The Mead–Conway VLSI chip design revolution, or Mead and Conway revolution, was a very-large-scale integration (VLSI) design revolution starting in 1978 which resulted in a worldwide restructuring of academic materials in computer science and electrical engineering education, and was paramount for the development of industries based on the application of microelectronics.
A prominent factor in promoting this design revolution throughout industry was the DARPA-funded VLSI Project instigated by Mead and Conway which spurred development of electronic design automation. Details When the integrated circuit was originally invented and commercialized, the initial chip designers were co-located with the physicists, engineers and factories that understood integrated circuit technology. At that time, fewer than 100 transistors would fit in an integrated circuit ""chip"". The design capability for such circuits was centered in industry, with universities struggling to catch up. Soon, the number of transistors which fit in a chip started doubling every year. (The doubling period later grew to two years.) Much more complex circuits could then fit on a single chip, but the device physicists who fabricated the chips were not experts in electronic circuit design, so their designs were limited more by their expertise and imaginations than by limitations in the technology. In 1978–79, when approximately 20,000 transistors could be fabricated in a single chip, Carver Mead and Lynn Conway wrote the textbook Introduction to VLSI Systems. It was published in 1979 and became a bestseller, since it was the first VLSI (Very Large Scale Integration) design textbook usable by non-physicists. (""In a self-aligned CMOS process, a transistor is formed wherever the gate layer ... crosses a diffusion layer."" from: Integrated circuit § Manufacturing) The authors intended the book to fill a gap in the literature and introduce electrical engineering and computer science students to integrated s" https://en.wikipedia.org/wiki/Prosection,"A prosection is the dissection of a cadaver (human or animal) or part of a cadaver by an experienced anatomist in order to demonstrate for students anatomic structure. In a dissection, students learn by doing; in a prosection, students learn by either observing a dissection being performed by an experienced anatomist or examining a specimen that has already been dissected by an experienced anatomist (etymology: Latin pro- ""before"" + sectio ""a cutting"") A prosection may also refer to the dissected cadaver or cadaver part which is then reassembled and provided to students for review. Use of prosections in medicine Prosections are used primarily in the teaching of anatomy in disciplines as varied as human medicine, chiropractic, veterinary medicine, and physical therapy. Prosections may also be used to teach surgical techniques (such as the suturing of skin), pathology, physiology, reproduction medicine and theriogenology, and other topics. The use of the prosection teaching technique is somewhat controversial in medicine. In the teaching of veterinary medicine, the goal is to ""create the best quality education ... while ensuring that animals are not used harmfully and that respect for animal life is engendered within the student."" Others have concluded that dissections and prosections have a negative impact on students' respect for patients and human life. Some scholars argue that while actual hands-on experience is essential, alternatives such as plastinated or freeze-dried cadavers are just as effective in the teaching of anatomy while dramatically reducing the number of cadavers or cadaver parts needed. Other alternatives such as instructional videos, plastic models, and printed materials also exist. 
Some studies find them equally effective as dissection or prosections, and some schools of human medicine in the UK have abandoned the use of cadavers entirely. But others question the usefulness of these alternatives, arguing dissection or prosection of cadavers ar" https://en.wikipedia.org/wiki/Norator,"In electronics, a norator is a theoretical linear, time-invariant one-port which can have an arbitrary current and voltage between its terminals. A norator represents a controlled voltage or current source with infinite gain. Inserting a norator in a circuit schematic provides whatever current and voltage the outside circuit demands, in particular, the demands of Kirchhoff's circuit laws. For example, the output of an ideal opamp behaves as a norator, producing nonzero output voltage and current that meet circuit requirements despite a zero input. A norator is often paired with a nullator to form a nullor. Two trivial cases are worth noting: A nullator in parallel with a norator is equivalent to a short (zero voltage any current) and a nullator in series with a norator is an open circuit (zero current, any voltage)." https://en.wikipedia.org/wiki/Access%20level,"In computer science and computer programming, access level denotes the set of permissions or restrictions provided to a data type. Reducing access level is an effective method for limiting failure modes, reducing debugging time, and simplifying overall system complexity. It restricts variable modification to only the methods defined within the interface to the class. Thus, it is incorporated into many fundamental software design patterns. In general, a given object cannot be created, read, updated or deleted by any function without having a sufficient access level. The two most common access levels are public and private, which denote, respectively; permission across the entire program scope, or permission only within the corresponding class. A third, protected, extends permissions to all subclasses of the corresponding class. Access levels modifiers are commonly used in Java as well as C#, which further provides the internal level. In C++, the only difference between a struct and a class is the default access level, which is private for classes and public for structs. To illustrate the benefit: consider a public variable which can be accessed from any part of a program. If an error occurs, the culprit could be within any portion of the program, including various sub-dependencies. In a large code base, this leads to thousands of potential sources. Alternatively, consider a private variable. Due to access restrictions, all modifications to its value must occur via functions defined within the class. Therefore, the error is structurally contained within the class. There is often only a single source file for each class, which means debugging only requires evaluation of a single file. With sufficient modularity and minimal access level, large code bases can avoid many challenges associated with complexity. Example: Bank Balance Class Retrieved from Java Coffee Break Q&A public class bank_balance { public String owner; private int balance; public bank_b" https://en.wikipedia.org/wiki/Asymptotic%20safety%20in%20quantum%20gravity,"Asymptotic safety (sometimes also referred to as nonperturbative renormalizability) is a concept in quantum field theory which aims at finding a consistent and predictive quantum theory of the gravitational field. 
Its key ingredient is a nontrivial fixed point of the theory's renormalization group flow which controls the behavior of the coupling constants in the ultraviolet (UV) regime and renders physical quantities safe from divergences. Although originally proposed by Steven Weinberg to find a theory of quantum gravity, the idea of a nontrivial fixed point providing a possible UV completion can be applied also to other field theories, in particular to perturbatively nonrenormalizable ones. In this respect, it is similar to quantum triviality. The essence of asymptotic safety is the observation that nontrivial renormalization group fixed points can be used to generalize the procedure of perturbative renormalization. In an asymptotically safe theory the couplings do not need to be small or tend to zero in the high energy limit but rather tend to finite values: they approach a nontrivial UV fixed point. The running of the coupling constants, i.e. their scale dependence described by the renormalization group (RG), is thus special in its UV limit in the sense that all their dimensionless combinations remain finite. This suffices to avoid unphysical divergences, e.g. in scattering amplitudes. The requirement of a UV fixed point restricts the form of the bare action and the values of the bare coupling constants, which become predictions of the asymptotic safety program rather than inputs. As for gravity, the standard procedure of perturbative renormalization fails since Newton's constant, the relevant expansion parameter, has negative mass dimension rendering general relativity perturbatively nonrenormalizable. This has driven the search for nonperturbative frameworks describing quantum gravity, including asymptotic safety which in contrast to other approaches is cha" https://en.wikipedia.org/wiki/6174,"The number 6174 is known as Kaprekar's constant after the Indian mathematician D. R. Kaprekar. This number is renowned for the following rule: Take any four-digit number, using at least two different digits (leading zeros are allowed). Arrange the digits in descending and then in ascending order to get two four-digit numbers, adding leading zeros if necessary. Subtract the smaller number from the bigger number. Go back to step 2 and repeat. The above process, known as Kaprekar's routine, will always reach its fixed point, 6174, in at most 7 iterations. Once 6174 is reached, the process will continue yielding 7641 – 1467 = 6174. For example, choose 1459: 9541 – 1459 = 8082 8820 – 0288 = 8532 8532 – 2358 = 6174 7641 – 1467 = 6174 The only four-digit numbers for which Kaprekar's routine does not reach 6174 are repdigits such as 1111, which give the result 0000 after a single iteration. All other four-digit numbers eventually reach 6174 if leading zeros are used to keep the number of digits at 4. For numbers with three identical numbers and a fourth number that is one number higher or lower (such as 2111), it is essential to treat 3-digit numbers with a leading zero; for example: 2111 – 1112 = 0999; 9990 – 999 = 8991; 9981 – 1899 = 8082; 8820 – 288 = 8532; 8532 – 2358 = 6174. Other ""Kaprekar's constants"" There can be analogous fixed points for digit lengths other than four; for instance, if we use 3-digit numbers, then most sequences (i.e., other than repdigits such as 111) will terminate in the value 495 in at most 6 iterations. Sometimes these numbers (495, 6174, and their counterparts in other digit lengths or in bases other than 10) are called ""Kaprekar constants"". 
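Before turning to the other properties of 6174, here is a small self-contained sketch (our own; the helper names are not from the article) of Kaprekar's routine as described above. It reproduces the worked example for 1459 and checks the claim that every four-digit number with at least two distinct digits reaches 6174 within seven iterations.

```python
def kaprekar_step(n: int, digits: int = 4) -> int:
    """One iteration of Kaprekar's routine, keeping leading zeros."""
    s = f"{n:0{digits}d}"
    descending = int("".join(sorted(s, reverse=True)))
    ascending = int("".join(sorted(s)))
    return descending - ascending

def iterations_to_6174(n: int) -> int:
    count = 0
    while n != 6174:
        n = kaprekar_step(n)
        count += 1
    return count

print(iterations_to_6174(1459))   # 3, matching the worked example above

# Repdigits (1111, 2222, ...) collapse to 0 instead, so they are excluded.
worst_case = max(
    iterations_to_6174(n)
    for n in range(1, 10000)
    if len(set(f"{n:04d}")) > 1
)
print("maximum number of iterations:", worst_case)   # 7
```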
Other properties 6174 is a 7-smooth number, i.e. none of its prime factors are greater than 7. 6174 can be written as the sum of the first three powers of 18: 18³ + 18² + 18¹ = 5832 + 324 + 18 = 6174, and coincidentally, 6 + 1 + 7 + 4 = 18. The sum of squares of the prime factors of 6174 is a" https://en.wikipedia.org/wiki/Steam%20infusion,"Steam Infusion is a direct-contact heating process in which steam condenses on the surface of a pumpable food product. Its primary use is for the gentle and rapid heating of a variety of food ingredients and products including milk, cream, soymilk, ketchup, soups and sauces. Unlike steam injection and traditional vesselled steam heating, the steam infusion process surrounds the liquid food product with steam as opposed to passing steam through the liquid. Steam Infusion allows food product to be cooked, mixed and pumped within a single unit, often removing the need for multiple stages of processing. History Steam infusion was first used in pasteurization and has since been developed for further liquid heating applications. First generation In the 1960s APV PLC launched the first steam infusion system under the Palarisator brand name. This involves a 2-stage process for steam infusion whereby the liquid is cascaded into a large pressurized steam chamber and is sterilized when falling as film or droplets through the chamber. The liquid is then condensed at the chilled bottom of the chamber. Illustrated in the image on the right hand side of the page. Second generation The Steam Infusion process was first developed in 2000 by Pursuit Dynamics PLC as a method for marine propulsion. The process has since been developed to be used for applications in brewing, food and beverages, public health and safety, bioenergy, industrial licensing, and waste treatment worldwide. On the right a diagram shows how the process creates an environment of vaporised product surrounded by high energy steam. The supersonic steam flow entrains and vaporises the process flow to form a multiphase flow, which heats the suspended particles by surface conduction and condensation. The condensation of the steam causes the process flow to return to a liquid state. This causes rapid and uniform heating over the unit making it applicable to industrial cooking processes. This process has been use" https://en.wikipedia.org/wiki/Out-of-band%20agreement,"In the exchange of information over a communication channel, an out-of-band agreement is an agreement or understanding between the communicating parties that is not included in any message sent over the channel but which is relevant for the interpretation of such messages. By extension, in a client–server or provider-requester setting, an out-of-band agreement is an agreement or understanding that governs the semantics of the request/response interface but which is not part of the formal or contractual description of the interface specification itself. See also API Contract Out-of-band Off-balance-sheet External links SakaiProject definition Computer networking" https://en.wikipedia.org/wiki/Knowledge-based%20processor,"Knowledge-based processors (KBPs) are used for processing packets in computer networks. Knowledge-based processors are designed with the goal of increased performance of the IPv6 network. By contributing to the buildout of the IPv6 network, KBPs provide the means to an improved and secure networking system.
Standards All networks are required to perform the following functions: IPv4/IPv6 multilayer packet/flow classification Policy-based routing and Policy enforcement (QoS) Longest Prefix Match (CIDR) Differentiated Services (DiffServ) IP Security (IPSec) Server Load Balancing Transaction verification All of the above functions must occur at high speeds in advanced networks. Knowledge-based processors contain embedded databases that store information required to process packets that travel through a network at wired speeds. Knowledge based processors are a new addition to intelligent networking that allow these functions to occur at high speeds and at the same time provide for lower power consumption. Knowledge-based processors currently target the 3rd layer of the 7 layer OSI model which is devoted to packet processing. Advantages The advantages that knowledge based processors offer are the ability to execute multiple simultaneous decision making processes for a range of network-aware processing functions. These include routing, Quality of Service (QOS), access control for both security and billing, as well as the forwarding of voice/video packets. These functions improve the performance of advanced Internet applications in IPv6 networks such as VOD (Video on demand), VoIP (voice over Internet protocol), and streaming of video and audio. Knowledge-based processors use a variety of techniques to improve network functioning such as parallel processing, deep pipelining and advanced power management techniques. Improvements in each of these areas allows for existing components to carry on their functions at wired speeds more efficiently thus improving" https://en.wikipedia.org/wiki/Fully%20differential%20amplifier,"A fully differential amplifier (FDA) is a DC-coupled high-gain electronic voltage amplifier with differential inputs and differential outputs. In its ordinary usage, the output of the FDA is controlled by two feedback paths which, because of the amplifier's high gain, almost completely determine the output voltage for any given input. In a fully differential amplifier, common-mode noise such as power supply disturbances is rejected; this makes FDAs especially useful as part of a mixed-signal integrated circuit. An FDA is often used to convert an analog signal into a form more suitable for driving into an analog-to-digital converter; many modern high-precision ADCs have differential inputs. The ideal FDA For any input voltages, the ideal FDA has infinite open-loop gain, infinite bandwidth, infinite input impedances resulting in zero input currents, infinite slew rate, zero output impedance and zero noise. In the ideal FDA, the difference in the output voltages is equal to the difference between the input voltages multiplied by the gain. The common mode voltage of the output voltages is not dependent on the input voltage. In many cases, the common mode voltage can be directly set by a third voltage input. Input voltage: Output voltage: Output common-mode voltage: A real FDA can only approximate this ideal, and the actual parameters are subject to drift over time and with changes in temperature, input conditions, etc. Modern integrated FET or MOSFET FDAs approximate more closely to these ideals than bipolar ICs where large signals must be handled at room temperature over a limited bandwidth; input impedance, in particular, is much higher, although the bipolar FDA usually exhibit superior (i.e., lower) input offset drift and noise characteristics. 
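The ideal relations described above (a differential output equal to the gain times the input difference, with the output common mode pinned to an independently supplied voltage) can be written out in a few lines. The following toy sketch uses invented names and illustrative numbers; it is not a model of any particular device.

```python
def ideal_fda(v_in_p: float, v_in_m: float, a_ol: float, v_ocm: float):
    """Ideal fully differential amplifier.

    Returns the two single-ended outputs: their difference is the open-loop
    gain times the input difference, and their average equals v_ocm,
    independent of the inputs.
    """
    v_out_diff = a_ol * (v_in_p - v_in_m)
    return v_ocm + v_out_diff / 2, v_ocm - v_out_diff / 2

vp, vm = ideal_fda(1.001, 1.000, a_ol=1000.0, v_ocm=2.5)
print(vp - vm)        # ~1.0  (gain times the 1 mV input difference)
print((vp + vm) / 2)  # 2.5   (set by v_ocm, not by the inputs)
```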
Where the limitations of real devices can be ignored, an FDA can be viewed as a Black Box with gain; circuit function and parameters are determined by feedback, usually negative. An FDA, as implemented in practic" https://en.wikipedia.org/wiki/Maze%20runner,"In electronic design automation, maze runner is a connection routing method that represents the entire routing space as a grid. Parts of this grid are blocked by components, specialised areas, or already present wiring. The grid size corresponds to the wiring pitch of the area. The goal is to find a chain of grid cells that go from point A to point B. A maze runner may use the Lee algorithm. It uses a wave propagation style (a wave are all cells that can be reached in n steps) throughout the routing space. The wave stops when the target is reached, and the path is determined by backtracking through the cells. See also Autorouter" https://en.wikipedia.org/wiki/MCU%208051%20IDE,"MCU 8051 IDE is a free software integrated development environment for microcontrollers based on the 8051. MCU 8051 IDE has a built-in simulator not only for the MCU itself, but also LCD displays and simple LED outputs as well as button inputs. It supports two programming languages: C (using SDCC) and assembly and runs on both Windows and Unix-based operating systems, such as FreeBSD and Linux. Features MCU simulator with many debugging features: register status, step by step, interrupt viewer, external memory viewer, code memory viewer, etc. Simulator for certain electronic peripherals like LEDs, LED displays, LED matrices, LCD displays, etc. Support for C language Native macro-assembler Support for ASEM-51 and other assemblers Advanced text editor with syntax highlighting and validation Support for vim and nano embedded in the IDE Simple hardware programmer for certain AT89Sxx MCUs Scientific calculator: time delay calculation and code generation, base converter, etc. Hexadecimal editor Supported MCUs The current version 1.4 supports many microcontrollers including: * 8051 * 80C51 * 8052 * AT89C2051 * AT89C4051 * AT89C51 * AT89C51RC * AT89C52 * AT89C55WD * AT89LV51 * AT89LV52 * AT89LV55 * AT89S52 * AT89LS51 * AT89LS52 * AT89S8253 * AT89S2051 * AT89S4051 * T87C5101 * T83C5101 * T83C5102 * TS80C32X2 * TS80C52X2 * TS87C52X2 * AT80C32X2 * AT80C52X2 * AT87C52X2 * AT80C54X2 * AT80C58X2 * AT87C54X2 * AT87C58X2 * TS80C54X2 * TS80C58X2 * TS87C54X2 * TS87C58X2 * TS80C31X2 * AT80C31X2 * 8031 * 8751 * 8032 * 8752 * 80C31 * 87C51 * 80C52 * 87C52 * 80C32 * 80C54 * 87C54 * 80C58 * 87C58 See also 8051 information Assembly language C language External links Paul's 8051 Tools, Projects and Free Code ASEM-51 SDCC Free integrated development environments Embedded systems" https://en.wikipedia.org/wiki/Ambiguity,"Ambiguity is the type of meaning in which a phrase, statement, or resolution is not explicitly defined, making several interpretations plausible. A common aspect of ambiguity is uncertainty. It is thus an attribute of any idea or statement whose intended meaning cannot be definitively resolved, according to a rule or process with a finite number of steps. (The prefix ambi- reflects the idea of ""two,"" as in ""two meanings."") The concept of ambiguity is generally contrasted with vagueness. In ambiguity, specific and distinct interpretations are permitted (although some may not be immediately obvious), whereas with vague information it is difficult to form any interpretation at the desired level of specificity. 
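The wave-propagation routing described in the maze runner entry above lends itself to a compact sketch: a breadth-first "wave" is expanded over the grid, skipping blocked cells, and the path is then recovered by backtracking along strictly decreasing wave numbers. The grid, coordinates and function name below are illustrative; diagonal moves, multiple layers and wiring costs are ignored.

```python
from collections import deque

def lee_route(grid, start, target):
    """Lee-style maze routing: BFS wave expansion, then backtracking.

    grid: 2D list where 0 = free cell and 1 = blocked (components,
    keep-out areas, or existing wiring). Returns the list of cells from
    start to target, or None if no route exists.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:                                   # wave expansion
        cell = frontier.popleft()
        if cell == target:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[cell] + 1
                frontier.append((nr, nc))
    if target not in dist:
        return None
    path, cell = [target], target                     # backtrack to the start
    while cell != start:
        r, c = cell
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if dist.get(nbr) == dist[cell] - 1:
                cell = nbr
                path.append(cell)
                break
    return path[::-1]

blocked = [[0, 0, 0, 0],
           [1, 1, 0, 1],
           [0, 0, 0, 0]]
print(lee_route(blocked, (0, 0), (2, 0)))   # routes around the blockage
```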
Linguistic forms Lexical ambiguity is contrasted with semantic ambiguity. The former represents a choice between a finite number of known and meaningful context-dependent interpretations. The latter represents a choice between any number of possible interpretations, none of which may have a standard agreed-upon meaning. This form of ambiguity is closely related to vagueness. Ambiguity in human language is argued to reflect principles of efficient communication. Languages that communicate efficiently will avoid sending information that is redundant with information provided in the context. This can be shown mathematically to result in a system which is ambiguous when context is neglected. In this way, ambiguity is viewed as a generally useful feature of a linguistic system. Linguistic ambiguity can be a problem in law, because the interpretation of written documents and oral agreements is often of paramount importance. Lexical ambiguity The lexical ambiguity of a word or phrase applies to it having more than one meaning in the language to which the word belongs. ""Meaning"" here refers to whatever should be represented by a good dictionary. For instance, the word ""bank"" has several distinct lexical definitions, including ""financial institution"" and ""edge of" https://en.wikipedia.org/wiki/List%20of%20centroids,"The following is a list of centroids of various two-dimensional and three-dimensional objects. The centroid of an object in -dimensional space is the intersection of all hyperplanes that divide into two parts of equal moment about the hyperplane. Informally, it is the ""average"" of all points of . For an object of uniform composition, the centroid of a body is also its center of mass. In the case of two-dimensional objects shown below, the hyperplanes are simply lines. 2-D Centroids For each two-dimensional shape below, the area and the centroid coordinates are given: Where the centroid coordinates are marked as zero, the coordinates are at the origin, and the equations to get those points are the lengths of the included axes divided by two, in order to reach the center which in these cases are the origin and thus zero. 3-D Centroids For each three-dimensional body below, the volume and the centroid coordinates are given: See also List of moments of inertia List of second moments of area" https://en.wikipedia.org/wiki/Runt,"In a group of animals (usually a litter of animals born in multiple births), a runt is a member which is significantly smaller or weaker than the others. Owing to its small size, a runt in a litter faces obvious disadvantage, including difficulties in competing with its siblings for survival and possible rejection by its mother. Therefore, in the wild, a runt is less likely to survive infancy. Even among domestic animals, runts often face rejection. They may be placed under the direct care of an experienced animal breeder, although the animal's size and weakness coupled with the lack of natural parental care make this difficult. Some tamed animals are the result of reared runts. Not all litters have runts. All animals in a litter will naturally vary slightly in size and weight, but the smallest is not considered a ""runt"" if it is healthy and close in weight to its littermates. It may be perfectly capable of competing with its siblings for nutrition and other resources. 
A runt is specifically an animal that suffered in utero from deprivation of nutrients by comparison to its siblings, or from a genetic defect, and thus is born underdeveloped or less fit than expected. In popular culture Literature Wilbur, the pig from Charlotte's Web, is the runt of his litter. Orson, the pig in Jim Davis' U.S. Acres, is a runt who was bullied by his normal siblings. The strip changed direction when he was moved to a different farm and settled in with a supporting cast of oddball animals. Shade the bat from Silverwing is a runt. Fiver and Pipkin from Watership Down are runts, and their names in the Lapine language, Hrairoo and Hlao-roo, reflect this fact (the suffix -roo means ""Small"" or ""undersized""). Clifford the Big Red Dog was born a runt, but inexplicably began to grow explosively until he became 25 feet tall. Cadpig, a female Dalmatian puppy in Dodie Smith's children's novels The Hundred and One Dalmatians and The Starlight Barking, is the runt of her litter and is t" https://en.wikipedia.org/wiki/Configurable%20mixed-signal%20IC,"Configurable Mixed-signal IC (abbreviated as CMIC) is a category of ICs comprising a matrix of analog and digital blocks which are configurable through programmable (OTP) non-volatile memory. The technology, in combination with its design software and development kits, allows immediate prototyping of custom mixed-signal circuits, as well as the integration of multiple discrete components into a single IC to reduce PCB cost, size and assembly issues. See also Field-programmable analog array Programmable system-on-chip" https://en.wikipedia.org/wiki/Die%20%28integrated%20circuit%29,"A die, in the context of integrated circuits, is a small block of semiconducting material on which a given functional circuit is fabricated. Typically, integrated circuits are produced in large batches on a single wafer of electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through processes such as photolithography. The wafer is cut (diced) into many pieces, each containing one copy of the circuit. Each of these pieces is called a die. There are three commonly used plural forms: dice, dies, and die. To simplify handling and integration onto a printed circuit board, most dies are packaged in various forms. Manufacturing process Most dies are composed of silicon and used for integrated circuits. The process begins with the production of monocrystalline silicon ingots. These ingots are then sliced into disks with a diameter of up to 300 mm. These wafers are then polished to a mirror finish before going through photolithography. In many steps the transistors are manufactured and connected with metal interconnect layers. These prepared wafers then go through wafer testing to test their functionality. The wafers are then sliced and sorted to filter out the faulty dies. Functional dies are then packaged and the completed integrated circuit is ready to be shipped. Uses A die can host many types of circuits. One common use case of an integrated circuit die is in the form of a Central Processing Unit (CPU). Through advances in modern technology, the size of the transistor within the die has shrunk exponentially, following Moore's Law. Other uses for dies can range from LED lighting to power semiconductor devices. 
Images See also Die preparation Integrated circuit design Wire bonding and ball bonding" https://en.wikipedia.org/wiki/Flat-panel%20display,"A flat-panel display (FPD) is an electronic display used to display visual content such as text or images. It is present in consumer, medical, transportation, and industrial equipment. Flat-panel displays are thin, lightweight, provide better linearity and are capable of higher resolution than typical consumer-grade TVs from earlier eras. They are usually less than thick. While the highest resolution for consumer-grade CRT televisions was 1080i, many flat-panel displays in the 2020s are capable of 1080p and 4K resolution. In the 2010s, portable consumer electronics such as laptops, mobile phones, and portable cameras have used flat-panel displays since they consume less power and are lightweight. As of 2016, flat-panel displays have almost completely replaced CRT displays. Most 2010s-era flat-panel displays use LCD or light-emitting diode (LED) technologies, sometimes combined. Most LCD screens are back-lit with color filters used to display colors. In many cases, flat-panel displays are combined with touch screen technology, which allows the user to interact with the display in a natural manner. For example, modern smartphone displays often use OLED panels, with capacitive touch screens. Flat-panel displays can be divided into two display device categories: volatile and static. The former requires that pixels be periodically electronically refreshed to retain their state (e.g. liquid-crystal displays (LCD)), and can only show an image when it has power. On the other hand, static flat-panel displays rely on materials whose color states are bistable, such as displays that make use of e-ink technology, and as such retain content even when power is removed. History The first engineering proposal for a flat-panel TV was by General Electric in 1954 as a result of its work on radar monitors. The publication of their findings gave all the basics of future flat-panel TVs and monitors. But GE did not continue with the R&D required and never built a working flat panel a" https://en.wikipedia.org/wiki/Leibniz%27s%20notation,"In calculus, Leibniz's notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz, uses the symbols and to represent infinitely small (or infinitesimal) increments of and , respectively, just as and represent finite increments of and , respectively. Consider as a function of a variable , or = . If this is the case, then the derivative of with respect to , which later came to be viewed as the limit was, according to Leibniz, the quotient of an infinitesimal increment of by an infinitesimal increment of , or where the right hand side is Joseph-Louis Lagrange's notation for the derivative of at . The infinitesimal increments are called . Related to this is the integral in which the infinitesimal increments are summed (e.g. to compute lengths, areas and volumes as sums of tiny pieces), for which Leibniz also supplied a closely related notation involving the same differentials, a notation whose efficiency proved decisive in the development of continental European mathematics. Leibniz's concept of infinitesimals, long considered to be too imprecise to be used as a foundation of calculus, was eventually replaced by rigorous concepts developed by Weierstrass and others in the 19th century. Consequently, Leibniz's quotient notation was re-interpreted to stand for the limit of the modern definition. 
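As a small worked illustration of that modern limit reading of the Leibniz quotient (our example, for y = x²):

```latex
% dy/dx read as the limit of a finite difference quotient, for y = x^2
\[
\frac{dy}{dx}
  = \lim_{\Delta x \to 0} \frac{\Delta y}{\Delta x}
  = \lim_{\Delta x \to 0} \frac{(x + \Delta x)^2 - x^2}{\Delta x}
  = \lim_{\Delta x \to 0} \bigl(2x + \Delta x\bigr)
  = 2x .
\]
```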
However, in many instances, the symbol did seem to act as an actual quotient would and its usefulness kept it popular even in the face of several competing notations. Several different formalisms were developed in the 20th century that can give rigorous meaning to notions of infinitesimals and infinitesimal displacements, including nonstandard analysis, tangent space, O notation and others. The derivatives and integrals of calculus can be packaged into the modern theory of differential forms, in which the derivative is genuinely a ratio of two differentials, and the integral likewise behaves in exact accordance w" https://en.wikipedia.org/wiki/Flexible-fuel%20vehicle,"A flexible-fuel vehicle (FFV) or dual-fuel vehicle (colloquially called a flex-fuel vehicle) is an alternative fuel vehicle with an internal combustion engine designed to run on more than one fuel, usually gasoline blended with either ethanol or methanol fuel, and both fuels are stored in the same common tank. Modern flex-fuel engines are capable of burning any proportion of the resulting blend in the combustion chamber as fuel injection and spark timing are adjusted automatically according to the actual blend detected by a fuel composition sensor. This device is known as an oxygen sensor and it reads the oxygen levels in the stream of exhaust gasses, its signal enriching or leaning the fuel mixture going into the engine. Flex-fuel vehicles are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks and the engine runs on one fuel at a time, for example, compressed natural gas (CNG), liquefied petroleum gas (LPG), or hydrogen. The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with about 60 million automobiles, motorcycles and light duty trucks manufactured and sold worldwide by March 2018, and concentrated in four markets, Brazil (30.5 million light-duty vehicles and over 6 million motorcycles), the United States (21 million by the end of 2017), Canada (1.6 million by 2014), and Europe, led by Sweden (243,100). In addition to flex-fuel vehicles running with ethanol, in Europe and the US, mainly in California, there have been successful test programs with methanol flex-fuel vehicles, known as M85 flex-fuel vehicles. There have been also successful tests using P-series fuels with E85 flex fuel vehicles, but as of June 2008, this fuel is not yet available to the general public. These successful tests with P-series fuels were conducted on Ford Taurus and Dodge Caravan flexible-fuel vehicles. Though technology exists to allow ethanol FFVs to run on any mixture of gasoline and ethanol, from pu" https://en.wikipedia.org/wiki/Bridging%20model,"In computer science, a bridging model is an abstract model of a computer which provides a conceptual bridge between the physical implementation of the machine and the abstraction available to a programmer of that machine; in other words, it is intended to provide a common level of understanding between hardware and software engineers. A successful bridging model is one which can be efficiently implemented in reality and efficiently targeted by programmers; in particular, it should be possible for a compiler to produce good code from a typical high-level language. The term was introduced by Leslie Valiant's 1990 paper A Bridging Model for Parallel Computation, which argued that the strength of the von Neumann model was largely responsible for the success of computing as a whole. 
The paper goes on to develop the bulk synchronous parallel model as an analogous model for parallel computing." https://en.wikipedia.org/wiki/Atari%20AMY,"The Atari AMY (or Amy) was a 64-oscillator additive synthesizer implemented as a single-IC sound chip. It was initially developed as part of a new advanced chipset, codenamed ""Rainbow"" that included a graphics processor and sprite generator. Rainbow was considered for use in the 16/32-bit workstation known as Sierra, but the Sierra project was bogged down in internal committee meetings. However the Rainbow chipset development continued up until Atari's CED and HCD divisions were sold to Tramel Technologies, Ltd. For a time, AMY was slated to be included in the Atari 520ST, then an updated version of the Atari 8-bit family, the 65XEM, but development was discontinued. The technology was later sold, but when the new owners started to introduce it as a professional synthesizer, Atari sued, and work on the project ended. Description The AMY was based around a bank of 64 oscillators, which emit sine waves of a specified frequency. The sine waves were created by looking up the amplitude at a given time from a 16-bit table stored in ROM, rather than calculating the amplitude using math hardware. The signals could then be mixed together to perform additive synthesis. The AMY also included a number of ramp generators that could be used to smoothly modify the amplitude or frequency of a given oscillator over a given time. During the design phase, it was believed these would be difficult to implement in hardware, so only eight frequency ramps are included. Sounds were created by selecting one of the oscillators to be the master channel, and then attaching other oscillators and ramps to it, slaved to some multiple of the fundamental frequency. Sound programs then sent the AMY a series of instructions setting the master frequency, and instructions on how quickly to ramp to new values. The output of the multiple oscillators was then summed and sent to the output. The AMY allowed the oscillators to be combined in any fashion, two at a time, to produce up to eight output channe" https://en.wikipedia.org/wiki/List%20of%20mathematical%20proofs,"A list of articles with mathematical proofs: Theorems of which articles are primarily devoted to proving them Bertrand's postulate and a proof Estimation of covariance matrices Fermat's little theorem and some proofs Gödel's completeness theorem and its original proof Mathematical induction and a proof Proof that 0.999... 
equals 1 Proof that 22/7 exceeds π Proof that e is irrational Proof that π is irrational Proof that the sum of the reciprocals of the primes diverges Articles devoted to theorems of which a (sketch of a) proof is given Banach fixed-point theorem Banach–Tarski paradox Basel problem Bolzano–Weierstrass theorem Brouwer fixed-point theorem Buckingham π theorem (proof in progress) Burnside's lemma Cantor's theorem Cantor–Bernstein–Schroeder theorem Cayley's formula Cayley's theorem Clique problem (to do) Compactness theorem (very compact proof) Erdős–Ko–Rado theorem Euler's formula Euler's four-square identity Euler's theorem Five color theorem Five lemma Fundamental theorem of arithmetic Gauss–Markov theorem (brief pointer to proof) Gödel's incompleteness theorem Gödel's first incompleteness theorem Gödel's second incompleteness theorem Goodstein's theorem Green's theorem (to do) Green's theorem when D is a simple region Heine–Borel theorem Intermediate value theorem Itô's lemma Kőnig's lemma Kőnig's theorem (set theory) Kőnig's theorem (graph theory) Lagrange's theorem (group theory) Lagrange's theorem (number theory) Liouville's theorem (complex analysis) Markov's inequality (proof of a generalization) Mean value theorem Multivariate normal distribution (to do) Holomorphic functions are analytic Pythagorean theorem Quadratic equation Quotient rule Ramsey's theorem Rao–Blackwell theorem Rice's theorem Rolle's theorem Splitting lemma squeeze theorem Sum rule in differentiation Sum rule in integration Sylow theorems Transcendence of e and π (as corollaries of Lindemann–Weierstrass) Tychonoff's theorem (to do) Ultrafilter lemma Ultraparallel theorem " https://en.wikipedia.org/wiki/List%20of%20BioBlitzes%20in%20New%20Zealand,"This is a list of BioBlitzes that have been held in New Zealand. The date is the first day of the BioBlitz if held over several days. This list only includes those that were major public events. BioBlitz was established in New Zealand by Manaaki Whenua - Landcare Research initially based on seed funding from The Royal Society of NZ's ""Science & Technology Promotion Fund 2003/2004"". BioBlitz events have always been a collaborative activity of professional and amateur taxonomic experts from multiple organisations and the public. Auckland BioBlitz events were coordinated by Manaaki Whenua, later from 2015 moving to events coordinated by Auckland Museum. The first events were 24 hours continuously, e.g. from 3 pm Friday overnight to 3 pm Saturday. Subsequently, this changed to 24 hours spread across mostly daylight hours over 2 consecutive days. For a series of downloadable posters for BioBlitz see: . See also: ." https://en.wikipedia.org/wiki/Form%20classification,"Form classification is the classification of organisms based on their morphology, which does not necessarily reflect their biological relationships. Form classification, generally restricted to palaeontology, reflects uncertainty; the goal of science is to move ""form taxa"" to biological taxa whose affinity is known. Form taxonomy is restricted to fossils that preserve too few characters for a conclusive taxonomic definition or assessment of their biological affinity, but whose study is made easier if a binomial name is available by which to identify them. 
The term ""form classification"" is preferred to ""form taxonomy""; taxonomy suggests that the classification implies a biological affinity, whereas form classification is about giving a name to a group of morphologically-similar organisms that may not be related. A ""parataxon"" (not to be confused with parataxonomy), or ""sciotaxon"" (Gr. ""shadow taxon""), is a classification based on incomplete data: for instance, the larval stage of an organism that cannot be matched up with an adult. It reflects a paucity of data that makes biological classification impossible. A sciotaxon is defined as a taxon thought to be equivalent to a true taxon (orthotaxon), but whose identity cannot be established because the two candidate taxa are preserved in different ways and thus cannot be compared directly. Examples In zoology Form taxa are groupings that are based on common overall forms. Early attempts at classification of labyrinthodonts was based on skull shape (the heavily armoured skulls often being the only preserved part). The amount of convergent evolution in the many groups lead to a number of polyphyletic taxa. Such groups are united by a common mode of life, often one that is generalist, in consequence acquiring generally similar body shapes by convergent evolution. Ediacaran biota — whether they are the precursors of the Cambrian explosion of the fossil record, or are unrelated to any modern phylum — can currently on" https://en.wikipedia.org/wiki/Radar%20ornithology,"Radar ornithology is the use of radar technology in studies of bird migration and in approaches to prevent bird strikes particularly to aircraft. The technique was developed from the observations of pale wisps seen moving on radar during the Second World War. These were termed as ""angels"", ""ghosts"", or ""phantoms"" in Britain and were later identified as being caused by migrating birds. Over time, the technology has been vastly improved with Doppler weather radars that allow the detection of birds, bats, as well as insects with resolution and sensitivity that is sufficient to quantify the speed of flaps that can sometimes aid in the identification of species. History According to David Lack, the earliest recorded use of radar in detecting birds came in 1940. The movements of gulls, herons and lapwings that caused some of the detentions was visually confirmed. It was however only in the 1950s through the work of Ernst Sutter at Zurich airport that more elusive ""angels"" were confirmed to be caused by small passerines. David Lack was one of the pioneers of radar ornithology in England. Applications Early radar ornithology mainly focused, due to limitations of the equipment, on the seasonality, timing, intensity, and direction of flocks of birds in migration. Modern weather radars can detect the wing area of the flying, the speed of flight, the frequency of wing beat, the direction, distance and altitude. The sensitivity and modern analytical techniques now allows detection of flying insects as well. Radar has been used to study seasonal variations in starling roosting behaviour. It has also been used to identify risks to aircraft operations at airports. The technique has been in conservation applications such as being used to assess the risk to birds by proposed wind energy installations, to quantify the number of birds at roost or nesting sites." 
https://en.wikipedia.org/wiki/Randomized%20benchmarking,"Randomized benchmarking is an experimental method for measuring the average error rates of quantum computing hardware platforms. The protocol estimates the average error rates by implementing long sequences of randomly sampled quantum gate operations. Randomized benchmarking is the industry-standard protocol used by quantum hardware developers such as IBM and Google to test the performance of the quantum operations. The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators, considered the implementation of sequences of Haar-random operations, but this had several practical limitations. The now-standard protocol for randomized benchmarking (RB) relies on uniformly random Clifford operations, as proposed in 2006 by Dankert et al. as an application of the theory of unitary t-designs. In current usage randomized benchmarking sometimes refers to the broader family of generalizations of the 2005 protocol involving different random gate sets that can identify various features of the strength and type of errors affecting the elementary quantum gate operations. Randomized benchmarking protocols are an important means of verifying and validating quantum operations and are also routinely used for the optimization of quantum control procedures. Overview Randomized benchmarking offers several key advantages over alternative approaches to error characterization. For example, the number of experimental procedures required for full characterization of errors (called tomography) grows exponentially with the number of quantum bits (called qubits). This makes tomographic methods impractical for even small systems of just 3 or 4 qubits. In contrast, randomized benchmarking protocols are the only known approaches to error characterization that scale efficiently as number of qubits in the system increases. Thus RB can be applied in practice to characterize errors in arbitrarily large quantum processors. Additionally, in experimental quantum comp" https://en.wikipedia.org/wiki/Sand%20table,"A sand table uses constrained sand for modelling or educational purposes. The original version of a sand table may be the abax used by early Greek students. In the modern era, one common use for a sand table is to make terrain models for military planning and wargaming. Abax An abax was a table covered with sand commonly used by students, particularly in Greece, to perform studies such as writing, geometry, and calculations. An abax was the predecessor to the abacus. Objects, such as stones, were added for counting and then columns for place-valued arithmetic. The demarcation between an abax and an abacus seems to be poorly defined in history; moreover, modern definitions of the word abacus universally describe it as a frame with rods and beads and, in general, do not include the definition of ""sand table"". The sand table may well have been the predecessor to some board games. (""The word abax, or abacus, is used both for the reckoning-board with its counters and the play-board with its pieces, ...""). Abax is from the old Greek for ""sand table"". Ghubar An Arabic word for sand (or dust) is ghubar (or gubar), and Western numerals (the decimal digits 0–9) are derived from the style of digits written on ghubar tables in North-West Africa and Iberia, also described as the 'West Arabic' or 'gubar' style. 
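To make the randomized benchmarking entry above more concrete, the sketch below simulates only the fitting step of the standard single-qubit analysis under an assumed depolarizing error model: survival probability is modelled as A·f^m + B for Clifford sequence length m, and the average error per Clifford is recovered from the fitted decay parameter. No Clifford sampling or hardware is involved; the chosen error rate and noise level are hypothetical.

    # Minimal sketch of the randomized-benchmarking decay fit (synthetic data, assumed model).
    import numpy as np
    from scipy.optimize import curve_fit

    def decay_model(m, A, B, f):
        # Standard RB decay curve: survival probability vs. Clifford sequence length m.
        return A * f**m + B

    rng = np.random.default_rng(0)
    true_A, true_B, true_f = 0.5, 0.5, 0.995                  # hypothetical single-qubit values
    lengths = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
    survival = decay_model(lengths, true_A, true_B, true_f) + rng.normal(0, 0.005, lengths.size)

    (A, B, f), _ = curve_fit(decay_model, lengths, survival, p0=(0.5, 0.5, 0.99))
    d = 2                                                     # single qubit
    avg_error_per_clifford = (1 - f) * (d - 1) / d
    print(f"fitted decay f = {f:.4f}, average error per Clifford ~ {avg_error_per_clifford:.2e}")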
Military use Sand tables have been used for military planning and wargaming for many years as a field expedient, small-scale map, and in training for military actions. In 1890 a Sand table room was built at the Royal Military College of Canada for use in teaching cadets military tactics; this replaced the old sand table room in a pre-college building, in which the weight of the sand had damaged the floor. The use of sand tables increasingly fell out of favour with improved maps, aerial and satellite photography, and later, with digital terrain simulations. More modern sand tables have incorporated Augmented Reality, such as the Augmented Reality Sandtable (ARES) deve" https://en.wikipedia.org/wiki/Big%20O%20in%20probability%20notation,"The order in probability notation is used in probability theory and statistical theory in direct parallel to the big-O notation that is standard in mathematics. Where the big-O notation deals with the convergence of sequences or sets of ordinary numbers, the order in probability notation deals with convergence of sets of random variables, where convergence is in the sense of convergence in probability. Definitions Small o: convergence in probability For a set of random variables Xn and a corresponding set of constants an (both indexed by n, which need not be discrete), the notation means that the set of values Xn/an converges to zero in probability as n approaches an appropriate limit. Equivalently, Xn = op(an) can be written as Xn/an = op(1), i.e. for every positive ε. Big O: stochastic boundedness The notation means that the set of values Xn/an is stochastically bounded. That is, for any ε > 0, there exists a finite M > 0 and a finite N > 0 such that Comparison of the two definitions The difference between the definitions is subtle. If one uses the definition of the limit, one gets: Big : Small : The difference lies in the : for stochastic boundedness, it suffices that there exists one (arbitrary large) to satisfy the inequality, and is allowed to be dependent on (hence the ). On the other hand, for convergence, the statement has to hold not only for one, but for any (arbitrary small) . In a sense, this means that the sequence must be bounded, with a bound that gets smaller as the sample size increases. This suggests that if a sequence is , then it is , i.e. convergence in probability implies stochastic boundedness. But the reverse does not hold. Example If is a stochastic sequence such that each element has finite variance, then (see Theorem 14.4-1 in Bishop et al.) If, moreover, is a null sequence for a sequence of real numbers, then converges to zero in probability by Chebyshev's inequality, so" https://en.wikipedia.org/wiki/List%20of%20theoretical%20physicists,"The following is a partial list of notable theoretical physicists. Arranged by century of birth, then century of death, then year of birth, then year of death, then alphabetically by surname. For explanation of symbols, see Notes at end of this article. Ancient times Kaṇāda (6th century BCE or 2nd century BCE) Thales (c. 624 – c. 546 BCE) Pythagoras^* (c. 570 – c. 495 BCE) Democritus° (c. 460 – c. 370 BCE) Aristotle‡ (384–322 BCE) Archimedesº* (c. 287 – c. 212 BCE) Hypatia^ªº (c. 350–370; died 415 AD) Middle Ages Al Farabi (c. 872 – c. 950) Ibn al-Haytham (c. 965 – c. 1040) Al Beruni (c. 973 – c. 1048) Omar Khayyám (c. 1048 – c. 1131) Bhaskara II (c.1114 - c.1185) Nasir al-Din Tusi (1201–1274) Jean Buridan  (1301 – c. 1359/62) Nicole Oresme (c. 
1320 – 1325 –1382) Sigismondo Polcastro (1384–1473) 15th–16th century Nicolaus Copernicusº (1473–1543) 16th century and 16th–17th centuries Gerolamo Cardano (1501–1576) Tycho Brahe (1546–1601) Giordano Bruno (1548–1600) Galileo Galileiº* (1564–1642) Johannes Keplerº (1571–1630) Benedetto Castelli (1578–1643) René Descartes‡^ (1596–1650) Bonaventura Cavalieri (1598–1647) 17th century Pierre de Fermat (1607–1665) Evangelista Torricelli (1608–1647) Giovanni Alfonso Borelli (1608–1679) Francesco Maria Grimaldi (1618–1663) Jacques Rohault (1618–1672) Blaise Pascal^ (1623–1662) Erhard Weigel (1625–1699) Christiaan Huygens^ (1629–1695) Ignace-Gaston Pardies (1636–1673) 17th–18th centuries Vincenzo Viviani (1622–1703) Isaac Newton^*º (1642–1727) Gottfried Leibniz^ (1646–1716) Edmond Pourchot (1651–1734) Jacob Bernoulli (1655–1705) Edmond Halley (1656–1742) Luigi Guido Grandi (1671–1742) Jakob Hermann (1678–1733) Jean-Jacques d'Ortous de Mairan (1678–1771) Nicolaus II Bernoulli (1695–1726) Pierre Louis Maupertuis (1698–1759) Daniel Bernoulli (1700–1782) 18th century Leonhard Euler^ (1707–1783) Vincenzo Riccati (1707–1785) Mikhail Lomonosov (1711–1765) Laura Bassiª* (1711–17" https://en.wikipedia.org/wiki/Acclimatization,"Acclimatization or acclimatisation (also called acclimation or acclimatation) is the process in which an individual organism adjusts to a change in its environment (such as a change in altitude, temperature, humidity, photoperiod, or pH), allowing it to maintain fitness across a range of environmental conditions. Acclimatization occurs in a short period of time (hours to weeks), and within the organism's lifetime (compared to adaptation, which is evolution, taking place over many generations). This may be a discrete occurrence (for example, when mountaineers acclimate to high altitude over hours or days) or may instead represent part of a periodic cycle, such as a mammal shedding heavy winter fur in favor of a lighter summer coat. Organisms can adjust their morphological, behavioral, physical, and/or biochemical traits in response to changes in their environment. While the capacity to acclimate to novel environments has been well documented in thousands of species, researchers still know very little about how and why organisms acclimate the way that they do. Names The nouns acclimatization and acclimation (and the corresponding verbs acclimatize and acclimate) are widely regarded as synonymous, both in general vocabulary and in medical vocabulary. The synonym acclimatation is less commonly encountered, and fewer dictionaries enter it. Methods Biochemical In order to maintain performance across a range of environmental conditions, there are several strategies organisms use to acclimate. In response to changes in temperature, organisms can change the biochemistry of cell membranes making them more fluid in cold temperatures and less fluid in warm temperatures by increasing the number of membrane proteins. In response to certain stressors, some organisms express so-called heat shock proteins that act as molecular chaperones and reduce denaturation by guiding the folding and refolding of proteins. It has been shown that organisms which are acclimated to high or low t" https://en.wikipedia.org/wiki/Proof-carrying%20code,"Proof-carrying code (PCC) is a software mechanism that allows a host system to verify properties about an application via a formal proof that accompanies the application's executable code. 
The host system can quickly verify the validity of the proof, and it can compare the conclusions of the proof to its own security policy to determine whether the application is safe to execute. This can be particularly useful in ensuring memory safety (i.e. preventing issues like buffer overflows). Proof-carrying code was originally described in 1996 by George Necula and Peter Lee. Packet filter example The original publication on proof-carrying code in 1996 used packet filters as an example: a user-mode application hands a function written in machine code to the kernel that determines whether or not an application is interested in processing a particular network packet. Because the packet filter runs in kernel mode, it could compromise the integrity of the system if it contains malicious code that writes to kernel data structures. Traditional approaches to this problem include interpreting a domain-specific language for packet filtering, inserting checks on each memory access (software fault isolation), and writing the filter in a high-level language which is compiled by the kernel before it is run. These approaches have performance disadvantages for code as frequently run as a packet filter, except for the in-kernel compilation approach, which only compiles the code when it is loaded, not every time it is executed. With proof-carrying code, the kernel publishes a security policy specifying properties that any packet filter must obey: for example, will not access memory outside of the packet and its scratch memory area. A theorem prover is used to show that the machine code satisfies this policy. The steps of this proof are recorded and attached to the machine code which is given to the kernel program loader. The program loader can then rapidly validate the proof, allowing i" https://en.wikipedia.org/wiki/Microvia,"Microvias are used as the interconnects between layers in high density interconnect (HDI) substrates and printed circuit boards (PCBs) to accommodate the high input/output (I/O) density of advanced packages. Driven by portability and wireless communications, the electronics industry strives to produce affordable, light, and reliable products with increased functionality. At the electronic component level, this translates to components with increased I/Os with smaller footprint areas (e.g. flip-chip packages, chip-scale packages, and direct chip attachments), and on the printed circuit board and package substrate level, to the use of high density interconnects (HDIs) (e.g. finer lines and spaces, and smaller vias). Overview IPC standards revised the definition of a microvia in 2013 to a hole with depth to diameter aspect ratio of 1:1 or less, and the hole depth not to exceed 0.25mm. Previously, microvia was any hole less than or equal to 0.15 mm in diameter With the advent of smartphones and hand-held electronic devices, microvias have evolved from single-level to stacked microvias that cross over multiple HDI layers. Sequential build-up (SBU) technology is used to fabricate HDI boards. The HDI layers are usually built up from a traditionally manufactured double-sided core board or multilayer PCB. The HDI layers are built on both sides of the traditional PCB one by one with microvias. The SBU process consists of several steps: layer lamination, via formation, via metallization, and via filling. There are multiple choices of materials and/or technologies for each step. 
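The packet-filter example in the proof-carrying code entry above can be illustrated with a deliberately simplified sketch: the "filter" is a toy instruction list and the loader re-checks every memory access against the security policy (stay inside the packet) before running it. Real PCC verifies a supplied formal proof over machine code rather than re-deriving the safety facts itself, so everything here, including the instruction format and function names, is hypothetical and only mirrors the idea that the host's check is cheap.

    # Toy illustration of the verify-before-execute idea behind proof-carrying code.
    PACKET_SIZE = 64   # security policy: only offsets 0..63 of the packet may be read

    def verify(filter_code):
        """Cheap check by the 'kernel loader': every LOAD stays inside the packet."""
        for op, arg in filter_code:
            if op == "LOAD" and not (0 <= arg < PACKET_SIZE):
                return False
        return True

    def run(filter_code, packet):
        """Execute only after verification; returns True if the packet is 'interesting'."""
        acc = 0
        for op, arg in filter_code:
            if op == "LOAD":
                acc = packet[arg]
            elif op == "EQ" and acc != arg:
                return False
        return True

    filter_code = [("LOAD", 12), ("EQ", 0x08)]   # hypothetical: inspect one header byte
    packet = bytes(range(64))
    if verify(filter_code):
        print("accepted by loader, filter result:", run(filter_code, packet))
    else:
        print("rejected by loader: violates memory-safety policy")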
Microvias can be filled with different materials and processes: Filled with epoxy resin (b-stage) during a sequential lamination process step Filled with non-conductive or conductive material other than copper as a separate processing step Plated closed with electroplated copper Screen printed closed with a copper paste Buried microvias are required to be filled, while blind microvias on the " https://en.wikipedia.org/wiki/Utility%20computing,"Utility computing, or computer utility, is a service provisioning model in which a service provider makes computing resources and infrastructure management available to the customer as needed, and charges them for specific usage rather than a flat rate. Like other types of on-demand computing (such as grid computing), the utility model seeks to maximize the efficient use of resources and/or minimize associated costs. Utility is the packaging of system resources, such as computation, storage and services, as a metered service. This model has the advantage of a low or no initial cost to acquire computer resources; instead, resources are essentially rented. This repackaging of computing services became the foundation of the shift to ""on demand"" computing, software as a service and cloud computing models that further propagated the idea of computing, application and network as a service. There was some initial skepticism about such a significant shift. However, the new model of computing caught on and eventually became mainstream. IBM, HP and Microsoft were early leaders in the new field of utility computing, with their business units and researchers working on the architecture, payment and development challenges of the new computing model. Google, Amazon and others started to take the lead in 2008, as they established their own utility services for computing, storage and applications. Utility computing can support grid computing which has the characteristic of very large computations or sudden peaks in demand which are supported via a large number of computers. ""Utility computing"" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the ""back end"" to make this possible. These might be a dedicated computer cluster specifically built for the purpose of being rented out, or even an under-utilized supercomputer. " https://en.wikipedia.org/wiki/Navier%E2%80%93Stokes%20equations,"The Navier–Stokes equations ( ) are partial differential equations which describe the motion of viscous fluid substances, named after French engineer and physicist Claude-Louis Navier and Irish physicist and mathematician George Gabriel Stokes. They were developed over several decades of progressively building the theories, from 1822 (Navier) to 1842–1850 (Stokes). The Navier–Stokes equations mathematically express momentum balance and conservation of mass for Newtonian fluids. They are sometimes accompanied by an equation of state relating pressure, temperature and density. They arise from applying Isaac Newton's second law to fluid motion, together with the assumption that the stress in the fluid is the sum of a diffusing viscous term (proportional to the gradient of velocity) and a pressure term—hence describing viscous flow. The difference between them and the closely related Euler equations is that Navier–Stokes equations take viscosity into account while the Euler equations model only inviscid flow. 
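For reference alongside the Navier–Stokes passage above, the commonly quoted incompressible, constant-viscosity form of the equations (momentum balance plus the continuity constraint) can be written as follows; this is a special case, not the most general compressible form the article discusses.

    % Incompressible Navier–Stokes with constant density rho and dynamic viscosity mu
    \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
      = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f},
    \qquad \nabla\cdot\mathbf{u} = 0

The viscous term \mu\nabla^{2}\mathbf{u} is the part that distinguishes these equations from the Euler equations mentioned in the same passage.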
As a result, the Navier–Stokes are a parabolic equation and therefore have better analytic properties, at the expense of having less mathematical structure (e.g. they are never completely integrable). The Navier–Stokes equations are useful because they describe the physics of many phenomena of scientific and engineering interest. They may be used to model the weather, ocean currents, water flow in a pipe and air flow around a wing. The Navier–Stokes equations, in their full and simplified forms, help with the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of pollution, and many other problems. Coupled with Maxwell's equations, they can be used to model and study magnetohydrodynamics. The Navier–Stokes equations are also of great interest in a purely mathematical sense. Despite their wide range of practical uses, it has not yet been proven whether smooth solutions always exist in three dimensions—i.e., whether t" https://en.wikipedia.org/wiki/ITU-T%20Study%20Group%2015,"The ITU-T Study Group 15 (SG15) 'Transport' is a standardization committee of ITU-T concerned with networks, technologies and infrastructures for transport, access and home. It responsible for standards such as GPON, G.fast, etc. Administratively, SG15 is a statutory meeting of the World Telecommunication Standardization Assembly (WTSA), which creates the ITU-T Study Groups and appoints their management teams. The secretariat is provided by the Telecommunication Standardization Bureau (under Director Chaesub Lee). The goal of SG15 is to produce recommendations (international standards) for networks. Area of work SG15 focuses on developing standards and recommendations related to optical transport networks, access network transport, and associated technologies. Some of the key responsibilities of SG15 include: Developing international standards for optical and transport networks, which covers fiber-optic communication systems, dense wavelength division multiplexing (DWDM), and synchronization aspects. Addressing issues related to access network transport, such as digital subscriber lines (DSL), gigabit-capable passive optical networks (GPON), and Ethernet passive optical networks (EPON). Developing recommendations for network management, control, and performance monitoring, as well as resilience, protection, and restoration mechanisms. SG15 collaborates with other ITU-T study groups, regional standardization bodies, and industry stakeholders to ensure a comprehensive and coordinated approach to global telecommunication standardization. See also ITU-T" https://en.wikipedia.org/wiki/Ternary%20fission,"Ternary fission is a comparatively rare (0.2 to 0.4% of events) type of nuclear fission in which three charged products are produced rather than two. As in other nuclear fission processes, other uncharged particles such as multiple neutrons and gamma rays are produced in ternary fission. Ternary fission may happen during neutron-induced fission or in spontaneous fission (the type of radioactive decay). About 25% more ternary fission happens in spontaneous fission compared to the same fission system formed after thermal neutron capture, illustrating that these processes remain physically slightly different, even after the absorption of the neutron, possibly because of the extra energy present in the nuclear reaction system of thermal neutron-induced fission. Quaternary fission, at 1 per 10 million fissions, is also known (see below). 
Products The most common nuclear fission process is ""binary fission."" It produces two charged asymmetrical fission products with maximally probable charged product at 95±15 and 135±15 u atomic mass. However, in this conventional fission of large nuclei, the binary process happens merely because it is the most energetically probable. In anywhere from 2 to 4 fissions per 1000 in a nuclear reactor, the alternative ternary fission process produces three positively charged fragments (plus neutrons, which are not charged and not counted in this reckoning). The smallest of the charged products may range from so small a charge and mass as a single proton (Z=1), up to as large a fragment as the nucleus of argon (Z=18). Although particles as large as argon nuclei may be produced as the smaller (third) charged product in the usual ternary fission, the most common small fragments from ternary fission are helium-4 nuclei, which make up about 90% of the small fragment products. This high incidence is related to the stability (high binding energy) of the alpha particle, which makes more energy available to the reaction. The second-most common" https://en.wikipedia.org/wiki/Transistor%20count,"The transistor count is the number of transistors in an electronic device (typically on a single substrate or ""chip""). It is the most common measure of integrated circuit complexity (although the majority of transistors in modern microprocessors are contained in the cache memories, which consist mostly of the same memory cell circuits replicated many times). The rate at which MOS transistor counts have increased generally follows Moore's law, which observed that the transistor count doubles approximately every two years. However, being directly proportional to the area of a chip, transistor count does not represent how advanced the corresponding manufacturing technology is: a better indication of this is the transistor density (the ratio of a chip's transistor count to its area). , the highest transistor count in flash memory is Micron's 2terabyte (3D-stacked) 16-die, 232-layer V-NAND flash memory chip, with 5.3trillion floating-gate MOSFETs (3bits per transistor). The highest transistor count in a single chip processor is that of the deep learning processor Wafer Scale Engine 2 by Cerebras. It has 2.6trillion MOSFETs in 84 exposed fields (dies) on a wafer, manufactured using TSMC's 7 nm FinFET process. As of 2023, the GPU with the highest transistor count is AMD's MI300X, built on TSMC's N5 process and totalling 153 billion MOSFETs. The highest transistor count in a consumer microprocessor is 134billion transistors, in Apple's ARM-based dual-die M2 Ultra system on a chip, which is fabricated using TSMC's 5 nm semiconductor manufacturing process. In terms of computer systems that consist of numerous integrated circuits, the supercomputer with the highest transistor count was the Chinese-designed Sunway TaihuLight, which has for all CPUs/nodes combined ""about 400 trillion transistors in the processing part of the hardware"" and ""the DRAM includes about 12 quadrillion transistors, and that's about 97 percent of all the transistors."" To compare, the smallest comp" https://en.wikipedia.org/wiki/Radial%20immunodiffusion,"Radial immunodiffusion (RID), Mancini immunodiffusion or single radial immunodiffusion assay, is an immunodiffusion technique used in immunology to determine the quantity or concentration of an antigen in a sample. 
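The transistor-count entry above cites Moore's observation that counts double roughly every two years and defines transistor density as the ratio of count to die area; the short sketch below simply turns those two statements into arithmetic, using made-up example figures rather than data for any real chip.

    # Doubling every ~2 years and transistor density (count / area), with hypothetical numbers.
    def project_count(initial_count, years, doubling_period_years=2.0):
        return initial_count * 2 ** (years / doubling_period_years)

    def transistor_density(count, die_area_mm2):
        return count / die_area_mm2          # transistors per mm^2

    count_2020 = 10e9                        # hypothetical 10-billion-transistor chip in 2020
    print(f"projected count after 10 years: {project_count(count_2020, 10):.2e}")
    print(f"density at 100 mm^2: {transistor_density(count_2020, 100):.2e} per mm^2")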
Description Preparation A solution containing antibody is added to a heated medium such as agar or agarose dissolved in buffered normal saline. The molten medium is then poured onto a microscope slide or into an open container, such as a Petri dish, and allowed to cool and form a gel. A solution containing the antigen is then placed in a well that is punched into the gel. The slide or container is then covered, closed or placed in a humidity box to prevent evaporation. The antigen diffuses radially into the medium, forming a circle of precipitin that marks the boundary between the antibody and the antigen. The diameter of the circle increases with time as the antigen diffuses into the medium, reacts with the antibody, and forms insoluble precipitin complexes. The antigen is quantitated by measuring the diameter of the precipitin circle and comparing it with the diameters of precipitin circles formed by known quantities or concentrations of the antigen. Antigen-antibody complexes are small and soluble when in antigen excess. Therefore, precipitation near the center of the circle is usually less dense than it is near the circle's outer edge, where antigen is less concentrated. Expansion of the circle reaches an endpoint and stops when free antigen is depleted and when antigen and antibody reach equivalence. However, the clarity and density of the circle's outer edge may continue to increase after the circle stops expanding. Interpretation For most antigens, the area and the square of the diameter of the circle at the circle's endpoint are directly proportional to the initial quantity of antigen and are inversely proportional to the concentration of antibody. Therefore, a graph that compares the quantities or concentrations of antigen in the origin" https://en.wikipedia.org/wiki/Intrusion%20tolerance,"Intrusion tolerance is a fault-tolerant design approach to defending information systems against malicious attacks. In that sense, it is also a computer security approach. Abandoning the conventional aim of preventing all intrusions, intrusion tolerance instead calls for triggering mechanisms that prevent intrusions from leading to a system security failure. Distributed computing In distributed computing there are two major variants of intrusion tolerance mechanisms: mechanisms based on redundancy, such as the Byzantine fault tolerance, as well as mechanisms based on intrusion detection as implemented in intrusion detection system) and intrusion reaction. Intrusion-tolerant server architectures Intrusion-tolerance has started to influence the design of server architectures in academic institutions, and industry. Examples of such server architectures include KARMA, Splunk IT Service Intelligence (ITSI), project ITUA, and the practical Byzantine Fault Tolerance (pBFT) model. See also Intrusion detection system evasion techniques" https://en.wikipedia.org/wiki/Necessity%20and%20sufficiency,"In logic and mathematics, necessity and sufficiency are terms used to describe a conditional or implicational relationship between two statements. For example, in the conditional statement: ""If then "", is necessary for , because the truth of is guaranteed by the truth of . (Equivalently, it is impossible to have without , or the falsity of ensures the falsity of .) Similarly, is sufficient for , because being true always implies that is true, but not being true does not always imply that is not true. 
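Writing the relationship described in the necessity-and-sufficiency entry above in explicit notation, with S the sufficient condition and N the necessary one to match the article's later "if S, then N" phrasing:

    % S sufficient for N is the same statement as N necessary for S (contrapositive equivalence)
    (S \Rightarrow N) \;\Longleftrightarrow\; (\neg N \Rightarrow \neg S)

    % "necessary and sufficient" is the biconditional: both statements true or both false
    S \text{ is necessary and sufficient for } N \;\Longleftrightarrow\; (S \Leftrightarrow N)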
In general, a necessary condition is one (possibly one of multiple conditions) that must be present in order for another condition to occur, while a sufficient condition is one that produces the said condition. The assertion that a statement is a ""necessary and sufficient"" condition of another means that the former statement is true if and only if the latter is true. That is, the two statements must be either simultaneously true, or simultaneously false. In ordinary English (also natural language) ""necessary"" and ""sufficient"" indicate relations between conditions or states of affairs, not statements. For example, being a male is a necessary condition for being a brother, but it is not sufficient—while being a male sibling is a necessary and sufficient condition for being a brother. Any conditional statement consists of at least one sufficient condition and at least one necessary condition. In data analytics, necessity and sufficiency can refer to different causal logics, where Necessary Condition Analysis and Qualitative Comparative Analysis can be used as analytical techniques for examining necessity and sufficiency of conditions for a particular outcome of interest. Definitions In the conditional statement, ""if S, then N"", the expression represented by S is called the antecedent, and the expression represented by N is called the consequent. This conditional statement may be written in several equivalent ways, such as ""N if S"", ""S only if N"", ""S implies " https://en.wikipedia.org/wiki/Random%20flip-flop,"Random flip-flop (RFF) is a theoretical concept of a non-sequential logic circuit capable of generating true randomness. By definition, it operates as an ""ordinary"" edge-triggered clocked flip-flop, except that its clock input acts randomly and with probability p = 1/2. Unlike Boolean circuits, which behave deterministically, random flip-flop behaves non-deterministically. By definition, random flip-flop is electrically compatible with Boolean logic circuits. Together with them, RFF makes up a full set of logic circuits capable of performing arbitrary algorithms, namely to realize Probabilistic Turing machine. Symbol Random flip-flop comes in all varieties in which ordinary, edge triggered clocked flip-flop does, for example: D-type random flip-flop (DRFF). T-type random flip-flop (TRFF), JK-type random flip-flop (JKRFF), etc. Symbol for DRFF, TRFF and JKRFF are shown in the Fig. 1. While varieties are possible, not all of them are needed: a single RFF type can be used to emulate all other types. Emulation of one type of RFF by the other type of RFF can be done using the same additional gates circuitry as for ordinary flip-flops. Examples are shown in the Fig. 2. Practical realization of random flip-flip By definition, action of a theoretical RFF is truly random. This is difficult to achieve in practice and is probably best realized through use of physical randomness. A RFF, based on quantum-random effect of photon emission in semiconductor and subsequent detection, has been demonstrated to work well up to a clock frequency of 25 MHz. At a higher clock frequency, subsequent actions of the RFF become correlated. This RFF has been built using bulk components and the effort resulted only in a handful of units. Recently, a monolithic chip containing 2800 integrated RFFs based on quantum randomness has been demonstrated in Bipolar-CMOS-DMOS (BCD) process. 
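As a software analogue of the random flip-flop described above, the sketch below models a D-type RFF whose clock edge takes effect with probability 1/2, and wires it in the usual T configuration (D fed from the inverted output) to produce random bits. This is only a behavioural toy driven by ordinary pseudorandom numbers, not a true hardware RFF, and the class and method names are invented for the illustration.

    # Behavioural toy model of a D-type random flip-flop (names are illustrative only).
    import random

    class DRandomFlipFlop:
        def __init__(self, p=0.5):
            self.q = 0
            self.p = p                     # probability that a clock edge actually acts
        def clock(self, d):
            # On each clock edge, capture D only with probability p; otherwise hold state.
            if random.random() < self.p:
                self.q = d
            return self.q

    # T-type behaviour emulated the usual way: feed the inverted output back into D.
    rff = DRandomFlipFlop()
    bits = [rff.clock(d=1 - rff.q) for _ in range(16)]   # request a toggle on every edge
    print("random bits:", "".join(map(str, bits)))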
Applications and prospects One straightforward application of a RFF is generation of random bits, as shown " https://en.wikipedia.org/wiki/EEMBC,"EEMBC, the Embedded Microprocessor Benchmark Consortium, is a non-profit, member-funded organization formed in 1997, focused on the creation of standard benchmarks for the hardware and software used in embedded systems. The goal of its members is to make EEMBC benchmarks an industry standard for evaluating the capabilities of embedded processors, compilers, and the associated embedded system implementations, according to objective, clearly defined, application-based criteria. EEMBC members may contribute to the development of benchmarks, vote at various stages before public distribution, and accelerate testing of their platforms through early access to benchmarks and associated specifications. Most Popular Benchmark Working Groups In chronological order of development: AutoBench 1.1 - single-threaded code for automotive, industrial, and general-purpose applications Networking - single-threaded code associated with moving packets in networking applications. MultiBench - multi-threaded code for testing scalability of multicore processors. CoreMark - measures the performance of central processing units (CPU) used in embedded systems BXBench - system benchmark measuring the web browsing user-experience, from the click/touch on a URL to final page rendered on the screen, and is not limited to measuring only JavaScript execution. AndEBench-Pro - system benchmark providing a standardized, industry-accepted method of evaluating Android platform performance. It's available for free download in Google Play. FPMark - multi-threaded code for both single- and double-precision floating-point workloads, as well as small, medium, and large data sets. ULPMark - energy-measuring benchmark for ultra-low power microcontrollers; benchmarks include ULPMark-Core (with a focus on microcontroller core activity and sleep modes) and ULPMark-Peripheral (with a focus on microcontroller peripheral activity such as Analog-to-digital converter, Serial Peripheral Interface Bus, Real-time " https://en.wikipedia.org/wiki/Outline%20of%20regression%20analysis,"The following outline is provided as an overview of and topical guide to regression analysis: Regression analysis – use of statistical techniques for learning about the relationship between one or more dependent variables (Y) and one or more independent variables (X). 
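Following the definition of regression analysis above, here is a minimal ordinary-least-squares example in closed form (the normal equations) on synthetic data; it is meant only to anchor terms that appear in the topic list that follows, such as ordinary least squares and the coefficient of determination.

    # Ordinary least squares via the normal equations, on synthetic data.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one regressor
    beta_true = np.array([2.0, 0.7])
    y = X @ beta_true + rng.normal(scale=0.5, size=n)

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)             # normal equations
    residuals = y - X @ beta_hat
    r_squared = 1 - residuals @ residuals / ((y - y.mean()) @ (y - y.mean()))
    print("estimated coefficients:", beta_hat)
    print("coefficient of determination R^2:", round(r_squared, 3))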
Overview articles Regression analysis Linear regression Non-statistical articles related to regression Least squares Linear least squares (mathematics) Non-linear least squares Least absolute deviations Curve fitting Smoothing Cross-sectional study Basic statistical ideas related to regression Conditional expectation Correlation Correlation coefficient Mean square error Residual sum of squares Explained sum of squares Total sum of squares Visualization Scatterplot Linear regression based on least squares General linear model Ordinary least squares Generalized least squares Simple linear regression Trend estimation Ridge regression Polynomial regression Segmented regression Nonlinear regression Generalized linear models Generalized linear models Logistic regression Multinomial logit Ordered logit Probit model Multinomial probit Ordered probit Poisson regression Maximum likelihood Cochrane–Orcutt estimation Computation Numerical methods for linear least squares Inference for regression models F-test t-test Lack-of-fit sum of squares Confidence band Coefficient of determination Multiple correlation Scheffé's method Challenges to regression modeling Autocorrelation Cointegration Multicollinearity Homoscedasticity and heteroscedasticity Lack of fit Non-normality of errors Outliers Diagnostics for regression models Regression model validation Studentized residual Cook's distance Variance inflation factor DFFITS Partial residual plot Partial regression plot Leverage Durbin–Watson statistic Condition number Formal aids to model selection Model selection Mallows's Cp Akaike information criterion Bayesian information criterion Hannan–Q" https://en.wikipedia.org/wiki/Computer%20bureau,"A computer bureau is a service bureau providing computer services. Computer bureaus developed during the early 1960s, following the development of time-sharing operating systems. These allowed the services of a single large and expensive mainframe computer to be divided up and sold as a fungible commodity. Development of telecommunications and the first modems encouraged the growth of computer bureau as they allowed immediate access to the computer facilities from a customer's own premises. The computer bureau model shrank during the 1980s, as cheap commodity computers, particularly the PC clone but also the minicomputer allowed services to be hosted on-premises. See also Batch processing Cloud computing Grid computing Service Bureau Corporation Utility computing" https://en.wikipedia.org/wiki/Network%20forensics,"Network forensics is a sub-branch of digital forensics relating to the monitoring and analysis of computer network traffic for the purposes of information gathering, legal evidence, or intrusion detection. Unlike other areas of digital forensics, network investigations deal with volatile and dynamic information. Network traffic is transmitted and then lost, so network forensics is often a pro-active investigation. Network forensics generally has two uses. The first, relating to security, involves monitoring a network for anomalous traffic and identifying intrusions. An attacker might be able to erase all log files on a compromised host; network-based evidence might therefore be the only evidence available for forensic analysis. The second form relates to law enforcement. In this case analysis of captured network traffic can include tasks such as reassembling transferred files, searching for keywords and parsing human communication such as emails or chat sessions. 
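Several diagnostics named in the regression outline above (multicollinearity, variance inflation factor) can be computed directly from auxiliary regressions. The sketch below does this with plain NumPy on synthetic, deliberately correlated regressors, as one illustration of how such listed diagnostics are obtained.

    # Variance inflation factors from auxiliary regressions (pure NumPy, synthetic data).
    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)      # deliberately collinear with x1
    x3 = rng.normal(size=n)
    X = np.column_stack([x1, x2, x3])

    def vif(X, j):
        """VIF_j = 1 / (1 - R^2) from regressing column j on the remaining columns."""
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        resid = y - Z @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        return 1.0 / (1.0 - r2)

    for j in range(X.shape[1]):
        print(f"VIF of regressor {j + 1}: {vif(X, j):.1f}")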
Two systems are commonly used to collect network data; a brute force ""catch it as you can"" and a more intelligent ""stop look listen"" method. Overview Network forensics is a comparatively new field of forensic science. The growing popularity of the Internet in homes means that computing has become network-centric and data is now available outside of disk-based digital evidence. Network forensics can be performed as a standalone investigation or alongside a computer forensics analysis (where it is often used to reveal links between digital devices or reconstruct how a crime was committed). Marcus Ranum is credited with defining Network forensics as ""the capture, recording, and analysis of network events in order to discover the source of security attacks or other problem incidents"". Compared to computer forensics, where evidence is usually preserved on disk, network data is more volatile and unpredictable. Investigators often only have material to examine if packet filters, firewalls, and intrusion dete" https://en.wikipedia.org/wiki/Biological%20target,"A biological target is anything within a living organism to which some other entity (like an endogenous ligand or a drug) is directed and/or binds, resulting in a change in its behavior or function. Examples of common classes of biological targets are proteins and nucleic acids. The definition is context-dependent, and can refer to the biological target of a pharmacologically active drug compound, the receptor target of a hormone (like insulin), or some other target of an external stimulus. Biological targets are most commonly proteins such as enzymes, ion channels, and receptors. Mechanism The external stimulus (i.e., the drug or ligand) physically binds to (""hits"") the biological target. The interaction between the substance and the target may be: noncovalent – A relatively weak interaction between the stimulus and the target where no chemical bond is formed between the two interacting partners and hence the interaction is completely reversible. reversible covalent – A chemical reaction occurs between the stimulus and target in which the stimulus becomes chemically bonded to the target, but the reverse reaction also readily occurs in which the bond can be broken. irreversible covalent – The stimulus is permanently bound to the target through irreversible chemical bond formation. Depending on the nature of the stimulus, the following can occur: There is no direct change in the biological target, but the binding of the substance prevents other endogenous substances (such as activating hormones) from binding to the target. Depending on the nature of the target, this effect is referred as receptor antagonism, enzyme inhibition, or ion channel blockade. A conformational change in the target is induced by the stimulus which results in a change in target function. This change in function can mimic the effect of the endogenous substance in which case the effect is referred to as receptor agonism (or channel or enzyme activation) or be the opposite of the endog" https://en.wikipedia.org/wiki/List%20of%20geometric%20topology%20topics,"This is a list of geometric topology topics. 
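The two collection strategies named at the start of the network-forensics entry above, "catch it as you can" versus "stop, look, listen", reduce to storing everything versus filtering at capture time. The toy sketch below uses made-up packet records and no real capture library; it is only meant to show that contrast.

    # Toy contrast of the two collection strategies (no real packet-capture API used).
    packets = [
        {"src": "10.0.0.5", "dst": "10.0.0.9", "port": 80,  "payload": b"GET /"},
        {"src": "10.0.0.7", "dst": "10.0.0.9", "port": 22,  "payload": b"SSH-2.0"},
        {"src": "10.0.0.5", "dst": "10.0.0.9", "port": 443, "payload": b"\x16\x03"},
    ]

    def catch_it_as_you_can(stream):
        return list(stream)                          # store everything, analyse later

    def stop_look_listen(stream, predicate):
        return [p for p in stream if predicate(p)]   # keep only traffic judged interesting

    full_archive = catch_it_as_you_can(packets)
    web_only = stop_look_listen(packets, lambda p: p["port"] in (80, 443))
    print(len(full_archive), "packets archived;", len(web_only), "kept by the filter")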
Low-dimensional topology Knot theory Knot (mathematics) Link (knot theory) Wild knots Examples of knots Unknot Trefoil knot Figure-eight knot (mathematics) Borromean rings Types of knots Torus knot Prime knot Alternating knot Hyperbolic link Knot invariants Crossing number Linking number Skein relation Knot polynomials Alexander polynomial Jones polynomial Knot group Writhe Quandle Seifert surface Braids Braid theory Braid group Kirby calculus Surfaces Genus (mathematics) Examples Positive Euler characteristic 2-disk Sphere Real projective plane Zero Euler characteristic Annulus Möbius strip Torus Klein bottle Negative Euler characteristic The boundary of the pretzel is a genus three surface Embedded/Immersed in Euclidean space Cross-cap Boy's surface Roman surface Steiner surface Alexander horned sphere Klein bottle Mapping class group Dehn twist Nielsen–Thurston classification Three-manifolds Moise's Theorem (see also Hauptvermutung) Poincaré conjecture Thurston elliptization conjecture Thurston's geometrization conjecture Hyperbolic 3-manifolds Spherical 3-manifolds Euclidean 3-manifolds, Bieberbach Theorem, Flat manifolds, Crystallographic groups Seifert fiber space Heegaard splitting Waldhausen conjecture Compression body Handlebody Incompressible surface Dehn's lemma Loop theorem (aka the Disk theorem) Sphere theorem Haken manifold JSJ decomposition Branched surface Lamination Examples 3-sphere Torus bundles Surface bundles over the circle Graph manifolds Knot complements Whitehead manifold Invariants Fundamental group Heegaard genus tri-genus Analytic torsion Manifolds in general Orientable manifold Connected sum Jordan-Schönflies theorem Signature (topology) Handle decomposition Handlebody h-cobordism theorem s-cobordism theorem Manifold decomposition Hilbert-Smith conjecture Mapping class group Orbifolds Examples Exotic sphere Homology sphere Lens space I-bundle See also topology glossary List of topo" https://en.wikipedia.org/wiki/Ramanujan%E2%80%93Soldner%20constant,"In mathematics, the Ramanujan–Soldner constant (also called the Soldner constant) is a mathematical constant defined as the unique positive zero of the logarithmic integral function. It is named after Srinivasa Ramanujan and Johann Georg von Soldner. Its value is approximately μ ≈ 1.45136923488338105028396848589202744949303228… Since the logarithmic integral is defined by then using we have thus easing calculation for numbers greater than μ. Also, since the exponential integral function satisfies the equation the only positive zero of the exponential integral occurs at the natural logarithm of the Ramanujan–Soldner constant, whose value is approximately ln(μ) ≈ 0.372507410781366634461991866… External links Mathematical constants Srinivasa Ramanujan" https://en.wikipedia.org/wiki/Scotobiology,"Scotobiology is the study of biology as directly and specifically affected by darkness, as opposed to photobiology, which describes the biological effects of light. Overview The science of scotobiology gathers together under a single descriptive heading a wide range of approaches to the study of the biology of darkness. This includes work on the effects of darkness on the behavior and metabolism of animals, plants, and microbes. Some of this work has been going on for over a century, and lays the foundation for understanding the importance of dark night skies, not only for humans but for all biological species. 
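The Ramanujan–Soldner entry above defines μ as the unique positive zero of the logarithmic integral and notes the corresponding zero of the exponential integral at ln(μ). Since li(x) = Ei(ln x), the value can be checked numerically with SciPy, as sketched below; the bracket [1.1, 2.0] is simply an assumed interval known to contain the zero.

    # Numerically recover the Ramanujan–Soldner constant as the zero of li(x) = Ei(ln x).
    import numpy as np
    from scipy.special import expi          # exponential integral Ei
    from scipy.optimize import brentq

    li = lambda x: expi(np.log(x))          # logarithmic integral for x > 0, x != 1
    mu = brentq(li, 1.1, 2.0)               # the zero lies between 1.1 and 2.0
    print(f"mu     ~ {mu:.15f}")            # ~ 1.451369234883381
    print(f"ln(mu) ~ {np.log(mu):.15f}")    # ~ 0.372507410781367, zero of Ei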
The great majority of biological systems have evolved in a world of alternating day and night and have become irrevocably adapted to and dependent on the daily and seasonally changing patterns of light and darkness. Light is essential for many biological activities such as sight and photosynthesis. These are the focus of the science of photobiology. But the presence of uninterrupted periods of darkness, as well as their alternation with light, is just as important to biological behaviour. Scotobiology studies the positive responses of biological systems to the presence of darkness, and not merely the negative effects caused by the absence of light. Effects of darkness Many of the biological and behavioural activities of plants, animals (including birds and amphibians), insects, and microorganisms are either adversely affected by light pollution at night or can only function effectively either during or as the consequence of nightly darkness. Such activities include foraging, breeding and social behavior in higher animals, amphibians, and insects, which are all affected in various ways if light pollution occurs in their environment. These are not merely photobiological phenomena; light pollution acts by interrupting critical dark-requiring processes. But perhaps the most important scotobiological phenomena relate to the regular periodic alternation of" https://en.wikipedia.org/wiki/ARKive,"ARKive was a global initiative with the mission of ""promoting the conservation of the world's threatened species, through the power of wildlife imagery"", which it did by locating and gathering films, photographs and audio recordings of the world's species into a centralised digital archive. Its priority was the completion of audio-visual profiles for the c. 17,000 species on the IUCN Red List of Threatened Species. The project was an initiative of Wildscreen, a UK-registered educational charity, based in Bristol. The technical platform was created by Hewlett-Packard, as part of the HP Labs' Digital Media Systems research programme. ARKive had the backing of leading conservation organisations, including BirdLife International, Conservation International, International Union for Conservation of Nature (IUCN), the United Nations' World Conservation Monitoring Centre (UNEP-WCMC), and the World Wide Fund for Nature (WWF), as well as leading academic and research institutions, such as the Natural History Museum; Royal Botanic Gardens, Kew; and the Smithsonian Institution. It was a member of the Institutional Council of the Encyclopedia of Life. Two ARKive layers for Google Earth, featuring endangered species and species in the Gulf of Mexico were produced by Google Earth Outreach. The first of these was launched in April 2008 by Wildscreen's Patron, Sir David Attenborough. The website closed on 15 February 2019; its collection of images and videos remains securely stored for future generations. History The project formally was launched on 20 May 2003 by its patron, the UK-based natural history presenter, Sir David Attenborough, a long-standing colleague and friend of its chief instigator, the late Christopher Parsons, a former Head of the BBC Natural History Unit. Parsons never lived to see the fruition of the project, succumbing to cancer in November 2002 at the age of 70. 
Parsons identified a need to provide a centralised safe haven for wildlife films and photogr" https://en.wikipedia.org/wiki/Biosignature,"A biosignature (sometimes called chemical fossil or molecular fossil) is any substance – such as an element, isotope, molecule, or phenomenon that provides scientific evidence of past or present life. Measurable attributes of life include its complex physical or chemical structures and its use of free energy and the production of biomass and wastes. A biosignature can provide evidence for living organisms outside the Earth and can be directly or indirectly detected by searching for their unique byproducts. Types In general, biosignatures can be grouped into ten broad categories: Isotope patterns: Isotopic evidence or patterns that require biological processes. Chemistry: Chemical features that require biological activity. Organic matter: Organics formed by biological processes. Minerals: Minerals or biomineral-phases whose composition and/or morphology indicate biological activity (e.g., biomagnetite). Microscopic structures and textures: Biologically formed cements, microtextures, microfossils, and films. Macroscopic physical structures and textures: Structures that indicate microbial ecosystems, biofilms (e.g., stromatolites), or fossils of larger organisms. Temporal variability: Variations in time of atmospheric gases, reflectivity, or macroscopic appearance that indicates life's presence. Surface reflectance features: Large-scale reflectance features due to biological pigments could be detected remotely. Atmospheric gases: Gases formed by metabolic and/or aqueous processes, which may be present on a planet-wide scale. Technosignatures: Signatures that indicate a technologically advanced civilization. Viability Determining whether a potential biosignature is worth investigating is a fundamentally complicated process. Scientists must consider any and every possible alternate explanation before concluding that something is a true biosignature. This includes investigating the minute details that make other planets unique and understanding when there is a deviat" https://en.wikipedia.org/wiki/Permanent%20vegetative%20cover,"Permanent vegetative cover refers to trees, perennial bunchgrasses and grasslands, legumes, and shrubs with an expected life span of at least 5 years. In the United States, permanent cover is required on cropland entered into the Conservation Reserve Program." https://en.wikipedia.org/wiki/Causality%20%28physics%29,"Physical causality is a physical relationship between causes and effects. It is considered to be fundamental to all natural sciences and behavioural sciences, especially physics. Causality is also a topic studied from the perspectives of philosophy, statistics and logic. Causality means that an effect can not occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause can not have an effect outside its front (future) light cone. Macroscopic vs microscopic causality Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer fast than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version. 
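The causality entry above states the constraint in terms of light cones: an effect must lie inside the future light cone of its cause, and a cause must lie inside the past light cone of its effect. In units where c = 1 this is just a sign condition relating the time separation to the spatial separation, which the short sketch below checks for two events; the event coordinates are arbitrary illustrative numbers.

    # Check whether event B lies in the future light cone of event A (units with c = 1).
    import math

    def in_future_light_cone(event_a, event_b):
        """Events are (t, x, y, z). B can be causally influenced by A only if
        B is later than A and the separation is timelike or lightlike."""
        dt = event_b[0] - event_a[0]
        dr = math.dist(event_a[1:], event_b[1:])
        return dt > 0 and dt >= dr          # with c = 1, 'dt >= dr' means inside or on the cone

    cause  = (0.0, 0.0, 0.0, 0.0)
    effect = (5.0, 3.0, 0.0, 0.0)           # illustrative coordinates
    print("causally allowed:", in_future_light_cone(cause, effect))   # True: dt=5 >= dr=3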
Macroscopic causality In classical physics, an effect cannot occur before its cause which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In both Einstein's theory of special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, ""elsewhere"", have to commute, so the order of observations or measurements of such observables do not impact each other. Another requirement of causality is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observat" https://en.wikipedia.org/wiki/IP-XACT,"IP-XACT, also known as IEEE 1685, is an XML format that defines and describes individual, re-usable electronic circuit designs (individual pieces of intellectual property, or IPs) to facilitate their use in creating integrated circuits (i.e. microchips). IP-XACT was created by the SPIRIT Consortium as a standard to enable automated configuration and integration through tools and evolving into an IEEE standard. The goals of the standard are to ensure delivery of compatible component descriptions, such as IPs, from multiple component vendors, to enable exchanging complex component libraries between electronic design automation (EDA) tools for SoC design (design environments), to describe configurable components using metadata, and to enable the provision of EDA vendor-neutral scripts for component creation and configuration (generators, configurators). Approved as IEEE 1685-2009 on December 9, 2009, published on February 18, 2010. Superseded by IEEE 1685-2014. IEEE 1685-2009 was adopted as IEC 62014-4:2015. In June 2023, the supplemental material for standard IEEE 1685-2022 IP-XACT was approved by Accellera. Overview Conformance checks for eXtensible Markup Language (XML) data designed to describe electronic systems are formulated by this standard. The meta-data forms that are standardized include components, systems, bus interfaces and connections, abstractions of those buses, and details of the components including address maps, register and field descriptions, and file set descriptions for use in automating design, verification, documentation, and use flows for electronic systems. A set of XML schemas of the form described by the World Wide Web Consortium (W3C(R)) and a set of semantic consistency rules (SCRs) are included. A generator interface that is portable across tool environments is provided. The specified combination of methodology-independent meta-data and the tool-independent mechanism for accessing that data provides for portability of design d" https://en.wikipedia.org/wiki/Glossary%20of%20invasion%20biology%20terms,"The need for a clearly defined and consistent invasion biology terminology has been acknowledged by many sources. Invasive species, or invasive exotics, is a nomenclature term and categorization phrase used for flora and fauna, and for specific restoration-preservation processes in native habitats. 
Invasion biology is the study of these organisms and the processes of species invasion. The terminology in this article contains definitions for invasion biology terms in common usage today, taken from accessible publications. References for each definition are included. Terminology relates primarily to invasion biology terms with some ecology terms included to clarify language and phrases on linked articles. Introduction Definitions of ""invasive non-indigenous species have been inconsistent"", which has led to confusion both in literature and in popular publications (Williams and Meffe 2005). Also, many scientists and managers feel that there is no firm definition of non-indigenous species, native species, exotic species, ""and so on, and ecologists do not use the terms consistently."" (Shrader-Frechette 2001) Another question asked is whether current language is likely to promote ""effective and appropriate action"" towards invasive species through cohesive language (Larson 2005). Biologists today spend more time and effort on invasive species work because of the rapid spread, economic cost, and effects on ecological systems, so the importance of effective communication about invasive species is clear. (Larson 2005) Controversy in invasion biology terms exists because of past usage and because of preferences for certain terms. Even for biologists, defining a species as native may be far from being a straightforward matter of biological classification based on the location or the discipline a biologist is working in (Helmreich 2005). Questions often arise as to what exactly makes a species native as opposed to non-native, because some non-native species have no kno" https://en.wikipedia.org/wiki/Glycobiology,"Defined in the narrowest sense, glycobiology is the study of the structure, biosynthesis, and biology of saccharides (sugar chains or glycans) that are widely distributed in nature. Sugars or saccharides are essential components of all living things and aspects of the various roles they play in biology are researched in various medical, biochemical and biotechnological fields. History According to Oxford English Dictionary the specific term glycobiology was coined in 1988 by Prof. Raymond Dwek to recognize the coming together of the traditional disciplines of carbohydrate chemistry and biochemistry. This coming together was as a result of a much greater understanding of the cellular and molecular biology of glycans. However, as early as the late nineteenth century pioneering efforts were being made by Emil Fisher to establish the structure of some basic sugar molecules. Each year the Society of Glycobiology awards the Rosalind Kornfeld award for lifetime achievement in the field of glycobiology. Glycoconjugates Sugars may be linked to other types of biological molecule to form glycoconjugates. The enzymatic process of glycosylation creates sugars/saccharides linked to themselves and to other molecules by the glycosidic bond, thereby producing glycans. Glycoproteins, proteoglycans and glycolipids are the most abundant glycoconjugates found in mammalian cells. They are found predominantly on the outer cell membrane and in secreted fluids. Glycoconjugates have been shown to be important in cell-cell interactions due to the presence on the cell surface of various glycan binding receptors in addition to the glycoconjugates themselves. 
In addition to their function in protein folding and cellular attachment, the N-linked glycans of a protein can modulate the protein's function, in some cases acting as an on-off switch. Glycomics ""Glycomics, analogous to genomics and proteomics, is the systematic study of all glycan structures of a given cell type or organism"" and is" https://en.wikipedia.org/wiki/Embedded%20C%2B%2B,"Embedded C++ (EC++) is a dialect of the C++ programming language for embedded systems. It was defined by an industry group led by major Japanese central processing unit (CPU) manufacturers, including NEC, Hitachi, Fujitsu, and Toshiba, to address the shortcomings of C++ for embedded applications. The goal of the effort is to preserve the most useful object-oriented features of the C++ language yet minimize code size while maximizing execution efficiency and making compiler construction simpler. The official website states the goal as ""to provide embedded systems programmers with a subset of C++ that is easy for the average C programmer to understand and use"". Differences from C++ Embedded C++ excludes some features of C++. Some compilers, such as those from Green Hills and IAR Systems, allow certain features of ISO/ANSI C++ to be enabled in Embedded C++. IAR Systems calls this ""Extended Embedded C++"". Compilation An EC++ program can be compiled with any C++ compiler. But, a compiler specific to EC++ may have an easier time doing optimization. Compilers specific to EC++ are provided by companies such as: IAR Systems Freescale Semiconductor, (spin-off from Motorola in 2004 who had acquired Metrowerks in 1999) Tasking Software, part of Altium Limited Green Hills Software Criticism The language has had a poor reception with many expert C++ programmers. In particular, Bjarne Stroustrup says, ""To the best of my knowledge EC++ is dead (2004), and if it isn't it ought to be."" In fact, the official English EC++ website has not been updated since 2002. Nevertheless, a restricted subset of C++ (based on Embedded C++) has been adopted by Apple Inc. as the exclusive programming language to create all I/O Kit device drivers for Apple's macOS, iPadOS and iOS operating systems of the popular Macintosh, iPhone, and iPad products. Apple engineers felt the exceptions, multiple inheritance, templates, and runtime type information features of standard C++ were either insuffici" https://en.wikipedia.org/wiki/Thanatocoenosis,"Thanatocoenosis (from Greek language thanatos - death and koinos - common) are all the embedded fossils at a single discovery site. This site may be referred to as a ""death assemblage"". Such groupings are composed of fossils of organisms which may not have been associated during life, often originating from different habitats. Examples include marine fossils having been brought together by a water current or animal bones having been deposited by a predator. A site containing thanatocoenosis elements can also lose clarity in its faunal history by more recent intruding factors such as burrowing microfauna or stratigraphic disturbances born from anthropogenic methods. This term differs from a related term, biocoenosis, which refers to an assemblage in which all organisms within the community interacted and lived together in the same habitat while alive. A biocoenosis can lead to a thanatocoenosis if disrupted significantly enough to have its dead/fossilized matter scattered. 
A death community/thanatocoenosis is developed by multiple taphonomic processes (those being ones relating to the different ways in which organismal remains pass through strata and are decomposed and preserved) that are generally categorized into two groups: biostratinomy and diagenesis. As a whole, thanatocoenoses are divided into two categories as well: autochthonous and allochthonous. Death assemblages and thanatocoenoses can provide insight into the process of early-stage fossilization, as well as information about the species within a given ecosystem. The study of taphonomy can aid in furthering the understanding of the ecological past of species and their fossil records if used in conjunction with research on death assemblages from modern ecosystems. History The term ""thanatocoenosis"" was originally created by Erich Wasmund in 1926, and he was the first to define both the similarities and contrasts between these death communities and biocoenoses. Due to confusion between some distinctions" https://en.wikipedia.org/wiki/Data%20communication,"Data communication or digital communications, including data transmission and data reception, is the transfer and reception of data in the form of a digital bitstream or a digitized analog signal transmitted over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication using radio spectrum, storage media and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radiowave, microwave, or infrared signal. Analog transmission is a method of conveying voice, data, image, signal or video information using a continuous signal which varies in amplitude, phase, or some other property in proportion to that of a variable. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of continuously varying waveforms (passband transmission), using a digital modulation method. The passband modulation and corresponding demodulation is carried out by modem equipment. According to the most common definition of digital signal, both baseband and passband signals representing bit-streams are considered as digital transmission, while an alternative definition only considers the baseband signal as digital, and passband transmission of digital data as a form of digital-to-analog conversion. Data transmitted may be digital messages originating from a data source, for example, a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream, for example, using pulse-code modulation or more advanced source coding schemes. This source coding and decoding is carried out by codec equipment. Distinction between related subjects Courses and textbooks in the field of data transmission as well as digital transmission and digital communications have similar content. Digital transmission or data transmission traditionally belongs to t" https://en.wikipedia.org/wiki/ISCSI%20Extensions%20for%20RDMA,"The iSCSI Extensions for RDMA (iSER) is a computer network protocol that extends the Internet Small Computer System Interface (iSCSI) protocol to use Remote Direct Memory Access (RDMA). 
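The data-communication passage above distinguishes baseband transmission, where a bitstream is mapped to pulses by a line code, from passband transmission using digital modulation. The following is a minimal sketch of one familiar line code (Manchester coding), used purely for illustration; real systems choose codes for clock recovery and spectral reasons, and the opposite polarity convention is equally common.

```python
# Minimal sketch of baseband line coding: bits are mapped to a pulse sequence
# before transmission. Manchester coding is used only as a familiar example.
def manchester_encode(bits):
    """Map each bit to a half-symbol pair: 1 -> (+1, -1), 0 -> (-1, +1)."""
    out = []
    for b in bits:
        out.extend([+1, -1] if b else [-1, +1])
    return out

def manchester_decode(symbols):
    """Recover bits from consecutive half-symbol pairs."""
    return [1 if symbols[i] > symbols[i + 1] else 0
            for i in range(0, len(symbols), 2)]

bits = [1, 0, 1, 1, 0]
line = manchester_encode(bits)
assert manchester_decode(line) == bits
print(line)   # [1, -1, -1, 1, 1, -1, 1, -1, -1, 1]
```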
RDMA is provided by either the Transmission Control Protocol (TCP) with RDMA services (iWARP) that uses existing Ethernet setup and therefore no need of huge hardware investment, RoCE (RDMA over Converged Ethernet) that does not need the TCP layer and therefore provides lower latency, or InfiniBand. It permits data to be transferred directly into and out of SCSI computer memory buffers (which connects computers to storage devices) without intermediate data copies and without much CPU intervention. History An RDMA consortium was announced on May 31, 2002, with a goal of product implementations by 2003. The consortium released their proposal in July, 2003. The protocol specifications were published as drafts in September 2004 in the Internet Engineering Task Force and issued as RFCs in October 2007. The OpenIB Alliance was renamed in 2007 to be the OpenFabrics Alliance, and then released an open source software package. Description The motivation for iSER is to use RDMA to avoid unnecessary data copying on the target and initiator. The Datamover Architecture (DA) defines an abstract model in which the movement of data between iSCSI end nodes is logically separated from the rest of the iSCSI protocol; iSER is one Datamover protocol. The interface between the iSCSI and a Datamover protocol, iSER in this case, is called Datamover Interface (DI). The main difference between the standard iSCSI and iSCSI over iSER is the execution of SCSI read/write commands. With iSER the target drives all data transfer (with the exception of iSCSI unsolicited data) by issuing RDMA write/read operations, respectively. When the iSCSI layer issues an iSCSI command PDU, it calls the Send_Control primitive, which is part of the DI. The Send_Control primitive sends the STag with the PDU. The iSER layer in " https://en.wikipedia.org/wiki/Etherloop,"Etherloop is a kind of DSL technology that combines the features of Ethernet and DSL. It allows the combination of voice and data transmission on standard phone lines. Under the right conditions it will allow speeds of up to 6 megabits per second over a distance of up to 6.4 km (21,000 feet). Etherloop uses half-duplex transmission, and as such, is less susceptible to interference caused by poor line quality, bridge taps, etc. Also, etherloop modems can train up through line filters (although it is not recommended to do this). Etherloop has been deployed by various internet service providers in areas where the loop length is very long or line quality is poor. Some Etherloop modems (those made by Elastic Networks) offer a ""Central Office mode"", in which two modems are connected back to back over a phone line and used as a LAN extension. An example of a situation where this would be done is to extend Ethernet to a building that is too far to reach with straight Ethernet. See also Ethernet in the first mile (especially 2BASE-TL)" https://en.wikipedia.org/wiki/Kawasaki%27s%20theorem,"Kawasaki's theorem or Kawasaki–Justin theorem is a theorem in the mathematics of paper folding that describes the crease patterns with a single vertex that may be folded to form a flat figure. It states that the pattern is flat-foldable if and only if alternatingly adding and subtracting the angles of consecutive folds around the vertex gives an alternating sum of zero. Crease patterns with more than one vertex do not obey such a simple criterion, and are NP-hard to fold. The theorem is named after one of its discoverers, Toshikazu Kawasaki. 
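The iSER description above says that when the iSCSI layer issues a command PDU it calls the Send_Control primitive of the Datamover Interface, and that an STag advertising the local buffer travels with the PDU so the target can drive RDMA transfers. The toy model below only illustrates that control flow; it is not a protocol implementation, and every name other than Send_Control and STag is hypothetical.

```python
# Illustrative toy model (not a protocol implementation) of the Datamover
# Interface flow described above: Send_Control registers a local buffer,
# associates an STag with it, and hands the command PDU plus STag onward.
# All class and field names other than Send_Control and STag are hypothetical.
from dataclasses import dataclass

@dataclass
class CommandPDU:
    opcode: str      # e.g. "SCSI_READ" or "SCSI_WRITE"
    buffer_id: int   # local buffer the data should land in / come from

class IserDatamover:
    def __init__(self):
        self._next_stag = 0x100
        self.advertised = {}           # STag -> buffer_id

    def send_control(self, pdu: CommandPDU):
        """DI primitive: advertise the buffer and send the PDU with its STag."""
        stag = self._next_stag
        self._next_stag += 1
        self.advertised[stag] = pdu.buffer_id
        return {"pdu": pdu, "stag": stag}   # conceptually, what goes on the wire

dm = IserDatamover()
wire = dm.send_control(CommandPDU("SCSI_READ", buffer_id=7))
print(hex(wire["stag"]), dm.advertised[wire["stag"]])   # 0x100 7
```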
However, several others also contributed to its discovery, and it is sometimes called the Kawasaki–Justin theorem or Husimi's theorem after other contributors, Jacques Justin and Kôdi Husimi. Statement A one-vertex crease pattern consists of a set of rays or creases drawn on a flat sheet of paper, all emanating from the same point interior to the sheet. (This point is called the vertex of the pattern.) Each crease must be folded, but the pattern does not specify whether the folds should be mountain folds or valley folds. The goal is to determine whether it is possible to fold the paper so that every crease is folded, no folds occur elsewhere, and the whole folded sheet of paper lies flat. To fold flat, the number of creases must be even. This follows, for instance, from Maekawa's theorem, which states that the number of mountain folds at a flat-folded vertex differs from the number of valley folds by exactly two folds. Therefore, suppose that a crease pattern consists of an even number of creases, and let be the consecutive angles between the creases around the vertex, in clockwise order, starting at any one of the angles. Then Kawasaki's theorem states that the crease pattern may be folded flat if and only if the alternating sum and difference of the angles adds to zero: An equivalent way of stating the same condition is that, if the angles are partitioned into two alternating subsets, then the sum of the angles in eith" https://en.wikipedia.org/wiki/Poisson%20bracket,"In mathematics and classical mechanics, the Poisson bracket is an important binary operation in Hamiltonian mechanics, playing a central role in Hamilton's equations of motion, which govern the time evolution of a Hamiltonian dynamical system. The Poisson bracket also distinguishes a certain class of coordinate transformations, called canonical transformations, which map canonical coordinate systems into canonical coordinate systems. A ""canonical coordinate system"" consists of canonical position and momentum variables (below symbolized by and , respectively) that satisfy canonical Poisson bracket relations. The set of possible canonical transformations is always very rich. For instance, it is often possible to choose the Hamiltonian itself as one of the new canonical momentum coordinates. In a more general sense, the Poisson bracket is used to define a Poisson algebra, of which the algebra of functions on a Poisson manifold is a special case. There are other general examples, as well: it occurs in the theory of Lie algebras, where the tensor algebra of a Lie algebra forms a Poisson algebra; a detailed construction of how this comes about is given in the universal enveloping algebra article. Quantum deformations of the universal enveloping algebra lead to the notion of quantum groups. All of these objects are named in honor of Siméon Denis Poisson. Properties Given two functions and that depend on phase space and time, their Poisson bracket is another function that depends on phase space and time. The following rules hold for any three functions of phase space and time: Anticommutativity Bilinearity Leibniz's rule Jacobi identity Also, if a function is constant over phase space (but may depend on time), then for any . 
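Kawasaki's condition stated above is easy to check numerically: for an even number of consecutive angles around the vertex (which total 360 degrees), the pattern is flat-foldable exactly when the alternating sum a1 - a2 + a3 - ... vanishes. A small sketch:

```python
import math

def kawasaki_flat_foldable(angles_deg, tol=1e-9):
    """Check Kawasaki's condition for a single-vertex crease pattern:
    an even number of consecutive angles, summing to 360 degrees, with
    alternating sum a1 - a2 + a3 - ... equal to zero."""
    if len(angles_deg) % 2 != 0:
        return False
    if not math.isclose(sum(angles_deg), 360.0, abs_tol=1e-6):
        raise ValueError("angles around the vertex must sum to 360 degrees")
    alt = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles_deg))
    return abs(alt) < tol

print(kawasaki_flat_foldable([90, 90, 90, 90]))      # True
print(kawasaki_flat_foldable([100, 80, 80, 100]))    # True  (100-80+80-100 = 0)
print(kawasaki_flat_foldable([120, 60, 90, 90]))     # False (alternating sum 60)
```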
Definition in canonical coordinates In canonical coordinates (also known as Darboux coordinates) on the phase space, given two functions and , the Poisson bracket takes the form The Poisson brackets of the canonical coordinates are " https://en.wikipedia.org/wiki/Electronic%20symbol,"An electronic symbol is a pictogram used to represent various electrical and electronic devices or functions, such as wires, batteries, resistors, and transistors, in a schematic diagram of an electrical or electronic circuit. These symbols are largely standardized internationally today, but may vary from country to country, or engineering discipline, based on traditional conventions. Standards for symbols The graphic symbols used for electrical components in circuit diagrams are covered by national and international standards, in particular: IEC 60617 (also known as BS 3939). There is also IEC 61131-3 – for ladder-logic symbols. JIC JIC (Joint Industrial Council) symbols as approved and adopted by the NMTBA (National Machine Tool Builders Association). They have been extracted from the Appendix of the NMTBA Specification EGPl-1967. ANSI Y32.2-1975 (also known as IEEE Std 315-1975 or CSA Z99-1975). IEEE Std 91/91a: graphic symbols for logic functions (used in digital electronics). It is referenced in ANSI Y32.2/IEEE Std 315. Australian Standard AS 1102 (based on a slightly modified version of IEC 60617; withdrawn without replacement with a recommendation to use IEC 60617). The number of standards leads to confusion and errors. Symbols usage is sometimes unique to engineering disciplines, and national or local variations to international standards exist. For example, lighting and power symbols used as part of architectural drawings may be different from symbols for devices used in electronics. Common electronic symbols Symbols shown are typical examples, not a complete list. Traces Grounds The shorthand for ground is GND. Optionally, the triangle in the middle symbol may be filled in. Sources Resistors It is very common for potentiometer and rheostat symbols to be used for many types of variable resistors, including trimmers. Capacitors Diodes Optionally, the triangle in these symbols may be filled in. Note: The words anode and cathode typically " https://en.wikipedia.org/wiki/Inverted%20sugar%20syrup,"Inverted sugar syrup, also called invert syrup, invert sugar, simple syrup, sugar syrup, sugar water, bar syrup, syrup USP, or sucrose inversion, is a syrup mixture of the monosaccharides glucose and fructose, that is made by hydrolytic saccharification of the disaccharide sucrose. This mixture's optical rotation is opposite to that of the original sugar, which is why it is called an invert sugar. It is 1.3x sweeter than table sugar, and foods that contain invert sugar retain moisture better and crystallize less easily than do those that use table sugar instead. Bakers, who call it invert syrup, may use it more than other sweeteners. Production Plain water Inverted sugar syrup can be made without acids or enzymes by heating it up alone: two parts granulated sugar and one part water, simmered for five to seven minutes, will be partly inverted. The amount of water can be increased to increase the time it takes to reach the desired final temperature, and increasing the time increases the amount of inversion that occurs. In general, higher final temperatures result in thicker syrups, and lower final temperatures, in thinner ones. Additives Commercially prepared enzyme-catalyzed solutions are inverted at . 
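In the canonical (Darboux) coordinates discussed above, the bracket of two phase-space functions reduces to the standard expression {f, g} = Σ_i (∂f/∂q_i ∂g/∂p_i − ∂f/∂p_i ∂g/∂q_i), and the canonical coordinates themselves satisfy {q_i, p_j} = δ_ij. A short sympy sketch evaluating this definition and confirming the anticommutativity property listed above (the example Hamiltonian is an arbitrary harmonic oscillator, not something from the article):

```python
# Canonical-coordinate Poisson bracket
#   {f, g} = sum_i (df/dq_i * dg/dp_i - df/dp_i * dg/dq_i)
# evaluated symbolically for one degree of freedom.
import sympy as sp

q, p = sp.symbols("q p")

def poisson_bracket(f, g, coords=((q, p),)):
    return sum(sp.diff(f, qi) * sp.diff(g, pi) - sp.diff(f, pi) * sp.diff(g, qi)
               for qi, pi in coords)

H = p**2 / 2 + q**2 / 2             # harmonic oscillator, unit mass and frequency
print(poisson_bracket(q, p))        # 1   (canonical relation {q, p} = 1)
print(poisson_bracket(q, H))        # p   (time evolution: dq/dt = {q, H})
print(sp.simplify(poisson_bracket(H, q) + poisson_bracket(q, H)))   # 0 (anticommutativity)
```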
The optimum pH for inversion is 5.0. Invertase is added at a rate of about 0.15% of the syrup's weight, and inversion time will be about 8 hours. When completed the syrup temperature is raised to inactivate the invertase, but the syrup is concentrated in a vacuum evaporator to preserve color. Though inverted sugar syrup can be made by heating table sugar in water alone, the reaction can be sped up by adding lemon juice, cream of tartar, or other catalysts, often without changing the flavor noticeably. Common sugar can be inverted quickly by mixing sugar and citric acid or cream of tartar at a ratio of about 1000:1 by weight and adding water. If lemon juice which is about five percent citric acid by weight is used instead then the ratio becomes 50:1. Such a mixtu" https://en.wikipedia.org/wiki/Point%20particle,"A point particle, ideal particle or point-like particle (often spelled pointlike particle) is an idealization of particles heavily used in physics. Its defining feature is that it lacks spatial extension; being dimensionless, it does not take up space. A point particle is an appropriate representation of any object whenever its size, shape, and structure are irrelevant in a given context. For example, from far enough away, any finite-size object will look and behave as a point-like object. Point masses and point charges, discussed below, are two common cases. When a point particle has an additive property, such as mass or charge, it is often represented mathematically by a Dirac delta function. In quantum mechanics, the concept of a point particle is complicated by the Heisenberg uncertainty principle, because even an elementary particle, with no internal structure, occupies a nonzero volume. For example, the atomic orbit of an electron in the hydrogen atom occupies a volume of ~. There is nevertheless a distinction between elementary particles such as electrons or quarks, which have no known internal structure, versus composite particles such as protons, which do have internal structure: A proton is made of three quarks. Elementary particles are sometimes called ""point particles"" in reference to their lack of internal structure, but this is in a different sense than discussed above. Point mass Point mass (pointlike mass) is the concept, for example in classical physics, of a physical object (typically matter) that has nonzero mass, and yet explicitly and specifically is (or is being thought of or modeled as) infinitesimal (infinitely small) in its volume or linear dimensions. In the theory of gravity, extended objects can behave as point-like even in their immediate vicinity. For example, spherical objects interacting in 3-dimensional space whose interactions are described by the Newtonian gravitation behave in such a way as if all their matter were concentrate" https://en.wikipedia.org/wiki/Session%20multiplexing,"Session multiplexing in a computer network is a service provided by the transport layer (see OSI Layered Model). It multiplexes several message streams, or sessions onto one logical link and keeps track of which messages belong to which sessions (see session layer). An example of session multiplexing—a single computer with one IP address has several websites open at once." https://en.wikipedia.org/wiki/Connection-oriented%20communication,"In telecommunications and computer networking, connection-oriented communication is a communication protocol where a communication session or a semi-permanent connection is established before any useful data can be transferred. 
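The two acid ratios quoted above are consistent with the stated five percent citric-acid content of lemon juice: delivering the same amount of acid as a 1000:1 sugar-to-citric-acid mix takes roughly twenty times as much juice, hence the 50:1 sugar-to-juice ratio. A two-line check (the 1 kg sugar quantity is arbitrary):

```python
# Check of the quoted ratios: 1000:1 sugar to citric acid, lemon juice ~5 %
# citric acid by weight, therefore ~50:1 sugar to lemon juice.
sugar_g = 1000.0
citric_acid_g = sugar_g / 1000           # 1 g of pure citric acid
lemon_juice_g = citric_acid_g / 0.05     # 20 g of juice carries the same acid
print(lemon_juice_g, sugar_g / lemon_juice_g)   # 20.0 50.0
```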
The established connection ensures that data is delivered in the correct order to the upper communication layer. The alternative is called connectionless communication, such as the datagram mode communication used by Internet Protocol (IP) and User Datagram Protocol, where data may be delivered out of order, since different network packets are routed independently and may be delivered over different paths. Connection-oriented communication may be implemented with a circuit switched connection, or a packet-mode virtual circuit connection. In the latter case, it may use either a transport layer virtual circuit protocol such as the TCP protocol, allowing data to be delivered in order. Although the lower-layer switching is connectionless, or it may be a data link layer or network layer switching mode, where all data packets belonging to the same traffic stream are delivered over the same path, and traffic flows are identified by some connection identifier reducing the overhead of routing decisions on a packet-by-packet basis for the network. Connection-oriented protocol services are often, but not always, reliable network services that provide acknowledgment after successful delivery and automatic repeat request functions in case of missing or corrupted data. Asynchronous Transfer Mode, Frame Relay and MPLS are examples of a connection-oriented, unreliable protocol. SMTP is an example of a connection-oriented protocol in which if a message is not delivered, an error report is sent to the sender which makes SMTP a reliable protocol. Because they can keep track of a conversation, connection-oriented protocols are sometimes described as stateful. Circuit switching Circuit switched communication, for example the public switched telephone network, " https://en.wikipedia.org/wiki/List%20of%20moments%20of%20inertia,"Moment of inertia, denoted by , measures the extent to which an object resists rotational acceleration about a particular axis, it is the rotational analogue to mass (which determines an object's resistance to linear acceleration). The moments of inertia of a mass have units of dimension ML2 ([mass] × [length]2). It should not be confused with the second moment of area, which has units of dimension L4 ([length]4) and is used in beam calculations. The mass moment of inertia is often also known as the rotational inertia, and sometimes as the angular mass. For simple objects with geometric symmetry, one can often determine the moment of inertia in an exact closed-form expression. Typically this occurs when the mass density is constant, but in some cases the density can vary throughout the object as well. In general, it may not be straightforward to symbolically express the moment of inertia of shapes with more complicated mass distributions and lacking symmetry. When calculating moments of inertia, it is useful to remember that it is an additive function and exploit the parallel axis and perpendicular axis theorems. This article mainly considers symmetric mass distributions, with constant density throughout the object, and the axis of rotation is taken to be through the center of mass unless otherwise specified. Moments of inertia Following are scalar moments of inertia. In general, the moment of inertia is a tensor, see below. List of 3D inertia tensors This list of moment of inertia tensors is given for principal axes of each object. 
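The contrast drawn above between connection-oriented and connectionless communication is visible directly in the socket API: a TCP exchange requires connect/accept before any payload moves and delivers data in order, while a UDP datagram is simply sent with no prior handshake. A minimal loopback sketch (ports are chosen by the OS; nothing here is specific to the article):

```python
# TCP: connection-oriented (connect/accept before data, ordered, acknowledged).
# UDP: connectionless (datagram sent with no handshake, no delivery guarantee).
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def echo_once():
    conn, _ = srv.accept()             # connection established before data flows
    conn.sendall(conn.recv(1024))
    conn.close()

threading.Thread(target=echo_once, daemon=True).start()

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", port))       # explicit connection establishment
tcp.sendall(b"ordered, acknowledged")
print(tcp.recv(1024))
tcp.close()
srv.close()

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"fire and forget", ("127.0.0.1", port))   # no handshake beforehand
udp.close()
```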
To obtain the scalar moments of inertia I above, the tensor moment of inertia I is projected along some axis defined by a unit vector n according to the formula: where the dots indicate tensor contraction and the Einstein summation convention is used. In the above table, n would be the unit Cartesian basis ex, ey, ez to obtain Ix, Iy, Iz respectively. See also List of second moments of area Parallel axis theorem Perpendicula" https://en.wikipedia.org/wiki/Array%20factor,"An array is simply a group of objects, and the array factor is a measure of how much a specific characteristic changes because of the grouping. This phenomenon is observed when antennas are grouped together. The radiation (or reception) pattern of the antenna group is considerably different from that of a single antenna. This is due to the constructive and destructive interference properties of radio waves. A well designed antenna array, allows the broadcast power to be directed to where it is needed most. These antenna arrays are typically one dimensional, as seen on collinear dipole arrays, or two dimensional as on military phased arrays. In order to simplify the mathematics, a number of assumptions are typically made: 1. all radiators are equal in every respect 2. all radiators are uniformly spaced 3. the signal phase shift between radiators is constant. The array factor is the complex-valued far-field radiation pattern obtained for an array of isotropic radiators located at coordinates , as determined by: where are the complex-valued excitation coefficients, and is the direction unit vector. The array factor is defined in the transmitting mode, with the time convention . A corresponding expression can be derived for the receiving mode, where a negative sign appears in the exponential factors, as derived in reference." https://en.wikipedia.org/wiki/Ecosystem%20model,"An ecosystem model is an abstract, usually mathematical, representation of an ecological system (ranging in scale from an individual population, to an ecological community, or even an entire biome), which is studied to better understand the real system. Using data gathered from the field, ecological relationships—such as the relation of sunlight and water availability to photosynthetic rate, or that between predator and prey populations—are derived, and these are combined to form ecosystem models. These model systems are then studied in order to make predictions about the dynamics of the real system. Often, the study of inaccuracies in the model (when compared to empirical observations) will lead to the generation of hypotheses about possible ecological relations that are not yet known or well understood. Models enable researchers to simulate large-scale experiments that would be too costly or unethical to perform on a real ecosystem. They also enable the simulation of ecological processes over very long periods of time (i.e. simulating a process that takes centuries in reality, can be done in a matter of minutes in a computer model). Ecosystem models have applications in a wide variety of disciplines, such as natural resource management, ecotoxicology and environmental health, agriculture, and wildlife conservation. Ecological modelling has even been applied to archaeology with varying degrees of success, for example, combining with archaeological models to explain the diversity and mobility of stone tools. 
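The projection described above, obtaining the scalar moment about an axis by contracting the inertia tensor with a unit vector n on both sides (I_n = n · I · n), is a one-liner in numpy. The sketch below uses the standard principal-axis tensor of a solid cuboid of mass m and sides a, b, c about its centre; the numbers are arbitrary.

```python
# Scalar moment of inertia about an axis n: I_n = n . I . n (n a unit vector).
import numpy as np

m, a, b, c = 2.0, 0.3, 0.2, 0.1
I = (m / 12.0) * np.diag([b**2 + c**2, a**2 + c**2, a**2 + b**2])   # solid cuboid

def scalar_moment(I, n):
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)           # normalise so n is a unit vector
    return n @ I @ n

print(scalar_moment(I, [1, 0, 0]))   # equals I_x = m (b^2 + c^2) / 12
print(scalar_moment(I, [1, 1, 1]))   # moment about the main diagonal
```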
Types of models There are two major types of ecological models, which are generally applied to different types of problems: (1) analytic models and (2) simulation / computational models. Analytic models are typically relatively simple (often linear) systems, that can be accurately described by a set of mathematical equations whose behavior is well-known. Simulation models on the other hand, use numerical techniques to solve problems for which analytic solutio" https://en.wikipedia.org/wiki/Cantor%20cube,"In mathematics, a Cantor cube is a topological group of the form {0, 1}A for some index set A. Its algebraic and topological structures are the group direct product and product topology over the cyclic group of order 2 (which is itself given the discrete topology). If A is a countably infinite set, the corresponding Cantor cube is a Cantor space. Cantor cubes are special among compact groups because every compact group is a continuous image of one, although usually not a homomorphic image. (The literature can be unclear, so for safety, assume all spaces are Hausdorff.) Topologically, any Cantor cube is: homogeneous; compact; zero-dimensional; AE(0), an absolute extensor for compact zero-dimensional spaces. (Every map from a closed subset of such a space into a Cantor cube extends to the whole space.) By a theorem of Schepin, these four properties characterize Cantor cubes; any space satisfying the properties is homeomorphic to a Cantor cube. In fact, every AE(0) space is the continuous image of a Cantor cube, and with some effort one can prove that every compact group is AE(0). It follows that every zero-dimensional compact group is homeomorphic to a Cantor cube, and every compact group is a continuous image of a Cantor cube." https://en.wikipedia.org/wiki/Traceability,"Traceability is the capability to trace something. In some cases, it is interpreted as the ability to verify the history, location, or application of an item by means of documented recorded identification. Other common definitions include the capability (and implementation) of keeping track of a given set or type of information to a given degree, or the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable. Traceability is applicable to measurement, supply chain, software development, healthcare and security. Measurement The term measurement traceability or metrological traceability is used to refer to an unbroken chain of comparisons relating an instrument's measurements to a known standard. Calibration to a traceable standard can be used to determine an instrument's bias, precision, and accuracy. It may also be used to show a chain of custody - from current interpretation of evidence to the actual evidence in a legal context, or history of handling of any information. In many countries, national standards for weights and measures are maintained by a National Metrological Institute (NMI) which provides the highest level of standards for the calibration / measurement traceability infrastructure in that country. Examples of government agencies include the National Physical Laboratory, UK (NPL) the National Institute of Standards and Technology (NIST) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, and the Instituto Nazionale di Ricerca Metrologica (INRiM) in Italy. 
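The "simulation / computational" category described above covers models that are stepped forward numerically rather than solved in closed form. As a minimal example of that style only, the sketch below integrates the classic Lotka–Volterra predator–prey equations with a plain Euler step; the equations and parameter values are a familiar stand-in, not a model taken from the article.

```python
# Minimal "simulation model": numerically stepping a simple ecological
# relationship (Lotka-Volterra predator-prey) forward in time.
# Parameter values are arbitrary illustrative choices.
def simulate(prey=10.0, pred=5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4,
             dt=0.01, steps=5000):
    traj = []
    for _ in range(steps):
        dprey = alpha * prey - beta * prey * pred      # growth minus predation
        dpred = delta * prey * pred - gamma * pred     # predation gain minus mortality
        prey += dprey * dt
        pred += dpred * dt
        traj.append((prey, pred))
    return traj

traj = simulate()
print(traj[-1])   # populations after 50 time units; oscillatory in this regime
```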
As defined by NIST, ""Traceability of measurement requires the establishment of an unbroken chain of comparisons to stated references each with a stated uncertainty."" A clock providing is traceable to a time standard such as Coordinated Universal Time or International Atomic Time. The Global Positioning System is a source of traceable time. Supply chain Within a product's supply chain, traceability may be both a regulatory and an eth" https://en.wikipedia.org/wiki/Edinburgh%20BioQuarter,"Edinburgh BioQuarter is a key initiative in the development of Scotland's life sciences industry, which employs more than 39,000 people in over 750 organisations. A community of 8,000 people currently work and study within the boundary of BioQuarter, located on the south side of Edinburgh, Scotland’s capital city, approximately three miles from the city centre. This 160-acre site includes health innovation businesses, the University of Edinburgh Medical School, 900-bed Royal Infirmary of Edinburgh, and new Royal Hospital for Children and Young People and Department of Clinical Neurosciences. The site is also home to many of the University of Edinburgh’s medical research institutes. Partnership and Economic Impact BioQuarter is a partnership with four of Scotland’s most prominent organisations - the City of Edinburgh Council, NHS Lothian, Scottish Enterprise and the University of Edinburgh. Over the past three decades there has been over £600m investment in capital developments. BioQuarter has generated an estimated £2.72 billion in gross value added from its research, clinical and commercial activities, and a further £320 million from its development. History In 1997, the Scottish Government obtained planning permission for land in the Little France area of Edinburgh for a new Royal Infirmary of Edinburgh and it was procured under a Private Finance Initiative contract in 1998. This allowed the Royal Infirmary of Edinburgh and the University of Edinburgh’s Medical School to relocate from their historic sites in Edinburgh city centre. Development commenced immediately and in 2002 NHS Lothian opened the new Royal Infirmary of Edinburgh, a major acute teaching hospital. At the same time the University of Edinburgh completed its first phase of relocation of the College of Medicine and Veterinary Medicine with the move of medical teaching and research to the adjacent Chancellor’s Building. In 2004 Scottish Enterprise, Scotland’s economic development agency, had a" https://en.wikipedia.org/wiki/List%20of%20types%20of%20sets,"Sets can be classified according to the properties they have. 
Relative to set theory Empty set Finite set, Infinite set Countable set, Uncountable set Power set Relative to a topology Closed set Open set Clopen set Fσ set Gδ set Compact set Relatively compact set Regular open set, regular closed set Connected set Perfect set Meagre set Nowhere dense set Relative to a metric Bounded set Totally bounded set Relative to measurability Borel set Baire set Measurable set, Non-measurable set Universally measurable set Relative to a measure Negligible set Null set Haar null set In a linear space Convex set Balanced set, Absolutely convex set Relative to the real/complex numbers Fractal set Ways of defining sets/Relation to descriptive set theory Recursive set Recursively enumerable set Arithmetical set Diophantine set Hyperarithmetical set Analytical set Analytic set, Coanalytic set Suslin set Projective set Inhabited set More general objects still called sets Multiset icarus set See also Basic concepts in set theory Sets Set theory" https://en.wikipedia.org/wiki/Zinc%20in%20biology,"Zinc is an essential trace element for humans and other animals, for plants and for microorganisms. Zinc is required for the function of over 300 enzymes and 1000 transcription factors, and is stored and transferred in metallothioneins. It is the second most abundant trace metal in humans after iron and it is the only metal which appears in all enzyme classes. In proteins, zinc ions are often coordinated to the amino acid side chains of aspartic acid, glutamic acid, cysteine and histidine. The theoretical and computational description of this zinc binding in proteins (as well as that of other transition metals) is difficult. Roughly  grams of zinc are distributed throughout the human body. Most zinc is in the brain, muscle, bones, kidney, and liver, with the highest concentrations in the prostate and parts of the eye. Semen is particularly rich in zinc, a key factor in prostate gland function and reproductive organ growth. Zinc homeostasis of the body is mainly controlled by the intestine. Here, ZIP4 and especially TRPM7 were linked to intestinal zinc uptake essential for postnatal survival. In humans, the biological roles of zinc are ubiquitous. It interacts with ""a wide range of organic ligands"", and has roles in the metabolism of RNA and DNA, signal transduction, and gene expression. It also regulates apoptosis. A review from 2015 indicated that about 10% of human proteins (~3000) bind zinc, in addition to hundreds more that transport and traffic zinc; a similar in silico study in the plant Arabidopsis thaliana found 2367 zinc-related proteins. In the brain, zinc is stored in specific synaptic vesicles by glutamatergic neurons and can modulate neuronal excitability. It plays a key role in synaptic plasticity and so in learning. Zinc homeostasis also plays a critical role in the functional regulation of the central nervous system. Dysregulation of zinc homeostasis in the central nervous system that results in excessive synaptic zinc concentrations is believed" https://en.wikipedia.org/wiki/Disk%20array%20controller,"A disk array controller is a device that manages the physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, thus it is sometimes referred to as RAID controller. It also often provides additional disk cache. Disk array controller is often improperly shortened to disk controller. The two should not be confused as they provide very different functionality. 
Front-end and back-end side A disk array controller provides front-end interfaces and back-end interfaces. The back-end interface communicates with the controlled disks. Hence, its protocol is usually ATA (a.k.a. PATA), SATA, SCSI, FC or SAS. The front-end interface communicates with a computer's host adapter (HBA, Host Bus Adapter) and uses: one of ATA, SATA, SCSI, FC; these are popular protocols used by disks, so by using one of them a controller may transparently emulate a disk for a computer. somewhat less popular dedicated protocols for specific solutions: FICON/ESCON, iSCSI, HyperSCSI, ATA over Ethernet or InfiniBand. A single controller may use different protocols for back-end and for front-end communication. Many enterprise controllers use FC on front-end and SATA on back-end. Enterprise controllers In a modern enterprise architecture disk array controllers (sometimes also called storage processors, or SPs) are parts of physically independent enclosures, such as disk arrays placed in a storage area network (SAN) or network-attached storage (NAS) servers. Those external disk arrays are usually purchased as an integrated subsystem of RAID controllers, disk drives, power supplies, and management software. It is up to controllers to provide advanced functionality (various vendors name these differently): Automatic failover to another controller (transparent to computers transmitting data) Long-running operations performed without downtime Forming a new RAID set Reconstructing degraded RAID set (after a disk failure) Adding a disk to onl" https://en.wikipedia.org/wiki/Phototropism,"In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light. Phototropism is one of the many plant tropisms, or movements, which respond to external stimuli. Growth towards a light source is called positive phototropism, while growth away from light is called negative phototropism. Negative phototropism is not to be confused with skototropism, which is defined as the growth towards darkness, whereas negative phototropism can refer to either the growth away from a light source or towards the darkness. Most plant shoots exhibit positive phototropism, and rearrange their chloroplasts in the leaves to maximize photosynthetic energy and promote growth. Some vine shoot tips exhibit negative phototropism, which allows them to grow towards dark, solid objects and climb them. The combination of phototropism and gravitropism allow plants to grow in the correct direction. Mechanism There are several signaling molecules that help the plant determine where the light source is coming from, and these activate several genes, which change the hormone gradients allowing the plant to grow towards the light. The very tip of the plant is known as the coleoptile, which is necessary in light sensing. The middle portion of the coleoptile is the area where the shoot curvature occurs. The Cholodny–Went hypothesis, developed in the early 20th century, predicts that in the presence of asymmetric light, auxin will move towards the shaded side and promote elongation of the cells on that side to cause the plant to curve towards the light source. 
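The front-end/back-end split described above amounts to protocol translation plus address translation: the controller emulates a disk toward the host on one protocol while addressing the physical drives on another, and maps logical block addresses onto the drives it manages. The toy class below only illustrates that idea with a RAID-0-style stripe over two back-end "disks"; it is not a driver, and all names are hypothetical.

```python
# Toy illustration of a disk array controller: one protocol on the front end,
# another on the back end, physical disks presented as a single logical unit
# via a simple RAID-0-style stripe mapping.
class DiskArrayController:
    def __init__(self, disks, stripe_blocks=8, front_end="FC", back_end="SATA"):
        self.disks = disks                  # back end: list of block arrays
        self.stripe = stripe_blocks
        self.front_end, self.back_end = front_end, back_end

    def read_logical(self, lba):
        """Translate a front-end logical block address to (disk, physical block)."""
        stripe_index, offset = divmod(lba, self.stripe)
        disk = stripe_index % len(self.disks)
        phys = (stripe_index // len(self.disks)) * self.stripe + offset
        return self.disks[disk][phys]

disks = [[f"d{d}b{i}" for i in range(64)] for d in range(2)]
ctrl = DiskArrayController(disks)
print(ctrl.front_end, "->", ctrl.back_end)
print(ctrl.read_logical(0), ctrl.read_logical(8), ctrl.read_logical(16))
# d0b0 d1b0 d0b8  -- consecutive stripes alternate between the two disks
```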
Auxins activate proton pumps, decreasing the pH in the cells on the dark side of the plant. This acidification of the cell" https://en.wikipedia.org/wiki/Packet%20concatenation,"Packet concatenation is a computer networking optimization that coalesces multiple packets under a single header. The use of packet containment reduces the overhead at the physical and link layers. See also Frame aggregation Packet aggregation" https://en.wikipedia.org/wiki/3Com%203c509,"3Com 3c509 is a line of Ethernet IEEE 802.3 network cards for the ISA, EISA, MCA and PCMCIA computer buses. It was designed by 3Com, and put on the market in 1994. Features The 3Com 3c5x9 family of network controllers has various interface combinations of computer bus including ISA, EISA, MCA and PCMCIA. For network connection, 10BASE-2, AUI and 10BASE-T are used. B = On ISA and PCMCIA adapter numbers indicates that these adapters are part of the second generation of the Parallel Tasking EtherLink III technology. The DIP-28 (U1) EPROM for network booting may be 8, 16 or 32 kByte size. This means EPROMs of type 64, 128, 256 kbit (2^10) are compatible, like the 27C256. Boot ROM address is located between 0xC0000 - 0xDE000. Teardown example, the 3c509B-Combo The Etherlink III 3C509B-Combo is registered with the FCC ID DF63C509B. The main components on the card is Y1: crystal oscillator 20 MHz, U50: coaxial transceiver interface DP8392, U4: main controller 3Com 9513S (or 9545S etc.), U6: 70 ns CMOS static RAM, U1: DIP-28 27C256 style EPROM for boot code, U3: 1024 bit 5V CMOS Serial EEPROM (configuration). Label: Etherlink III (C) 1994 3C509B-C ALL RIGHTS RESERVED ASSY 03-0021-001 REV-A FCC ID: DF63C509B Barcode: EA=0020AFDCC34C SN=6AHDCC34C MADE IN U.S.A. R = Resistor C = Capacitor L = Inductance Q = Transistor CR = Transistor FL = Transformer T = Transformer U = Integrated circuit J = Jumper or connector VR F FL70: Pulse transformer bel9509 A 0556-3873-03 * HIPOTTED Y1: 20 MHz crystal 20.000M 652DA U50: P9512BR DP8392CN Coaxial Transceiver Interface T50: Pulse transformer, pinout: 2x8 VALOR ST7033 x00: Pulse transformer VALOR PT0018 CHINA M 9449 C U4: Plastic package 33x33 pins Parallel Tasking TM 3Com 40-0130-002 9513S 22050553 AT&T 40-01302 Another chip with the same function: 40-0130-003 9545S 48324401 AT&T 40-01303 U6: 8192 x 8-bit 70 ns CMOS static RAM HY 6264A LJ-70 9509B KOR" https://en.wikipedia.org/wiki/3D%20Content%20Retrieval,"A 3D Content Retrieval system is a computer system for browsing, searching and retrieving three dimensional digital contents (e.g.: Computer-aided design, molecular biology models, and cultural heritage 3D scenes, etc.) from a large database of digital images. The most original way of doing 3D content retrieval uses methods to add description text to 3D content files such as the content file name, link text, and the web page title so that related 3D content can be found through text retrieval. Because of the inefficiency of manually annotating 3D files, researchers have investigated ways to automate the annotation process and provide a unified standard to create text descriptions for 3D contents. Moreover, the increase in 3D content has demanded and inspired more advanced ways to retrieve 3D information. Thus, shape matching methods for 3D content retrieval have become popular. Shape matching retrieval is based on techniques that compare and contrast similarities between 3D models. 
3D retrieval methods Derive a high level description (e.g.: a skeleton) and then find matching results This method describes 3D models by using a skeleton. The skeleton encodes the geometric and topological information in the form of a skeletal graph and uses graph matching techniques to match the skeletons and compare them. However, this method requires a 2-manifold input model, and it is very sensitive to noise and details. Many of the existing 3D models are created for visualization purposes, while missing the input quality standard for the skeleton method. The skeleton 3D retrieval method needs more time and effort before it can be used widely. Compute a feature vector based on statistics Unlike Skeleton modeling, which requires a high quality standard for the input source, statistical methods do not put restriction on the validity of an input source. Shape histograms, feature vectors composed of global geo-metic properties such as circularity and eccentricity, and feature vector" https://en.wikipedia.org/wiki/Synchronous%20Data%20Flow,"Synchronous Data Flow (SDF) is a restriction on Kahn process networks where the number of tokens read and written by each process is known ahead of time. In some cases, processes can be scheduled such that channels have bounded FIFOs. Limitations SDF does not account for asynchronous processes as their token read/write rates will vary. Practically, one can divide the network into synchronous sub-networks connected by asynchronous links. Alternatively a runtime supervisor can enforce fairness and other desired properties. Applications SDF is useful for modeling digital signal processing (DSP) routines. Models can be compiled to target parallel hardware like FPGAs, processors with DSP instruction sets like Qualcomm's Hexagon, and other systems. See also Kahn process networks Petri net Dataflow architecture" https://en.wikipedia.org/wiki/Digital%20down%20converter,"In digital signal processing, a digital down-converter (DDC) converts a digitized, band-limited signal to a lower frequency signal at a lower sampling rate in order to simplify the subsequent radio stages. The process can preserve all the information in the frequency band of interest of the original signal. The input and output signals can be real or complex samples. Often the DDC converts from the raw radio frequency or intermediate frequency down to a complex baseband signal. Architecture A DDC consists of three subcomponents: a direct digital synthesizer (DDS), a low-pass filter (LPF), and a downsampler (which may be integrated into the low-pass filter). The DDS generates a complex sinusoid at the intermediate frequency (IF). Multiplication of the intermediate frequency with the input signal creates images centered at the sum and difference frequency (which follows from the frequency shifting properties of the Fourier transform). The lowpass filters pass the difference (i.e. baseband) frequency while rejecting the sum frequency image, resulting in a complex baseband representation of the original signal. Assuming judicious choice of IF and LPF bandwidth, the complex baseband signal is mathematically equivalent to the original signal. In its new form, it can readily be downsampled and is more convenient to many DSP algorithms. Any suitable low-pass filter can be used including FIR, IIR and CIC filters. The most common choice is a FIR filter for low amounts of decimation (less than ten) or a CIC filter followed by a FIR filter for larger downsampling ratios. 
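The three DDC stages described above (DDS-generated complex sinusoid, low-pass filter, downsampler) can be sketched in a few lines of numpy/scipy. The filter design, frequencies, and decimation factor below are arbitrary illustrative choices, with a plain FIR filter standing in for the FIR/CIC options mentioned above.

```python
# Digital down-converter sketch: mix with a complex local oscillator, low-pass
# filter to keep the difference-frequency image, then downsample.
import numpy as np
from scipy.signal import firwin, lfilter

fs, f_if, decim = 1_000_000, 200_000, 10            # sample rate, IF, decimation
t = np.arange(20_000) / fs
x = np.cos(2 * np.pi * (f_if + 5_000) * t)           # real input 5 kHz above the IF

lo = np.exp(-2j * np.pi * f_if * t)                  # "DDS": complex local oscillator
mixed = x * lo                                       # images at difference and sum frequencies

taps = firwin(101, cutoff=40_000, fs=fs)             # LPF keeps the baseband image
baseband = lfilter(taps, 1.0, mixed)

y = baseband[::decim]                                # downsample to fs/decim
inst_f = np.diff(np.unwrap(np.angle(y))) * (fs / decim) / (2 * np.pi)
print(round(float(np.median(inst_f)), 1))            # ~5000.0 Hz, the offset from the IF
```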
Variations on the DDC Several variations on the DDC are useful, including many that input a feedback signal into the DDS. These include: Decision directed carrier recovery phase locked loops in which the I and Q are compared to the nearest ideal constellation point of a PSK signal, and the resulting error signal is filtered and fed back into the DDS A Costas loop in which the I and Q are multiplied" https://en.wikipedia.org/wiki/Undefined%20%28mathematics%29,"In mathematics, the term undefined is often used to refer to an expression which is not assigned an interpretation or a value (such as an indeterminate form, which has the possibility of assuming different values). The term can take on several different meanings depending on the context. For example: In various branches of mathematics, certain concepts are introduced as primitive notions (e.g., the terms ""point"", ""line"" and ""plane"" in geometry). As these terms are not defined in terms of other concepts, they may be referred to as ""undefined terms"". A function is said to be ""undefined"" at points outside of its domainfor example, the real-valued function is undefined for negative  (i.e., it assigns no value to negative arguments). In algebra, some arithmetic operations may not assign a meaning to certain values of its operands (e.g., division by zero). In which case, the expressions involving such operands are termed ""undefined"". In square roots, square roots of any negative number are undefined because you can’t multiply 2 of the same positive nor negative number to get a negative number, like √-4, √-9, √-16 etc. (ex: 6x6=36 and -6x-6=36). Undefined terms In ancient times, geometers attempted to define every term. For example, Euclid defined a point as ""that which has no part"". In modern times, mathematicians recognize that attempting to define every word inevitably leads to circular definitions, and therefore leave some terms (such as ""point"") undefined (see primitive notion for more). This more abstract approach allows for fruitful generalizations. In topology, a topological space may be defined as a set of points endowed with certain properties, but in the general setting, the nature of these ""points"" is left entirely undefined. Likewise, in category theory, a category consists of ""objects"" and ""arrows"", which are again primitive, undefined terms. This allows such abstract mathematical theories to be applied to very diverse concrete situations. In arithme" https://en.wikipedia.org/wiki/The%20Aleph%20%28short%20story%29,"""The Aleph"" (original Spanish title: ""El Aleph"") is a short story by the Argentine writer and poet Jorge Luis Borges. First published in September 1945, it was reprinted in the short story collection, The Aleph and Other Stories, in 1949, and revised by the author in 1974. Plot summary In Borges' story, the Aleph is a point in space that contains all other points. Anyone who gazes into it can see everything in the universe from every angle simultaneously, without distortion, overlapping, or confusion. The story traces the theme of infinity found in several of Borges' other works, such as ""The Book of Sand"". Borges has stated that the inspiration for this story came from H.G. Wells's short story ""The Door in the Wall"". As in many of Borges' short stories, the protagonist is a fictionalized version of the author. At the beginning of the story, he is mourning the recent death of Beatriz Viterbo, a woman he loved, and he resolves to stop by the house of her family to pay his respects. 
Over time, he comes to know her first cousin, Carlos Argentino Daneri, a mediocre poet with a vastly exaggerated view of his own talent who has made it his lifelong quest to write an epic poem that describes every single location on the planet in excruciatingly fine detail. Later in the story, a business attempts to tear down Daneri's house in the course of its expansion. Daneri becomes enraged, explaining to the narrator that he must keep the house in order to finish his poem, because the cellar contains an Aleph which he is using to write the poem. Though by now he believes Daneri to be insane, the narrator proposes to come to the house and see the Aleph for himself. Left alone in the darkness of the cellar, the narrator begins to fear that Daneri is conspiring to kill him, and then he sees the Aleph for himself: Though staggered by the experience of seeing the Aleph, the narrator pretends to have seen nothing in order to get revenge on Daneri, whom he dislikes, by giving Daneri " https://en.wikipedia.org/wiki/Hilbert%20spectral%20analysis,"Hilbert spectral analysis is a signal analysis method applying the Hilbert transform to compute the instantaneous frequency of signals according to After performing the Hilbert transform on each signal, we can express the data in the following form: This equation gives both the amplitude and the frequency of each component as functions of time. It also enables us to represent the amplitude and the instantaneous frequency as functions of time in a three-dimensional plot, in which the amplitude can be contoured on the frequency-time plane. This frequency-time distribution of the amplitude is designated as the Hilbert amplitude spectrum, or simply Hilbert spectrum. Hilbert spectral analysis method is an important part of Hilbert–Huang transform." https://en.wikipedia.org/wiki/Avalon%20explosion,"The Avalon explosion, named from the Precambrian fauna discovered at the Avalon Peninsula in Newfoundland, is a proposed evolutionary radiation of prehistoric animals about 575 million years ago in the Ediacaran Period, with the Avalon explosion being one of three eras grouped in this time. This event is believed to have occurred some 33 million years earlier than the Cambrian explosion. Scientists are still unsure of the full extent behind the development of the Avalon explosion. The Avalon explosion resulted in a rapid increase in organism diversity. Many of the animals from the Avalon explosion are found living in deep marine environments. The first stages of the Avalon explosion were observed through comparatively minimal species. History Charles Darwin predicted a time of ecological growth before the Cambrian Period, but there was no evidence to support it until the Avalon explosion was proposed in 2008 by Virginia Tech paleontologists after analysis of the morphological space change in several Ediacaran assemblages. The discovery suggests that the early evolution of animals may have involved more than one explosive event. The original analysis has been the subject of dispute in the literature. Evidence Trace fossils of these Avalon organisms have been found worldwide, with many found in Newfoundland, in Canada and the Charnwood Forest in England, representing the earliest known complex multicellular organisms. The Avalon explosion theoretically produced the Ediacaran biota. The biota largely disappeared contemporaneously with the rapid increase in biodiversity known as the Cambrian explosion. 
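The Hilbert spectral analysis passage above describes forming the analytic signal and reading off amplitude and instantaneous frequency as functions of time. A minimal sketch with scipy's Hilbert transform; the test signal (a 50 Hz tone with slowly varying amplitude) is arbitrary.

```python
# Analytic signal via the Hilbert transform, then instantaneous amplitude and
# frequency as functions of time.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 1 * t)) * np.cos(2 * np.pi * 50 * t)

analytic = hilbert(x)                                  # x + j * H{x}
amplitude = np.abs(analytic)                           # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)          # instantaneous frequency in Hz

print(round(float(np.median(inst_freq)), 1))           # ~50.0
print(round(float(amplitude.max()), 2))                # ~1.5
```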
At this time, all living animal groups were present in the Cambrian oceans. The Avalon explosion appears similar to the Cambrian explosion in the rapid increase in diversity of morphologies in a relatively small-time frame, followed by diversification within the established body plans, a pattern similar to that observed in other evolutionary events. Plants and animals" https://en.wikipedia.org/wiki/Stanton%20number,"The Stanton number, St, is a dimensionless number that measures the ratio of heat transferred into a fluid to the thermal capacity of fluid. The Stanton number is named after Thomas Stanton (engineer) (1865–1931). It is used to characterize heat transfer in forced convection flows. Formula where h = convection heat transfer coefficient ρ = density of the fluid cp = specific heat of the fluid u = velocity of the fluid It can also be represented in terms of the fluid's Nusselt, Reynolds, and Prandtl numbers: where Nu is the Nusselt number; Re is the Reynolds number; Pr is the Prandtl number. The Stanton number arises in the consideration of the geometric similarity of the momentum boundary layer and the thermal boundary layer, where it can be used to express a relationship between the shear force at the wall (due to viscous drag) and the total heat transfer at the wall (due to thermal diffusivity). Mass transfer Using the heat-mass transfer analogy, a mass transfer St equivalent can be found using the Sherwood number and Schmidt number in place of the Nusselt number and Prandtl number, respectively. where is the mass Stanton number; is the Sherwood number based on length; is the Reynolds number based on length; is the Schmidt number; is defined based on a concentration difference (kg s−1 m−2); is the velocity of the fluid Boundary layer flow The Stanton number is a useful measure of the rate of change of the thermal energy deficit (or excess) in the boundary layer due to heat transfer from a planar surface. If the enthalpy thickness is defined as: Then the Stanton number is equivalent to for boundary layer flow over a flat plate with a constant surface temperature and properties. Correlations using Reynolds-Colburn analogy Using the Reynolds-Colburn analogy for turbulent flow with a thermal log and viscous sub layer model, the following correlation for turbulent heat transfer for is applicable where See also Strouhal number, an unrelated nu" https://en.wikipedia.org/wiki/Floppy-disk%20controller,"A floppy-disk controller (FDC) has evolved from a discrete set of components on one or more circuit boards to a special-purpose integrated circuit (IC or ""chip"") or a component thereof. An FDC directs and controls reading from and writing to a computer's floppy disk drive (FDD). The FDC is responsible for reading data presented from the host computer and converting it to the drive's on-disk format using one of a number of encoding schemes, like FM encoding (single density) or MFM encoding (double density), and reading those formats and returning it to its original binary values. Depending on the platform, data transfers between the controller and host computer would be controlled by the computer's own microprocessor, or an inexpensive dedicated microprocessor like the MOS 6507 or Zilog Z80. Early controllers required additional circuitry to perform specific tasks like providing clock signals and setting various options. 
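The two forms of the Stanton number given above, St = h / (ρ u cp) and St = Nu / (Re Pr), are algebraically identical once Nu, Re, and Pr share the same length scale, which a quick numerical check confirms. The property values below are arbitrary but roughly air-like.

```python
# St = h / (rho * u * cp)  versus  St = Nu / (Re * Pr), same length scale L.
rho, u, cp = 1.2, 10.0, 1005.0        # kg/m^3, m/s, J/(kg K)
k, mu, L = 0.026, 1.8e-5, 0.5         # W/(m K), Pa s, m
h = 25.0                              # W/(m^2 K), convection coefficient

St_direct = h / (rho * u * cp)

Nu = h * L / k
Re = rho * u * L / mu
Pr = mu * cp / k
St_dimensionless = Nu / (Re * Pr)

print(St_direct, St_dimensionless)    # both ~2.07e-3, as expected
```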
Later designs included more of this functionality on the controller and reduced the complexity of the external circuitry; single-chip solutions were common by the later 1980s. By the 1990s, the floppy disk was increasingly giving way to hard drives, which required similar controllers. In these systems, the controller also often combined a microcontroller to handle data transfer over standardized connectors like SCSI and IDE that could be used with any computer. In more modern systems, the FDC, if present at all, is typically part of the many functions provided by a single super I/O chip. History The first floppy disk drive controller (FDC) like the first floppy disk drive (the IBM 23FD) shipped in 1971 as a component in the IBM 2385 Storage Control Unit for the IBM 2305 fixed head disk drive, and of the System 370 Models 155 and 165. The IBM 3830 Storage Control Unit, a contemporaneous and quite similar controller, uses its internal processor to control a 23FD. The resultant FDC is a simple implementation in IBMs’ MST hybrid circuits on a few pr" https://en.wikipedia.org/wiki/Pathovar,"A pathovar is a bacterial strain or set of strains with the same or similar characteristics, that is differentiated at infrasubspecific level from other strains of the same species or subspecies on the basis of distinctive pathogenicity to one or more plant hosts. Pathovars are named as a ternary or quaternary addition to the species binomial name, for example the bacterium that causes citrus canker Xanthomonas axonopodis, has several pathovars with different host ranges, X. axonopodis pv. citri is one of them; the abbreviation 'pv.' means pathovar. The type strains of pathovars are pathotypes, which are distinguished from the types (holotype, neotype, etc.) of the species to which the pathovar belongs. See also Infraspecific names in botany Phytopathology Trinomen, infraspecific names in zoology (subspecies only)" https://en.wikipedia.org/wiki/Laser%20capture%20microdissection,"Laser capture microdissection (LCM), also called microdissection, laser microdissection (LMD), or laser-assisted microdissection (LMD or LAM), is a method for isolating specific cells of interest from microscopic regions of tissue/cells/organisms (dissection on a microscopic scale with the help of a laser). Principle Laser-capture microdissection (LCM) is a method to procure subpopulations of tissue cells under direct microscopic visualization. LCM technology can harvest the cells of interest directly or can isolate specific cells by cutting away unwanted cells to give histologically pure enriched cell populations. A variety of downstream applications exist: DNA genotyping and loss of heterozygosity (LOH) analysis, RNA transcript profiling, cDNA library generation, proteomics discovery and signal-pathway profiling. The total time required to carry out this protocol is typically 1–1.5 h. Extraction A laser is coupled into a microscope and focuses onto the tissue on the slide. By movement of the laser by optics or the stage the focus follows a trajectory which is predefined by the user. This trajectory, also called element, is then cut out and separated from the adjacent tissue. After the cutting process, an extraction process has to follow if an extraction process is desired. More recent technologies utilize non-contact microdissection. There are several ways to extract tissue from a microscope slide with a histopathology sample on it. Press a sticky surface onto the sample and tear out. 
This extracts the desired region, but can also remove particles or unwanted tissue on the surface, because the surface is not selective. Melt a plastic membrane onto the sample and tear out. The heat is introduced, for example, by a red or infrared (IR) laser onto a membrane stained with an absorbing dye. As this adheres the desired sample onto the membrane, as with any membrane that is put close to the histopathology sample surface, there might be some debris extracted. Another " https://en.wikipedia.org/wiki/Institute%20of%20Electronics%2C%20Information%20and%20Communication%20Engineers,"The is a Japanese institute specializing in the areas of electronic, information and communication engineering and associated fields. Its headquarters are located in Tokyo, Japan. It is a membership organization with the purpose of advancing the field of electronics, information and communications and support activities of its members. History The earliest predecessor to the organization was formed in May 1911 as the Second Study Group of the Second Department of the Japanese Ministry of Communications Electric Laboratory. In March 1914 the Second Study Group was renamed the Study Group on Telegraph and Telephone. As the adoption of the telegraph and telephone quickly mounted, there was increased demand for research and development of these technologies, which prompted the need to create a dedicated institute for engineers working in this field. Thus the Institute of Telegraph and Telephone Engineers of Japan was established in May 1917. Soon after its formation the institute began to publish journals and host paper presentations showcasing the latest developments in the field. As the institute's scope of research broadened to accommodate new technical developments, it was rebranded as the Institute of Electrical Communication Engineers of Japan in January 1937, and then once again as the Institute of Electronics and Communication Engineers of Japan in May 1967. Finally, in January 1987, the institute renamed itself to the Institute of Electronics, Information and Communication Engineers to recognize the increasing research being conducted in computer engineering and information technology. Organization The institution is organized into five societies: electronics society communications society information and system society engineering sciences society human communication engineering society Each society has its own president and technical committees. Volunteers helped run various activities within the society, such as publications and conferences. Mem" https://en.wikipedia.org/wiki/Reagent,"In chemistry, a reagent ( ) or analytical reagent is a substance or compound added to a system to cause a chemical reaction, or test if one occurs. The terms reactant and reagent are often used interchangeably, but reactant specifies a substance consumed in the course of a chemical reaction. Solvents, though involved in the reaction mechanism, are usually not called reactants. Similarly, catalysts are not consumed by the reaction, so they are not reactants. In biochemistry, especially in connection with enzyme-catalyzed reactions, the reactants are commonly called substrates. Definitions Organic chemistry In organic chemistry, the term ""reagent"" denotes a chemical ingredient (a compound or mixture, typically of inorganic or small organic molecules) introduced to cause the desired transformation of an organic substance. Examples include the Collins reagent, Fenton's reagent, and Grignard reagents. 
Analytical chemistry In analytical chemistry, a reagent is a compound or mixture used to detect the presence or absence of another substance, e.g. by a color change, or to measure the concentration of a substance, e.g. by colorimetry. Examples include Fehling's reagent, Millon's reagent, and Tollens' reagent. Commercial or laboratory preparations In commercial or laboratory preparations, reagent-grade designates chemical substances meeting standards of purity that ensure the scientific precision and reliability of chemical analysis, chemical reactions or physical testing. Purity standards for reagents are set by organizations such as ASTM International or the American Chemical Society. For instance, reagent-quality water must have very low levels of impurities such as sodium and chloride ions, silica, and bacteria, as well as a very high electrical resistivity. Laboratory products which are less pure, but still useful and economical for undemanding work, may be designated as technical, practical, or crude grade to distinguish them from reagent versions. Biology In t" https://en.wikipedia.org/wiki/Observability%20%28software%29,"In distributed systems, observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components. To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to site reliability engineering, as it is the first step in triaging a service outage. One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue. Etymology, terminology and definition The term is borrowed from control theory, where the ""observability"" of a system measures how well its state can be determined from its outputs. Similarly, software observability measures how well a system's state can be understood from the obtained telemetry (metrics, logs, traces, profiling). The definition of observability varies by vendor: The term is frequently referred to as its numeronym O11y (where 11 stands for the number of letters between the first letter and the last letter of the word). This is similar to other computer science abbreviations such as i18n and L10n and k8s. Observability vs. monitoring Observability and monitoring are sometimes used interchangeably. As tooling, commercial offerings and practices evolved in complexity, ""monitoring"" was re-branded as observability in order to differentiate new tools from the old. The terms are commonly contrasted in that systems are monitored using predefined sets of telemetry, and monitored systems may be observable. Majors et al. suggest that engineering teams that only have monitoring tools end up relying on expert foreknowledge (seniority), whereas teams that have observability tools rely on exploratory analysis (curiosity). Telemetry types Observability relies on three main types of telemetry data: metrics, logs and traces. Those are often referred to as ""pillars of observability"". Metrics A metric is a point in tim" https://en.wikipedia.org/wiki/Chirp%20compression,"The chirp pulse compression process transforms a long duration frequency-coded pulse into a narrow pulse of greatly increased amplitude. It is a technique used in radar and sonar systems because it is a method whereby a narrow pulse with high peak power can be derived from a long duration pulse with low peak power. 
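To make the three telemetry types named in the observability entry above concrete, here is a standard-library-only sketch of a structured log line, a counter metric, and a trace-like timed span. It deliberately avoids any vendor SDK; the names count, log_event and span are invented for this example.

```python
# Stdlib-only sketch of the three "pillars" discussed above: metrics, logs, traces.
# Real systems would emit these through a telemetry SDK; this only shows the idea.
import json, logging, time, uuid
from contextlib import contextmanager

logging.basicConfig(level=logging.INFO, format="%(message)s")
metrics = {}                                   # metric name -> running counter

def count(name, value=1):                      # metric
    metrics[name] = metrics.get(name, 0) + value

def log_event(**fields):                       # structured log
    logging.info(json.dumps(fields))

@contextmanager
def span(name, trace_id=None):                 # trace span with a duration
    trace_id = trace_id or uuid.uuid4().hex
    start = time.time()
    try:
        yield trace_id
    finally:
        log_event(telemetry="trace", span=name, trace_id=trace_id,
                  duration_ms=round((time.time() - start) * 1000, 2))

with span("handle_request") as tid:
    count("requests_total")
    log_event(telemetry="log", msg="request handled", trace_id=tid)
print(metrics)
```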
Furthermore, the process offers good range resolution because the half-power beam width of the compressed pulse is consistent with the system bandwidth. The basics of the method for radar applications were developed in the late 1940s and early 1950s, but it was not until 1960, following declassification of the subject matter, that a detailed article on the topic appeared in the public domain. Thereafter, the number of published articles grew quickly, as demonstrated by the comprehensive selection of papers to be found in a compilation by Barton. Briefly, the basic pulse compression properties can be related as follows. For a chirp waveform that sweeps over a frequency range F1 to F2 in a time period T, the nominal bandwidth of the pulse is B, where B = F2 – F1, and the pulse has a time-bandwidth product of T×B. Following pulse compression, a narrow pulse of duration τ is obtained, where τ ≈ 1/B, together with a peak voltage amplification of . The chirp compression process – outline In order to compress a chirp pulse of duration T seconds, which sweeps linearly in frequency from F1 Hz to F2 Hz, a device with the characteristics of a dispersive delay line is required. This provides most delay for the frequency F1, the first to be generated, but with a delay which reduces linearly with frequency, to be T seconds less at the end frequency F2. Such a delay characteristic ensures that all frequency components of the chirp pass through the device, to arrive at the detector at the same time instant and so augment one another, to produce a narrow high amplitude pulse, as shown in the figure: An expression describing the required delay characteristic is This has " https://en.wikipedia.org/wiki/IEC%2061162,"IEC 61162 is a collection of IEC standards for ""Digital interfaces for navigational equipment within a ship"". The 61162 standards are developed in Working Group 6 (WG6) of Technical Committee 80 (TC80) of the IEC. Sections of IEC 61162 Standard IEC 61162 is divided into the following parts: Part 1: Single talker and multiple listeners (Also known as NMEA 0183) Part 2: Single talker and multiple listeners, high-speed transmission Part 3: Serial data instrument network (Also known as NMEA 2000) Part 450: Multiple talkers and multiple listeners–Ethernet interconnection (Also known as Lightweight Ethernet) Part 460: Multiple talkers and multiple listeners - Ethernet interconnection - Safety and security The 61162 standards all concern the transport of NMEA sentences, but the IEC does not define any of these. This is left to the NMEA Organization. IEC 61162-1 Single talker and multiple listeners. IEC 61162-2 Single talker and multiple listeners, high-speed transmission. IEC 61162-3 Serial data instrument network, multiple talker-multiple listener, prioritized data. IEC 61162-450 Multiple talkers and multiple listeners. This subgroup of TC80/WG6 has specified the use of Ethernet for shipboard navigational networks. The specification describes the transport of NMEA sentences as defined in 61162-1 over IPv4. Due to its low protocol complexity it has been nicknamed Lightweight Ethernet, or LWE for short. The historical background and justification for LWE was presented at the ISIS2011 symposium. An overview article of LWE was given in the December 2010 issue of ""Digital Ship"". The first edition of the standard was published in June 2011. The second edition is in progress (as of May 2016). 
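A minimal numerical sketch of the chirp compression described above: a linear FM pulse is compressed by a matched filter, used here as a discrete-time stand-in for the dispersive delay line, and the resulting main lobe comes out roughly 1/B wide, i.e. about T×B times narrower than the transmitted pulse. The sample rate, T and B below are illustrative values, not taken from the text.

```python
# Sketch of chirp pulse compression with a matched filter.
import numpy as np

fs = 100e3                      # sample rate, Hz
T  = 10e-3                      # pulse duration, s
B  = 10e3                       # swept bandwidth, Hz (time-bandwidth product T*B = 100)
t  = np.arange(0, T, 1 / fs)
k  = B / T                      # chirp rate, Hz/s
x  = np.exp(1j * np.pi * k * t**2)             # unit-amplitude linear FM pulse

# Matched filtering = correlation with the (conjugated) transmitted pulse.
y = np.correlate(x, x, mode="full") / len(x)   # normalised so the peak is 1

# The compressed main lobe should be roughly 1/B wide (about 0.1 ms here),
# i.e. around T*B = 100 times narrower than the original 10 ms pulse.
mag = np.abs(y)
above = mag > mag.max() / np.sqrt(2)           # -3 dB points
width = above.sum() / fs
print(f"uncompressed T = {T*1e3:.1f} ms, compressed width ~ {width*1e3:.3f} ms")
```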
IEC 61162-450/460 IEC 61162-460:2015(E) is an add-on to the IEC 61162-450 standard where higher safety and security standards are needed, e.g. due to higher exposure to external threats or to improve network integrity. This standard provides requirements an" https://en.wikipedia.org/wiki/Temperature-sensitive%20mutant,"Temperature-sensitive mutants are variants of genes that allow normal function of the organism at low temperatures, but altered function at higher temperatures. Cold sensitive mutants are variants of genes that allow normal function of the organism at higher temperatures, but altered function at low temperatures. Mechanism Most temperature-sensitive mutations affect proteins, and cause loss of protein function at the non-permissive temperature. The permissive temperature is one at which the protein typically can fold properly, or remain properly folded. At higher temperatures, the protein is unstable and ceases to function properly. These mutations are usually recessive in diploid organisms. Temperature sensitive mutants arrange a reversible mechanism and are able to reduce particular gene products at varying stages of growth and are easily done by changing the temperature of growth. Permissive temperature The permissive temperature is the temperature at which a temperature-sensitive mutant gene product takes on a normal, functional phenotype. When a temperature-sensitive mutant is grown in a permissive condition, the mutated gene product behaves normally (meaning that the phenotype is not observed), even if there is a mutant allele present. This results in the survival of the cell or organism, as if it were a wild type strain. In contrast, the nonpermissive temperature or restrictive temperature is the temperature at which the mutant phenotype is observed. Temperature sensitive mutations are usually missense mutations, which then will harbor the function of a specified necessary gene at the standard, permissive, low temperature. It will alternatively lack the function at a rather high, non-permissive, temperature and display a hypomorphic (partial loss of gene function) and a middle, semi-permissive, temperature. Use in research Temperature-sensitive mutants are useful in biological research. They allow the study of essential processes required for the surviv" https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl%20transform,"In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture. Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization. It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. 
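The IEC 61162-1 / NMEA 0183 sentences mentioned above are short ASCII frames of the form $payload*hh, where hh is the XOR of the payload characters written as two hex digits. The sketch below builds and validates such a frame; the GPGLL payload values are made up for illustration and are not taken from the standard text here.

```python
# Hedged sketch of framing and validating an NMEA-0183-style (IEC 61162-1) sentence.

def nmea_checksum(payload: str) -> str:
    """XOR of all characters between '$' and '*', as two uppercase hex digits."""
    cs = 0
    for ch in payload:
        cs ^= ord(ch)
    return f"{cs:02X}"

def frame(payload: str) -> str:
    return f"${payload}*{nmea_checksum(payload)}"

def is_valid(sentence: str) -> bool:
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    payload, _, given = sentence[1:].partition("*")
    return nmea_checksum(payload) == given.strip().upper()

sentence = frame("GPGLL,4916.45,N,12311.12,W,225444,A")   # illustrative position report
print(sentence)
print(is_valid(sentence))                                  # True
print(is_valid(sentence.replace("4916", "4926", 1)))       # corrupted payload -> False
```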
(Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.) Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function. In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect ""classical"" with ""quantum"" quantities. For example, the phase-space function may depend explicitly on Planck's constant ħ, as it does in some familiar cases involving angular momentum. This invertible representation change then all" https://en.wikipedia.org/wiki/Migration%20%28virtualization%29,"In the context of virtualization, where a guest simulation of an entire computer is actually merely a software virtual machine (VM) running on a host computer under a hypervisor, migration (also known as teleportation, also known as live migration) is the process by which a running virtual machine is moved from one physical host to another, with little or no disruption in service. Subjective effects Ideally, the process is completely transparent, resulting in no disruption of service (or downtime). In practice, there is always some minor pause in availability, though it may be low enough that only hard real-time systems are affected. Virtualization is far more frequently used with network services and user applications, and these can generally tolerate the brief delays which may be involved. The perceived impact, if any, is similar to a longer-than-usual kernel delay. Objective effects The actual process is heavily dependent on the particular virtualization package in use, but in general, the process is as follows: Regular snapshots of the VM (its simulated hard disk storage, its memory, and its virtual peripherals) are taken in the background by the hypervisor, or by a set of administrative scripts. Each new snapshot adds a differential overlay file to the top of a stack that, as a whole, fully describes the machine. Only the topmost overlay can be written to. Since the older overlays are read-only, they are safe to copy to another machine—the backup host. This is done at regular intervals, and each overlay need only be copied once. When a migration operation is requested, the virtual machine is paused, and its current state is saved to disk. These new, final overlay files are transferred to the backup host. Since this new current state consists only of changes made since the last backup synchronization, for many applications there is very little to transfer, and this happens very quickly. The hypervisor on the new host resumes the guest virtual machine. Id" https://en.wikipedia.org/wiki/Undervoltage-lockout,"The undervoltage-lockout (UVLO) is an electronic circuit used to turn off the power of an electronic device in the event of the voltage dropping below the operational value that could cause unpredictable system behavior. For instance, in battery powered embedded devices, UVLOs can be used to monitor the battery voltage and turn off the embedded device's circuit if the battery voltage drops below a specific threshold, thus protecting the associated equipment. 
Some variants may also have unique values for power-up (positive-going) and power-down (negative-going) thresholds. Usages Typical usages include: Electrical ballast circuits to switch them off in the event of voltage falling below the operational value. Switched-mode power supplies. When the system supply output impedance is higher than the input impedance of the regulator, an UVLO with a higher hysteresis should be used to prevent oscillations before settling down to a steady state and possible malfunctions of the regulator. See also No-volt release" https://en.wikipedia.org/wiki/Cytometry,"Cytometry is the measurement of number and characteristics of cells. Variables that can be measured by cytometric methods include cell size, cell count, cell morphology (shape and structure), cell cycle phase, DNA content, and the existence or absence of specific proteins on the cell surface or in the cytoplasm. Cytometry is used to characterize and count blood cells in common blood tests such as the complete blood count. In a similar fashion, cytometry is also used in cell biology research and in medical diagnostics to characterize cells in a wide range of applications associated with diseases such as cancer and AIDS. Cytometric devices Image cytometers Image cytometry is the oldest form of cytometry. Image cytometers operate by statically imaging a large number of cells using optical microscopy. Prior to analysis, cells are commonly stained to enhance contrast or to detect specific molecules by labeling these with fluorochromes. Traditionally, cells are viewed within a hemocytometer to aid manual counting. Since the introduction of the digital camera, in the mid-1990s, the automation level of image cytometers has steadily increased. This has led to the commercial availability of automated image cytometers, ranging from simple cell counters to sophisticated high-content screening systems. Flow cytometers Due to the early difficulties of automating microscopy, the flow cytometer has since the mid-1950s been the dominating cytometric device. Flow cytometers operate by aligning single cells using flow techniques. The cells are characterized optically or by the use of an electrical impedance method called the Coulter principle. To detect specific molecules when optically characterized, cells are in most cases stained with the same type of fluorochromes that are used by image cytometers. Flow cytometers generally provide less data than image cytometers, but have a significantly higher throughput. Cell sorters Cell sorters are flow cytometers capable of sorting ce" https://en.wikipedia.org/wiki/Hardware%20security,"Hardware security is a discipline originated from the cryptographic engineering and involves hardware design, access control, secure multi-party computation, secure key storage, ensuring code authenticity, measures to ensure that the supply chain that built the product is secure among other things. A hardware security module (HSM) is a physical computing device that safeguards and manages digital keys for strong authentication and provides cryptoprocessing. These modules traditionally come in the form of a plug-in card or an external device that attaches directly to a computer or network server. Some providers in this discipline consider that the key difference between hardware security and software security is that hardware security is implemented using ""non-Turing-machine"" logic (raw combinatorial logic or simple state machines). 
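A small sketch of the behaviour described in the undervoltage-lockout entry above: a comparator with separate power-up (rising) and power-down (falling) thresholds, so the output does not chatter when the supply hovers near the limit. The 3.0 V / 2.7 V thresholds are assumptions for illustration only.

```python
# Sketch of an undervoltage-lockout comparator with hysteresis.

class UVLO:
    def __init__(self, v_rising=3.0, v_falling=2.7):
        assert v_rising > v_falling, "hysteresis requires v_rising > v_falling"
        self.v_rising = v_rising
        self.v_falling = v_falling
        self.enabled = False            # output starts locked out

    def update(self, v_in: float) -> bool:
        if not self.enabled and v_in >= self.v_rising:
            self.enabled = True         # supply high enough: release lockout
        elif self.enabled and v_in <= self.v_falling:
            self.enabled = False        # supply sagged: lock out again
        return self.enabled

uvlo = UVLO()
for v in (2.5, 2.9, 3.1, 2.8, 2.6, 3.2):     # battery voltage samples
    print(f"{v:.1f} V -> {'ON' if uvlo.update(v) else 'OFF'}")
# The 2.8 V sample stays ON (still above the falling threshold) thanks to hysteresis.
```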
One approach, referred to as ""hardsec"", uses FPGAs to implement non-Turing-machine security controls as a way of combining the security of hardware with the flexibility of software. Hardware backdoors are backdoors in hardware. Conceptionally related, a hardware Trojan (HT) is a malicious modification of electronic system, particularly in the context of integrated circuit. A physical unclonable function (PUF) is a physical entity that is embodied in a physical structure and is easy to evaluate but hard to predict. Further, an individual PUF device must be easy to make but practically impossible to duplicate, even given the exact manufacturing process that produced it. In this respect it is the hardware analog of a one-way function. The name ""physical unclonable function"" might be a little misleading as some PUFs are clonable, and most PUFs are noisy and therefore do not achieve the requirements for a function. Today, PUFs are usually implemented in integrated circuits and are typically used in applications with high security requirements. Many attacks on sensitive data and resources reported by organizations occur from within the org" https://en.wikipedia.org/wiki/Cope%27s%20rule,"Cope's rule, named after American paleontologist Edward Drinker Cope, postulates that population lineages tend to increase in body size over evolutionary time. It was never actually stated by Cope, although he favoured the occurrence of linear evolutionary trends. It is sometimes also known as the Cope–Depéret rule, because Charles Depéret explicitly advocated the idea. Theodor Eimer had also done so earlier. The term ""Cope's rule"" was apparently coined by Bernhard Rensch, based on the fact that Depéret had ""lionized Cope"" in his book. While the rule has been demonstrated in many instances, it does not hold true at all taxonomic levels, or in all clades. Larger body size is associated with increased fitness for a number of reasons, although there are also some disadvantages both on an individual and on a clade level: clades comprising larger individuals are more prone to extinction, which may act to limit the maximum size of organisms. Function Effects of growth Directional selection appears to act on organisms' size, whereas it exhibits a far smaller effect on other morphological traits, though it is possible that this perception may be a result of sample bias. This selectional pressure can be explained by a number of advantages, both in terms of mating success and survival rate. For example, larger organisms find it easier to avoid or fight off predators and capture prey, to reproduce, to kill competitors, to survive temporary lean times, and to resist rapid climatic changes. They may also potentially benefit from better thermal efficiency, increased intelligence, and a longer lifespan. Offsetting these advantages, larger organisms require more food and water, and shift from r to K-selection. Their longer generation time means a longer period of reliance on the mother, and on a macroevolutionary scale restricts the clade's ability to evolve rapidly in response to changing environments. 
Capping growth Left unfettered, the trend of ever-larger size would produc" https://en.wikipedia.org/wiki/Quality%20intellectual%20property%20metric,"The quality intellectual property metric (QIP) is an international standard, developed by Virtual Socket Interface Alliance (VSIA) for measuring Intellectual Property (IP) or Silicon intellectual property (SIP) quality and examining the practices used to design, integrate and support the SIP. SIP hardening is required to facilitate the reuse of IP in integrated circuit design. Background and importance Many computer processors use a system-on-a-chip (SoC) design, which is intended to include all of a device's functions on a single chip. As a result, these chips need to include numerous technical standards that the device will use. One solution to designing such a chip is the reuse of high quality IP. Reusing IP from others means that the chip designer does not need to redesign these elements. IP quality is the key to successful SoC designs, but it is one of the SoC’s most challenging problems. QIP metric allows both the IP designers and IP integrators to measure the quality of an IP core against a checklist of critical issues. IP integrators make use of the IP cores into their own design and deliver final integrated circuit for an application, e.g. an integrated circuit designer of iPhone main processor IC (ARM architecture CPU) integrates other IP cores like USB 2.0, DSP, MP4 decoder, etc., so that the additional features of USB 2.0, MP4 decoder, etc. can be easily embedded into the final IC. The QIP typically consists of interactive Microsoft Excel spreadsheets with sets of questions to be answered by the IP vendor. SIP quality measure framework Hong Kong Science and Technology Parks Corporation (HKSTP) and Hong Kong University of Science and Technology (HKUST) started to develop a SIP verification and quality measures framework in 2005, based on QIP metric. The objective is to develop a technical framework for SIP quality measures and evaluation based on QIP. Third-party SIP evaluation service is provided by HKSTP, so that IP integrators can know the qua" https://en.wikipedia.org/wiki/Solovay%E2%80%93Kitaev%20theorem,"In quantum information and computation, the Solovay–Kitaev theorem says, roughly, that if a set of single-qubit quantum gates generates a dense subset of SU(2), then that set can be used to approximate any desired quantum gate with a relatively short sequence of gates. This theorem is considered one of the most significant results in the field of quantum computation and was first announced by Robert M. Solovay in 1995 and independently proven by Alexei Kitaev in 1997. Michael Nielsen and Christopher M. Dawson have noted its importance in the field. A consequence of this theorem is that a quantum circuit of constant-qubit gates can be approximated to error (in operator norm) by a quantum circuit of gates from a desired finite universal gate set. By comparison, just knowing that a gate set is universal only implies that constant-qubit gates can be approximated by a finite circuit from the gate set, with no bound on its length. So, the Solovay–Kitaev theorem shows that this approximation can be made surprisingly efficient, thereby justifying that quantum computers need only implement a finite number of gates to gain the full power of quantum computation. Statement Let be a finite set of elements in SU(2) containing its own inverses (so implies ) and such that the group they generate is dense in SU(2). Consider some . 
Then there is a constant such that for any , there is a sequence of gates from of length such that . That is, approximates to operator norm error. Quantitative bounds The constant can be made to be for any fixed . However, there exist particular gate sets for which we can take , which makes the length of the gate sequence tight up to a constant factor. Proof idea The proof of the Solovay–Kitaev theorem proceeds by recursively constructing a gate sequence giving increasingly good approximations to . Suppose we have an approximation such that . Our goal is to find a sequence of gates approximating to error, for . By concatenating thi" https://en.wikipedia.org/wiki/List%20of%20convexity%20topics,"This is a list of convexity topics, by Wikipedia page. Alpha blending - the process of combining a translucent foreground color with a background color, thereby producing a new blended color. This is a convex combination of two colors allowing for transparency effects in computer graphics. Barycentric coordinates - a coordinate system in which the location of a point of a simplex (a triangle, tetrahedron, etc.) is specified as the center of mass, or barycenter, of masses placed at its vertices. The coordinates are non-negative for points in the convex hull. Borsuk's conjecture - a conjecture about the number of pieces required to cover a body with a larger diameter. Solved by Hadwiger for the case of smooth convex bodies. Bond convexity - a measure of the non-linear relationship between price and yield duration of a bond to changes in interest rates, the second derivative of the price of the bond with respect to interest rates. A basic form of convexity in finance. Carathéodory's theorem (convex hull) - If a point x of Rd lies in the convex hull of a set P, there is a subset of P with d+1 or fewer points such that x lies in its convex hull. Choquet theory - an area of functional analysis and convex analysis concerned with measures with support on the extreme points of a convex set C. Roughly speaking, all vectors of C should appear as 'averages' of extreme points. Complex convexity — extends the notion of convexity to complex numbers. Convex analysis - the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization. Convex combination - a linear combination of points where all coefficients are non-negative and sum to 1. All convex combinations are within the convex hull of the given points. Convex and Concave - a print by Escher in which many of the structure's features can be seen as both convex shapes and concave impressions. Convex body - a compact convex set in a Euclide" https://en.wikipedia.org/wiki/Z-factor,"The Z-factor is a measure of statistical effect size. It has been proposed for use in high-throughput screening (where it is also known as Z-prime), and commonly written as Z' to judge whether the response in a particular assay is large enough to warrant further attention. Background In high-throughput screens, experimenters often compare a large number (hundreds of thousands to tens of millions) of single measurements of unknown samples to positive and negative control samples. The particular choice of experimental conditions and measurements is called an assay. Large screens are expensive in time and resources. Therefore, prior to starting a large screen, smaller test (or pilot) screens are used to assess the quality of an assay, in an attempt to predict if it would be useful in a high-throughput setting. 
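The statement of the Solovay–Kitaev theorem above lost its symbols in extraction; the following is a hedged restatement in the usual form, where the exponent c is a constant coming from the proof and is not specified here.

```latex
% Hedged restatement of the Solovay--Kitaev theorem in its standard form.
% Let $\mathcal{G}\subset SU(2)$ be finite, closed under inverses, and generate
% a dense subgroup of $SU(2)$. Then there is a constant $c$ such that for every
% $U\in SU(2)$ and every $\varepsilon>0$ there exist $g_1,\dots,g_L\in\mathcal{G}$ with
\[
  \bigl\lVert\, U - g_L g_{L-1}\cdots g_1 \,\bigr\rVert \;\le\; \varepsilon,
  \qquad
  L \;=\; O\!\left(\log^{c}\!\frac{1}{\varepsilon}\right).
\]
```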
The Z-factor is an attempt to quantify the suitability of a particular assay for use in a full-scale, high-throughput screen. Definition The Z-factor is defined in terms of four parameters: the means () and standard deviations () of both the positive (p) and negative (n) controls (, , and , ). Given these values, the Z-factor is defined as: In practice, the Z-factor is estimated from the sample means and sample standard deviations Interpretation The following interpretations for the Z-factor are taken from: Note that by the standards of many types of experiments, a zero Z-factor would suggest a large effect size, rather than a borderline useless result as suggested above. For example, if σp=σn=1, then μp=6 and μn=0 gives a zero Z-factor. But for normally-distributed data with these parameters, the probability that the positive control value would be less than the negative control value is less than 1 in 105. Extreme conservatism is used in high throughput screening due to the large number of tests performed. Limitations The constant factor 3 in the definition of the Z-factor is motivated by the normal distribution, for which more than 99% of values" https://en.wikipedia.org/wiki/Board%20support%20package,"In embedded systems, a board support package (BSP) is the layer of software containing hardware-specific boot firmware and device drivers and other routines that allow a given embedded operating system, for example a real-time operating system (RTOS), to function in a given hardware environment (a motherboard), integrated with the embedded operating system. Software Third-party hardware developers who wish to support a given embedded operating system must create a BSP that allows that embedded operating system to run on their platform. In most cases, the embedded operating system image and software license, the BSP containing it, and the hardware are bundled together by the hardware vendor. BSPs are typically customizable, allowing the user to specify which drivers and routines should be included in the build based on their selection of hardware and software options. For instance, a particular single-board computer might be paired with several peripheral chips; in that case the BSP might include drivers for peripheral chips supported; when building the BSP image the user would specify which peripheral drivers to include based on their choice of hardware. Some suppliers also provide a root file system, a toolchain for building programs to run on the embedded system, and utilities to configure the device (while running) along with the BSP. Many embedded operating system providers provide template BSP's, developer assistance, and test suites to aid BSP developers to set up an embedded operating system on a new hardware platform. History The term BSP has been in use since 1981 when Hunter & Ready, the developers of the Versatile Real-Time Executive (VRTX), first coined the term to describe the hardware-dependent software needed to run VRTX on a specific hardware platform. Since the 1980s, it has been in wide use throughout the industry. Virtually all RTOS providers now use the term BSP. Example The Wind River Systems board support package for the ARM Integrator 92" https://en.wikipedia.org/wiki/Football%20Live,"Football Live was the name given to the project and computer system created and utilised by PA Sport to collect Real Time Statistics from major English & Scottish Football Matches and distribute to most leading media organisations. 
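The Z-factor definition above lost its symbols in extraction; the sketch below uses the standard formula Z = 1 − 3(σp + σn)/|μp − μn|, estimated from sample means and sample standard deviations as the text describes. The control data are synthetic.

```python
# Sketch of the Z-factor computed from positive/negative control samples,
# Z = 1 - 3*(sigma_p + sigma_n) / |mu_p - mu_n|, estimated from sample statistics.
import numpy as np

rng = np.random.default_rng(0)
positive = rng.normal(loc=100.0, scale=8.0, size=384)   # positive controls (synthetic)
negative = rng.normal(loc=20.0, scale=6.0, size=384)    # negative controls (synthetic)

def z_factor(pos, neg):
    mu_p, mu_n = np.mean(pos), np.mean(neg)
    sd_p, sd_n = np.std(pos, ddof=1), np.std(neg, ddof=1)
    return 1.0 - 3.0 * (sd_p + sd_n) / abs(mu_p - mu_n)

z = z_factor(positive, negative)
print(f"Z' = {z:.2f}")   # roughly 0.47-0.49 with this seed; values near 1 mean a wide separation band
```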
At the time of its operation, more than 99% of all football statistics displayed across Print, Internet, Radio & TV Media outlets would have been collected via Football Live. Background Prior to implementation of Football Live, the collection process consisted of a news reporter or press officer at each club telephoning the Press Association, relaying information on Teams, Goals and Half-Time & Full Time. The basis for Football Live was to have a representative of the Press Association (FBA - Football Analyst) at every ground. Throughout the whole match they would stay on an open line on a mobile phone to a Sports Information Processor (SIP), constantly relaying statistical information in real time for every: Shot Foul Free Kick Goal Cross Goal Kick Offside This information would be entered in real time and passed to media customers. The Football Live project was in use from Season 2001/02 until the service was taken over by Opta in 2013/14. Commercial Customers The most famous use for the Football Live data was for the Vidiprinter services on BBC & Sky Sports, allowing goals to be viewed on TV screens within 20 seconds of the event happening. League competitions From its inception in the 2001/02 season, the following leagues/competitions were fully covered by Football Live: English Premier League Championship League One League Two Conference Scottish Premier League English FA Cup English Football League Cup World Cup European Championships Champions League Europa League Football Analysts (FBA's) During the early development stages, the initial idea was to employ ex-referees to act as Football Analysts, but this was soon dismissed in favour of ex-professional footballers. The most famous of these were Brendon O" https://en.wikipedia.org/wiki/Versit%20Consortium,"The versit Consortium was a multivendor initiative founded by Apple Computer, AT&T, IBM and Siemens in the early 1990s in order to create Personal Data Interchange (PDI) technology, open specifications for exchanging personal data over the Internet, wired and wireless connectivity and Computer Telephony Integration (CTI). The Consortium started a number of projects to deliver open specifications aimed at creating industry standards. Computer Telephony Integration One of the most ambitious projects of the Consortium was the Versit CTI Encyclopedia (VCTIE), a 3,000-page, 6-volume set of specifications defining how computer and telephony systems are to interact and become interoperable. The Encyclopedia was built on existing technologies and specifications such as ECMA's call control specifications, TSAPI and industry expertise of the core technical team. The volumes are: Volume 1, Concepts & Terminology Volume 2, Configurations & Landscape Volume 3, Telephony Feature Set Volume 4, Call Flow Scenarios Volume 5, CTI Protocols Volume 6, Versit TSAPI Appendices include: Versit TSAPI header file Protocol 1 ASN.1 description Protocol 2 ASN.1 description Versit Server Mapper Interface header file Versit TSDI header file The core Versit CTI Encyclopedia technical team was composed of David H. Anderson and Marcus W. Fath from IBM, Frédéric Artru and Michael Bayer from Apple Computer, James L. Knight and Steven Rummel from AT&T (then Lucent Technologies), Tom Miller from Siemens, and consultants Ellen Feaheny and Charles Hudson. Upon completion, the Versit CTI Encyclopedia was transferred to the ECTF and has been adopted in the form of ECTF C.001. This model represents the basis for the ECTF's call control efforts. 
Though the Versit CTI Encyclopedia ended up influencing many products, there was one full compliant implementation of the specifications that was brought to market: Odisei, a French company founded by team member Frédéric Artru developed the IntraSw" https://en.wikipedia.org/wiki/Comb%20filter,"In signal processing, a comb filter is a filter implemented by adding a delayed version of a signal to itself, causing constructive and destructive interference. The frequency response of a comb filter consists of a series of regularly spaced notches in between regularly spaced peaks (sometimes called teeth) giving the appearance of a comb. Comb filters exist in two forms, feedforward and feedback; which refer to the direction in which signals are delayed before they are added to the input. Comb filters may be implemented in discrete time or continuous time forms which are very similar. Applications Comb filters are employed in a variety of signal processing applications, including: Cascaded integrator–comb (CIC) filters, commonly used for anti-aliasing during interpolation and decimation operations that change the sample rate of a discrete-time system. 2D and 3D comb filters implemented in hardware (and occasionally software) in PAL and NTSC analog television decoders, reduce artifacts such as dot crawl. Audio signal processing, including delay, flanging, physical modelling synthesis and digital waveguide synthesis. If the delay is set to a few milliseconds, a comb filter can model the effect of acoustic standing waves in a cylindrical cavity or in a vibrating string. In astronomy the astro-comb promises to increase the precision of existing spectrographs by nearly a hundredfold. In acoustics, comb filtering can arise as an unwanted artifact. For instance, two loudspeakers playing the same signal at different distances from the listener, create a comb filtering effect on the audio. In any enclosed space, listeners hear a mixture of direct sound and reflected sound. The reflected sound takes a longer, delayed path compared to the direct sound, and a comb filter is created where the two mix at the listener. Similarly, comb filtering may result from mono mixing of multiple mics, hence the 3:1 rule of thumb that neighboring mics should be separated at least t" https://en.wikipedia.org/wiki/Electromagnetic%20field%20solver,"Electromagnetic field solvers (or sometimes just field solvers) are specialized programs that solve (a subset of) Maxwell's equations directly. They form a part of the field of electronic design automation, or EDA, and are commonly used in the design of integrated circuits and printed circuit boards. They are used when a solution from first principles or the highest accuracy is required. Introduction The extraction of parasitic circuit models is essential for various aspects of physical verification such as timing, signal integrity, substrate coupling, and power grid analysis. As circuit speeds and densities have increased, the need has grown to account accurately for parasitic effects for more extensive and more complicated interconnect structures. In addition, the electromagnetic complexity has grown as well, from resistance and capacitance to inductance, and now even full electromagnetic wave propagation. This increase in complexity has also grown for the analysis of passive devices such as integrated inductors. Electromagnetic behavior is governed by Maxwell's equations, and all parasitic extraction requires solving some form of Maxwell's equations. 
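As a companion to the comb filter entry above, here is a minimal sketch of a feedforward comb, y[n] = x[n] + a·x[n−K], and its characteristic notched magnitude response. The 1 ms delay and the 0.9 gain are illustrative choices, not values from the text.

```python
# Sketch of a feedforward comb filter and its notched magnitude response.
import numpy as np
from scipy.signal import freqz, lfilter

fs = 48000                       # sample rate, Hz
K  = 48                          # 1 ms delay -> teeth spaced fs/K = 1 kHz apart
a  = 0.9                         # gain of the delayed path

b = np.zeros(K + 1)
b[0], b[K] = 1.0, a              # H(z) = 1 + a*z^(-K)  (feedforward comb)

w, h = freqz(b, 1, worN=8192, fs=fs)
for f in (0, 500, 1000, 1500):   # peaks at multiples of 1 kHz, notches halfway between
    print(f"{f:5d} Hz : {20*np.log10(abs(h[np.argmin(abs(w - f))])):6.1f} dB")

# Filtering a signal is just an FIR convolution with these coefficients:
x = np.random.default_rng(1).standard_normal(fs)    # 1 s of white noise
y = lfilter(b, 1, x)
```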
That form may be a simple analytic parallel plate capacitance equation or may involve a full numerical solution for a complex 3D geometry with wave propagation. In layout extraction, analytic formulas for simple or simplified geometry can be used where accuracy is less important than speed. Still, when the geometric configuration is not simple, and accuracy demands do not allow simplification, a numerical solution of the appropriate form of Maxwell's equations must be employed. The appropriate form of Maxwell's equations is typically solved by one of two classes of methods. The first uses a differential form of the governing equations and requires the discretization (meshing) of the entire domain in which the electromagnetic fields reside. Two of the most common approaches in this first class are the finite diffe" https://en.wikipedia.org/wiki/Common%20normal%20%28robotics%29,"In robotics the common normal of two non-intersecting joint axes is a line perpendicular to both axes. The common normal can be used to characterize robot arm links, by using the ""common normal distance"" and the angle between the link axes in a plane perpendicular to the common normal. When two consecutive joint axes are parallel, the common normal is not unique and an arbitrary common normal may be used, usually one that passes through the center of a coordinate system. The common normal is widely used in the representation of the frames of reference for robot joints and links, and the selection of minimal representations with the Denavit–Hartenberg parameters. See also Denavit–Hartenberg parameters Forward kinematics Robotic arm" https://en.wikipedia.org/wiki/Computer%20network%20diagram,"A computer network diagram is a schematic depicting the nodes and connections amongst nodes in a computer network or, more generally, any telecommunications network. Computer network diagrams form an important part of network documentation. Symbolization Readily identifiable icons are used to depict common network appliances, e.g. routers, and the style of lines between them indicates the type of connection. Clouds are used to represent networks external to the one pictured for the purposes of depicting connections between internal and external devices, without indicating the specifics of the outside network. For example, in the hypothetical local area network pictured to the right, three personal computers and a server are connected to a switch; the server is further connected to a printer and a gateway router, which is connected via a WAN link to the Internet. Depending on whether the diagram is intended for formal or informal use, certain details may be lacking and must be determined from context. For example, the sample diagram does not indicate the physical type of connection between the PCs and the switch, but since a modern LAN is depicted, Ethernet may be assumed. If the same style of line was used in a WAN (wide area network) diagram, however, it may indicate a different type of connection. At different scales diagrams may represent various levels of network granularity. At the LAN level, individual nodes may represent individual physical devices, such as hubs or file servers, while at the WAN level, individual nodes may represent entire cities. In addition, when the scope of a diagram crosses the common LAN/MAN/WAN boundaries, representative hypothetical devices may be depicted instead of showing all actually existing nodes. 
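For the common normal entry above, a short numerical sketch: given a point and a direction on each of two skew joint axes, the common normal direction is the cross product of the two directions, and the common normal distance is the projection of the inter-point vector onto that direction. The example axes are invented for illustration.

```python
# Sketch: common normal direction and common normal distance between two
# non-intersecting (skew) joint axes, each given by a point and a direction.
import numpy as np

def common_normal(p1, d1, p2, d2, tol=1e-9):
    d1, d2 = np.asarray(d1, float), np.asarray(d2, float)
    n = np.cross(d1, d2)                  # direction perpendicular to both axes
    if np.linalg.norm(n) < tol:
        raise ValueError("axes are parallel: the common normal is not unique")
    n_hat = n / np.linalg.norm(n)
    distance = abs(np.dot(np.asarray(p2, float) - np.asarray(p1, float), n_hat))
    return n_hat, distance                # the link's "common normal distance"

# Two skew axes: one along z through the origin, one along x through (0, 1, 2).
n_hat, a = common_normal(p1=[0, 0, 0], d1=[0, 0, 1],
                         p2=[0, 1, 2], d2=[1, 0, 0])
print(n_hat, a)      # direction [0, 1, 0] (sign is arbitrary); distance a = 1.0
```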
For example, if a network appliance is intended to be connected through the Internet to many end-user mobile devices, only a single such device may be depicted for the purposes of showing the general relationship between the ap" https://en.wikipedia.org/wiki/Network%20on%20a%20chip,"A network on a chip or network-on-chip (NoC or ) is a network-based communications subsystem on an integrated circuit (""microchip""), most typically between modules in a system on a chip (SoC). The modules on the IC are typically semiconductor IP cores schematizing various functions of the computer system, and are designed to be modular in the sense of network science. The network on chip is a router-based packet switching network between SoC modules. NoC technology applies the theory and methods of computer networking to on-chip communication and brings notable improvements over conventional bus and crossbar communication architectures. Networks-on-chip come in many network topologies, many of which are still experimental as of 2018. In the 2000s, researchers began to propose a type of on-chip interconnection in the form of packet switching networks in order to address the scalability issues of bus-based design. Earlier research had proposed designs that route data packets instead of routing dedicated wires. Then, the concept of ""network on chips"" was proposed in 2002. NoCs improve the scalability of systems-on-chip and the power efficiency of complex SoCs compared to other communication subsystem designs. They are an emerging technology, with projections for large growth in the near future as multicore computer architectures become more common. Structure NoCs can span synchronous and asynchronous clock domains, known as clock domain crossing, or use unclocked asynchronous logic. NoCs support globally asynchronous, locally synchronous electronics architectures, allowing each processor core or functional unit on the System-on-Chip to have its own clock domain. Architectures NoC architectures typically model sparse small-world networks (SWNs) and scale-free networks (SFNs) to limit the number, length, area and power consumption of interconnection wires and point-to-point connections. Topology The topology is the first fundamental aspect of NoC design" https://en.wikipedia.org/wiki/Conway%20chained%20arrow%20notation,"Conway chained arrow notation, created by mathematician John Horton Conway, is a means of expressing certain extremely large numbers. It is simply a finite sequence of positive integers separated by rightward arrows, e.g. . As with most combinatorial notations, the definition is recursive. In this case the notation eventually resolves to being the leftmost number raised to some (usually enormous) integer power. Definition and overview A ""Conway chain"" is defined as follows: Any positive integer is a chain of length . A chain of length n, followed by a right-arrow → and a positive integer, together form a chain of length . Any chain represents an integer, according to the six rules below. Two chains are said to be equivalent if they represent the same integer. Let denote positive integers and let denote the unchanged remainder of the chain. Then: An empty chain (or a chain of length 0) is equal to The chain represents the number . The chain represents the number . The chain represents the number (see Knuth's up-arrow notation) The chain represents the same number as the chain Else, the chain represents the same number as the chain . 
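The symbols in the Conway chained-arrow rules above were lost in extraction; the sketch below implements the standard evaluation rules (an empty chain is 1, p→q = p^q, p→q→r = p with r Knuth up-arrows applied to q, trailing 1s are dropped, and the recursive rule handles longer chains). It is only usable for the very smallest chains, since the values explode almost immediately.

```python
# Hedged sketch of the standard Conway chained-arrow evaluation rules.

def up(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, applied to b."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

def conway(chain):
    c = list(chain)
    if not c:                      # empty chain
        return 1
    if len(c) == 1:                # (p) = p
        return c[0]
    if len(c) == 2:                # p -> q = p**q
        return c[0] ** c[1]
    if c[-1] == 1:                 # X -> 1       = X
        return conway(c[:-1])
    if c[-2] == 1:                 # X -> 1 -> q  = X
        return conway(c[:-2])
    if len(c) == 3:                # p -> q -> r  = p (r arrows) q
        return up(c[0], c[2], c[1])
    # X -> p -> q = X -> (X -> (p-1) -> q) -> (q-1)   for p, q >= 2
    X, p, q = c[:-2], c[-2], c[-1]
    return conway(X + [conway(X + [p - 1, q]), q - 1])

print(conway([3, 3]))        # 27
print(conway([2, 2, 2]))     # 4
print(conway([2, 3, 2]))     # 2^^3 = 16
print(conway([2, 2, 2, 2]))  # also reduces to 4
```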
Properties A chain evaluates to a perfect power of its first number Therefore, is equal to is equivalent to is equal to is equivalent to (not to be confused with ) Interpretation One must be careful to treat an arrow chain as a whole. Arrow chains do not describe the iterated application of a binary operator. Whereas chains of other infixed symbols (e.g. 3 + 4 + 5 + 6 + 7) can often be considered in fragments (e.g. (3 + 4) + 5 + (6 + 7)) without a change of meaning (see associativity), or at least can be evaluated step by step in a prescribed order, e.g. 34567 from right to left, that is not so with Conway's arrow chains. For example: The sixth definition rule is the core: A chain of 4 or more elements ending with 2 or higher becomes a chain of the same length with a (usually vastly) increased penult" https://en.wikipedia.org/wiki/Legendre%27s%20constant,"Legendre's constant is a mathematical constant occurring in a formula constructed by Adrien-Marie Legendre to approximate the behavior of the prime-counting function . The value that corresponds precisely to its asymptotic behavior is now known to be 1. Examination of available numerical data for known values of led Legendre to an approximating formula. Legendre constructed in 1808 the formula where (), as giving an approximation of with a ""very satisfying precision"". Today, one defines the value of such that which is solved by putting provided that this limit exists. Not only is it now known that the limit exists, but also that its value is equal to somewhat less than Legendre's Regardless of its exact value, the existence of the limit implies the prime number theorem. Pafnuty Chebyshev proved in 1849 that if the limit B exists, it must be equal to 1. An easier proof was given by Pintz in 1980. It is an immediate consequence of the prime number theorem, under the precise form with an explicit estimate of the error term (for some positive constant a, where O(…) is the big O notation), as proved in 1899 by Charles de La Vallée Poussin, that B indeed is equal to 1. (The prime number theorem had been proved in 1896, independently by Jacques Hadamard and La Vallée Poussin, but without any estimate of the involved error term). Being evaluated to such a simple number has made the term Legendre's constant mostly only of historical value, with it often (technically incorrectly) being used to refer to Legendre's first guess 1.08366... instead." https://en.wikipedia.org/wiki/List%20of%20misnamed%20theorems,"This is a list of misnamed theorems in mathematics. It includes theorems (and lemmas, corollaries, conjectures, laws, and perhaps even the odd object) that are well known in mathematics, but which are not named for the originator. That is, these items on this list illustrate Stigler's law of eponymy (which is not, of course, due to Stephen Stigler, who credits Robert K Merton). == Applied mathematics == Benford's law. This was first stated in 1881 by Simon Newcomb, and rediscovered in 1938 by Frank Benford. The first rigorous formulation and proof seems to be due to Ted Hill in 1988.; see also the contribution by Persi Diaconis. Bertrand's ballot theorem. This result concerning the probability that the winner of an election was ahead at each step of ballot counting was first published by W. A. Whitworth in 1878, but named after Joseph Louis François Bertrand who rediscovered it in 1887. A common proof uses André's reflection method, though the proof by Désiré André did not use any reflections. Algebra Burnside's lemma. 
This was stated and proved without attribution in Burnside's 1897 textbook, but it had previously been discussed by Augustin Cauchy, in 1845, and by Georg Frobenius in 1887. Cayley–Hamilton theorem. The theorem was first proved in the easy special case of 2×2 matrices by Cayley, and later for the case of 4×4 matrices by Hamilton. But it was only proved in general by Frobenius in 1878. Hölder's inequality. This inequality was first established by Leonard James Rogers, and published in 1888. Otto Hölder discovered it independently, and published it in 1889. Marden's theorem. This theorem relating the location of the zeros of a complex cubic polynomial to the zeros of its derivative was named by Dan Kalman after Kalman read it in a 1966 book by Morris Marden, who had first written about it in 1945. But, as Marden had himself written, its original proof was by Jörg Siebeck in 1864. Pólya enumeration theorem. This was proven in 1927 in a difficult pape" https://en.wikipedia.org/wiki/Einstein%20notation,"In mathematics, especially the usage of linear algebra in mathematical physics, Einstein notation (also known as the Einstein summation convention or Einstein summation notation) is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916. Introduction Statement of convention According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see Free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set , is simplified by the convention to: The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. That is, in this context should be understood as the second component of rather than the square of (this can occasionally lead to ambiguity). The upper index position in is because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see below). Typically, would be equivalent to the traditional . In general relativity, a common convention is that the Greek alphabet is used for space and time components, where indices take on values 0, 1, 2, or 3 (frequently used letters are ), the Latin alphabet is used for spatial components only, where indices take on values 1, 2, or 3 (frequently used letters are ), In general, indices can range over any indexing set, including an infinite set. This should not be confused with a typographically similar convention used to distinguish between tensor index notation and the closely related but distinct basis-independent abstract index notation. An index that is summed over is a summation index, in this case """". I" https://en.wikipedia.org/wiki/LOBSTER,"LOBSTER was a European network monitoring system, based on passive monitoring of traffic on the internet. Its functions were to gather traffic information as a basis for improving internet performance, and to detect security incidents. Objectives To build an advanced pilot European Internet traffic monitoring infrastructure based on passive network monitoring sensors. 
To develop novel performance and security monitoring applications, enabled by the availability of the passive network monitoring infrastructure, and to develop the appropriate data anonymisation tools for prohibiting unauthorised access or tampering of the original traffic data. History The project originated from SCAMPI, a European project active in 2004–5, aiming to develop a scalable monitoring platform for the Internet. LOBSTER was funded by the European Commission and ceased in 2007. It fed into ""IST 2.3.5 Research Networking testbeds"", which aimed to contribute to improving internet infrastructure in Europe. 36 LOBSTER sensors were deployed in nine countries across Europe by several organisations. At any one time the system could monitor traffic across 2.3 million IP addresses. It was claimed that more than 400,000 Internet attacks were detected by LOBSTER. Passive monitoring LOBSTER was based on passive network traffic monitoring. Instead of collecting flow-level traffic summaries or actively probing the network, passive network monitoring records all IP packets (both headers and payloads) that flow through the monitored link. This enables passive monitoring methods to record complete information about the actual traffic of the network, which allows for tackling monitoring problems more accurately compared to methods based on flow-level statistics or active monitoring. The passive monitoring applications running on the sensors were developed on top of MAPI (Monitoring Application Programming Interface), an expressive programming interface for building network monitoring applications, deve" https://en.wikipedia.org/wiki/Empirical%20software%20engineering,"Empirical software engineering (ESE) is a subfield of software engineering (SE) research that uses empirical research methods to study and evaluate an SE phenomenon of interest. The phenomenon may refer to software development tools/technology, practices, processes, policies, or other human and organizational aspects. ESE has roots in experimental software engineering, but as the field has matured the need and acceptance for both quantitative and qualitative research has grown. Today, common research methods used in ESE for primary and secondary research are the following: Primary research (experimentation, case study research, survey research, simulations in particular software Process simulation) Secondary research methods (Systematic reviews, Systematic mapping studies, rapid reviews, tertiary review) Teaching empirical software engineering Some comprehensive books for students, professionals and researchers interested in ESE are available. Research community Journals, conferences, and communities devoted specifically to ESE: Empirical Software Engineering: An International Journal International Symposium on Empirical Software Engineering and Measurement International Software Engineering Research Network (ISERN)" https://en.wikipedia.org/wiki/Gating%20%28telecommunication%29,"In telecommunication, the term gating has the following meanings: The process of selecting only those portions of a wave between specified time intervals or between specified amplitude limits. The controlling of signals by means of combinational logic elements. A process in which a predetermined set of conditions, when established, permits a second process to occur. 
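To illustrate the first sense of gating defined above (selecting only those portions of a wave between specified time intervals or between specified amplitude limits), here is a generic sketch in Python; the function name, sample rate and limits are invented for the example and are not part of the source.

def gate(samples, sample_rate, t_start, t_stop, a_min=None, a_max=None):
    """Keep samples inside [t_start, t_stop) seconds and inside the amplitude
    limits [a_min, a_max]; everything outside the gate is zeroed."""
    gated = []
    for n, x in enumerate(samples):
        t = n / sample_rate
        in_time = t_start <= t < t_stop
        in_amp = (a_min is None or x >= a_min) and (a_max is None or x <= a_max)
        gated.append(x if (in_time and in_amp) else 0.0)
    return gated

# Example: a ramp sampled at 1 kHz, gated to 2-6 ms and to amplitudes at or below 0.5.
signal = [n / 10.0 for n in range(10)]          # 0.0, 0.1, ..., 0.9
print(gate(signal, 1000, 0.002, 0.006, a_max=0.5))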
Telecommunications engineering Signal processing" https://en.wikipedia.org/wiki/Apodization,"In signal processing, apodization (from Greek ""removing the foot"") is the modification of the shape of a mathematical function. The function may represent an electrical signal, an optical transmission, or a mechanical structure. In optics, it is primarily used to remove Airy disks caused by diffraction around an intensity peak, improving the focus. Apodization in electronics Apodization in signal processing The term apodization is used frequently in publications on Fourier-transform infrared (FTIR) signal processing. An example of apodization is the use of the Hann window in the fast Fourier transform analyzer to smooth the discontinuities at the beginning and end of the sampled time record. Apodization in digital audio An apodizing filter can be used in digital audio processing instead of the more common brick-wall filters, in order to reduce the pre- and post-ringing that the latter introduces. Apodization in mass spectrometry During oscillation within an Orbitrap, ion transient signal may not be stable until the ions settle into their oscillations. Toward the end, subtle ion collisions have added up to cause noticeable dephasing. This presents a problem for the Fourier transformation, as it averages the oscillatory signal across the length of the time-domain measurement. The software allows “apodization”, the removal of the front and back section of the transient signal from consideration in the FT calculation. Thus, apodization improves the resolution of the resulting mass spectrum. Another way to improve the quality of the transient is to wait to collect data until ions have settled into stable oscillatory motion within the trap. Apodization in nuclear magnetic resonance spectroscopy Apodization is applied to NMR signals before discrete Fourier Transformation. Typically, NMR signals are truncated due to time constraints (indirect dimension) or to obtain a higher signal-to-noise ratio. In order to reduce truncation artifacts, the signals are subjected " https://en.wikipedia.org/wiki/Phase%20response,"In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter. Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input, usually a function of the frequency. Similarly, phase response is the phase of the output with the input as reference. The input is defined as zero phase. A phase response is not limited to lying between 0° and 360°, as phase can accumulate to any amount of time. See also Group delay and phase delay" https://en.wikipedia.org/wiki/Cypress%20PSoC,"PSoC (programmable system on a chip) is a family of microcontroller integrated circuits by Cypress Semiconductor. These chips include a CPU core and mixed-signal arrays of configurable integrated analog and digital peripherals. History In 2002, Cypress began shipping commercial quantities of the PSoC 1. To promote the PSoC, Cypress sponsored a ""PSoC Design Challenge"" in Circuit Cellar magazine in 2002 and 2004. In April 2013, Cypress released the fourth generation, PSoC 4. 
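Returning briefly to the apodization example mentioned above (a Hann window applied before an FFT to smooth the discontinuities at the ends of the sampled record): a minimal sketch assuming NumPy, with an arbitrary test tone; the signal parameters are illustrative only.

import numpy as np

# A sampled tone whose length is not a whole number of periods, so the raw
# FFT is affected by the end-point discontinuity described above.
n = 256
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 10.5 * t)

# Apodize: taper the start and end of the record with a Hann window before
# transforming, which suppresses the discontinuity at the cost of some
# spectral broadening.
window = np.hanning(n)
spectrum_raw = np.abs(np.fft.rfft(signal))
spectrum_apodized = np.abs(np.fft.rfft(signal * window))

print("peak bin (raw):      ", spectrum_raw.argmax())
print("peak bin (apodized): ", spectrum_apodized.argmax())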
The PSoC 4 features a 32-bit ARM Cortex-M0 CPU, with programmable analog blocks (operational amplifiers and comparators), programmable digital blocks (PLD-based UDBs), programmable routing and flexible GPIO (route any function to any pin), a serial communication block (for SPI, UART, I²C), a timer/counter/PWM block and more. PSoC is used in devices as simple as Sonicare toothbrushes and Adidas sneakers, and as complex as the TiVo set-top box. One PSoC implements capacitive sensing for the touch-sensitive scroll wheel on the Apple iPod click wheel. In 2014, Cypress extended the PSoC 4 family by integrating a Bluetooth Low Energy radio along with a PSoC 4 Cortex-M0-based SoC in a single, monolithic die. In 2016, Cypress released PSoC 4 S-Series, featuring ARM Cortex-M0+ CPU. Overview A PSoC integrated circuit is composed of a core, configurable analog and digital blocks, and programmable routing and interconnect. The configurable blocks in a PSoC are the biggest difference from other microcontrollers. PSoC has three separate memory spaces: paged SRAM for data, Flash memory for instructions and fixed data, and I/O registers for controlling and accessing the configurable logic blocks and functions. The device is created using SONOS technology. PSoC resembles an ASIC: blocks can be assigned a wide range of functions and interconnected on-chip. Unlike an ASIC, there is no special manufacturing process required to create the custom configuration — only startup code that is created by Cypress' " https://en.wikipedia.org/wiki/ION%20LMD,"ION LMD system is one of the laser microdissection systems and a name of device that follows Gravity-Assisted Microdissection method, also known as GAM method. This non-contact laser microdissection system makes cell isolation for further genetic analysis possible. It is the first developed laser microdissection system in Asia. History At first, proto type of ION LMD system was developed in 2004. The first generation of ION LMD was developed in 2005 and then the second generation(so-called G2) was developed in 2008. At last, the third generation(so-called ION LMD Pro) was developed in 2012. Manufacturer JungWoo F&B was founded in 1994, and offers various factory automation products for clients in semiconductor, consumer electronics, LCD, automotive manufacturing and ship-building industries. In 2003, the company entered the bio-mechanics business for the medical laboratory market and developed an ION LMD system which is utilized in cancer research. Awards This ION LMD system has got some reliable awards. 2005 Excellent Machine by Ministry of Commerce, Industry and Energy, Republic of Korea 2005 Best Medical Device by Korean Medical Association 2006 New Excellent Product by Ministry of Commerce, Industry and Energy, Republic of Korea" https://en.wikipedia.org/wiki/Systema%20Naturae,"(originally in Latin written with the ligature æ) is one of the major works of the Swedish botanist, zoologist and physician Carl Linnaeus (1707–1778) and introduced the Linnaean taxonomy. Although the system, now known as binomial nomenclature, was partially developed by the Bauhin brothers, Gaspard and Johann, Linnaeus was first to use it consistently throughout his book. The first edition was published in 1736. The full title of the 10th edition (1758), which was the most important one, was or translated: ""System of nature through the three kingdoms of nature, according to classes, orders, genera and species, with characters, differences, synonyms, places"". 
The tenth edition of this book (1758) is considered the starting point of zoological nomenclature. In 1766–1768 Linnaeus published the much enhanced 12th edition, the last under his authorship. Another again enhanced work in the same style and titled """" was published by Johann Friedrich Gmelin between 1788 and 1793. Since at least the early 20th century, zoologists have commonly recognized this as the last edition belonging to this series. Overview Linnaeus (later known as ""Carl von Linné"", after his ennoblement in 1761) published the first edition of in the year 1735, during his stay in the Netherlands. As was customary for the scientific literature of its day, the book was published in Latin. In it, he outlined his ideas for the hierarchical classification of the natural world, dividing it into the animal kingdom (), the plant kingdom (), and the ""mineral kingdom"" (). Linnaeus's Systema Naturae lists only about 10,000 species of organisms, of which about 6,000 are plants and 4,236 are animals. According to the historian of botany William T. Stearn, ""Even in 1753 he believed that the number of species of plants in the whole world would hardly reach 10,000; in his whole career he named about 7,700 species of flowering plants."" Linnaeus developed his classification of the plant kingdom in an attempt to " https://en.wikipedia.org/wiki/Taste,"The gustatory system or sense of taste is the sensory system that is partially responsible for the perception of taste (flavor). Taste is the perception stimulated when a substance in the mouth reacts chemically with taste receptor cells located on taste buds in the oral cavity, mostly on the tongue. Taste, along with the sense of smell and trigeminal nerve stimulation (registering texture, pain, and temperature), determines flavors of food and other substances. Humans have taste receptors on taste buds and other areas, including the upper surface of the tongue and the epiglottis. The gustatory cortex is responsible for the perception of taste. The tongue is covered with thousands of small bumps called papillae, which are visible to the naked eye. Within each papilla are hundreds of taste buds. The exception to this is the filiform papillae that do not contain taste buds. There are between 2000 and 5000 taste buds that are located on the back and front of the tongue. Others are located on the roof, sides and back of the mouth, and in the throat. Each taste bud contains 50 to 100 taste receptor cells. Taste receptors in the mouth sense the five basic tastes: sweetness, sourness, saltiness, bitterness, and savoriness (also known as savory or umami). Scientific experiments have demonstrated that these five tastes exist and are distinct from one another. Taste buds are able to tell different tastes apart when they interact with different molecules or ions. Sweetness, savoriness, and bitter tastes are triggered by the binding of molecules to G protein-coupled receptors on the cell membranes of taste buds. Saltiness and sourness are perceived when alkali metals or hydrogen ions meet taste buds, respectively. 
The basic tastes contribute only partially to the sensation and flavor of food in the mouth—other factors include smell, detected by the olfactory epithelium of the nose; texture, detected through a variety of mechanoreceptors, muscle nerves, etc.; temperature, det" https://en.wikipedia.org/wiki/Knuth%27s%20up-arrow%20notation,"In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976. In his 1947 paper, R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc. Various notations have been used to represent hyperoperations. One such notation is . Knuth's up-arrow notation is another. For example: the single arrow represents exponentiation (iterated multiplication) the double arrow represents tetration (iterated exponentiation) the triple arrow represents pentation (iterated tetration) The general definition of the up-arrow notation is as follows (for ): Here, stands for n arrows, so for example The square brackets are another notation for hyperoperations. Introduction The hyperoperations naturally extend the arithmetical operations of addition and multiplication as follows. Addition by a natural number is defined as iterated incrementation: Multiplication by a natural number is defined as iterated addition: For example, Exponentiation for a natural power is defined as iterated multiplication, which Knuth denoted by a single up-arrow: For example, Tetration is defined as iterated exponentiation, which Knuth denoted by a “double arrow”: For example, Expressions are evaluated from right to left, as the operators are defined to be right-associative. According to this definition, etc. This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here. Pentation, defined as iterated tetration, is represented by the “triple arrow”: Hexation, defined as iterated pentation, is " https://en.wikipedia.org/wiki/List%20of%20finite-dimensional%20Nichols%20algebras,"In mathematics, a Nichols algebra is a Hopf algebra in a braided category assigned to an object V in this category (e.g. a braided vector space). The Nichols algebra is a quotient of the tensor algebra of V enjoying a certain universal property and is typically infinite-dimensional. Nichols algebras appear naturally in any pointed Hopf algebra and enabled their classification in important cases. The most well known examples for Nichols algebras are the Borel parts of the infinite-dimensional quantum groups when q is no root of unity, and the first examples of finite-dimensional Nichols algebras are the Borel parts of the Frobenius–Lusztig kernel (small quantum group) when q is a root of unity. The following article lists all known finite-dimensional Nichols algebras where is a Yetter–Drinfel'd module over a finite group , where the group is generated by the support of . For more details on Nichols algebras see Nichols algebra. There are two major cases: abelian, which implies is diagonally braided . nonabelian. The rank is the number of irreducible summands in the semisimple Yetter–Drinfel'd module . 
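To make the right-to-left recursion in the Knuth's up-arrow excerpt above concrete, here is a direct Python transcription of the definition (a sketch only; only very small arguments are feasible, since the values grow explosively).

def up_arrow(a, n, b):
    """Knuth's a followed by n up-arrows and b: one arrow is exponentiation,
    and n arrows iterate the (n-1)-arrow operation, expanding from the right."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3 = 27
print(up_arrow(3, 2, 3))   # tetration: 3^(3^3) = 7625597484987
print(up_arrow(2, 3, 3))   # pentation: 2^^(2^^2) = 2^^4 = 65536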
The irreducible summands are each associated to a conjugacy class and an irreducible representation of the centralizer . To any Nichols algebra there is by attached a generalized root system and a Weyl groupoid. These are classified in. In particular several Dynkin diagrams (for inequivalent types of Weyl chambers). Each Dynkin diagram has one vertex per irreducible and edges depending on their braided commutators in the Nichols algebra. The Hilbert series of the graded algebra is given. An observation is that it factorizes in each case into polynomials . We only give the Hilbert series and dimension of the Nichols algebra in characteristic . Note that a Nichols algebra only depends on the braided vector space and can therefore be realized over many different groups. Sometimes there are two or three Nichols algebras with different and non" https://en.wikipedia.org/wiki/Steinhaus%20longimeter,"The Steinhaus longimeter, patented by the professor Hugo Steinhaus, is an instrument used to measure the lengths of curves on maps. Description It is a transparent sheet of three grids, turned against each other by 30 degrees, each consisting of parallel lines spaced at equal distances 3.82 mm. The measurement is done by counting crossings of the curve with grid lines. The number of crossings is the approximate length of the curve in millimetres. The design of the Steinhaus longimeter can be seen as an application of the Crofton formula, according to which the length of a curve equals the expected number of times it is crossed by a random line. See also Opisometer, a mechanical device for measuring curve length by rolling a small wheel along the curve Dot planimeter, a similar transparency-based device for estimating area, based on Pick's theorem" https://en.wikipedia.org/wiki/List%20of%20shapes%20with%20known%20packing%20constant,"The packing constant of a geometric body is the largest average density achieved by packing arrangements of congruent copies of the body. For most bodies the value of the packing constant is unknown. The following is a list of bodies in Euclidean spaces whose packing constant is known. Fejes Tóth proved that in the plane, a point symmetric body has a packing constant that is equal to its translative packing constant and its lattice packing constant. Therefore, any such body for which the lattice packing constant was previously known, such as any ellipse, consequently has a known packing constant. In addition to these bodies, the packing constants of hyperspheres in 8 and 24 dimensions are almost exactly known." https://en.wikipedia.org/wiki/Reverse%20engineering,"Reverse engineering (also known as backwards engineering or back engineering) is a process or method through which one attempts to understand through deductive reasoning how a previously made device, process, system, or piece of software accomplishes a task with very little (if any) insight into exactly how it does so. Depending on the system under consideration and the technologies employed, the knowledge gained during reverse engineering can help with repurposing obsolete objects, doing security analysis, or learning how something works. Although the process is specific to the object on which it is being performed, all reverse engineering processes consist of three basic steps: information extraction, modeling, and review. Information extraction is the practice of gathering all relevant information for performing the operation. 
Modeling is the practice of combining the gathered information into an abstract model, which can be used as a guide for designing the new object or system. Review is the testing of the model to ensure the validity of the chosen abstract. Reverse engineering is applicable in the fields of computer engineering, mechanical engineering, design, electronic engineering, software engineering, chemical engineering, and systems biology. Overview There are many reasons for performing reverse engineering in various fields. Reverse engineering has its origins in the analysis of hardware for commercial or military advantage. However, the reverse engineering process may not always be concerned with creating a copy or changing the artifact in some way. It may be used as part of an analysis to deduce design features from products with little or no additional knowledge about the procedures involved in their original production. In some cases, the goal of the reverse engineering process can simply be a redocumentation of legacy systems. Even when the reverse-engineered product is that of a competitor, the goal may not be to copy it but to perform competit" https://en.wikipedia.org/wiki/Lab-on-a-chip,"A lab-on-a-chip (LOC) is a device that integrates one or several laboratory functions on a single integrated circuit (commonly called a ""chip"") of only millimeters to a few square centimeters to achieve automation and high-throughput screening. LOCs can handle extremely small fluid volumes down to less than pico-liters. Lab-on-a-chip devices are a subset of microelectromechanical systems (MEMS) devices and sometimes called ""micro total analysis systems"" (µTAS). LOCs may use microfluidics, the physics, manipulation and study of minute amounts of fluids. However, strictly regarded ""lab-on-a-chip"" indicates generally the scaling of single or multiple lab processes down to chip-format, whereas ""µTAS"" is dedicated to the integration of the total sequence of lab processes to perform chemical analysis. History After the invention of microtechnology (~1954) for realizing integrated semiconductor structures for microelectronic chips, these lithography-based technologies were soon applied in pressure sensor manufacturing (1966) as well. Due to further development of these usually CMOS-compatibility limited processes, a tool box became available to create micrometre or sub-micrometre sized mechanical structures in silicon wafers as well: the microelectromechanical systems (MEMS) era had started. Next to pressure sensors, airbag sensors and other mechanically movable structures, fluid handling devices were developed. Examples are: channels (capillary connections), mixers, valves, pumps and dosing devices. The first LOC analysis system was a gas chromatograph, developed in 1979 by S.C. Terry at Stanford University. However, only at the end of the 1980s and beginning of the 1990s did the LOC research start to seriously grow as a few research groups in Europe developed micropumps, flowsensors and the concepts for integrated fluid treatments for analysis systems. These µTAS concepts demonstrated that integration of pre-treatment steps, usually done at lab-scale, could extend t" https://en.wikipedia.org/wiki/Anthropology%20of%20food,"Anthropology of food is a sub-discipline of anthropology that connects an ethnographic and historical perspective with contemporary social issues in food production and consumption systems. 
Although early anthropological accounts often dealt with cooking and eating as part of ritual or daily life, food was rarely regarded as the central point of academic focus. This changed in the later half of the 20th century, when foundational work by Mary Douglas, Marvin Harris, Arjun Appadurai, Jack Goody, and Sidney Mintz cemented the study of food as a key insight into modern social life. Mintz is known as the ""Father of food anthropology"" for his 1985 work Sweetness and Power, which linked British demand for sugar with the creation of empire and exploitative industrial labor conditions. Research has traced the material and symbolic importance of food, as well as how they intersect. Examples of ongoing themes are food as a form of differentiation, commensality, and food's role in industrialization and globalizing labor and commodity chains. Several related and interdisciplinary academic programs exist in the US and UK (listed under Food studies institutions). ""Anthropology of food"" is also the name of a scientific journal dedicated to a social analysis of food practices and representations. Created in 1999 (first issue published in 2001), it is multilingual (English, French, Spanish, Portuguese). It is OpenAccess, and accessible through the portal OpenEdition Journals. It complies with academic standards for scientific journals (double-blind peer-review). It publishes a majority of papers in social anthropology, but is also open to contributions from historians, geographers, philosophers, economists. The first issues published include: 16 | 2022 Feeding genders 15 | 2021 Aesthetics, gestures and tastes in South and East Asia: crossed approaches on culinary arts 14 | 2019 Gastro-politics: Culture, Identity and Culinary Politics in Peru 13 | 2018 Tourism and Gastronomy" https://en.wikipedia.org/wiki/Nominal%20level,"Nominal level is the operating level at which an electronic signal processing device is designed to operate. The electronic circuits that make up such equipment are limited in the maximum signal they can handle and the low-level internally generated electronic noise they add to the signal. The difference between the internal noise and the maximum level is the device's dynamic range. The nominal level is the level that these devices were designed to operate at, for best dynamic range and adequate headroom. When a signal is chained with improper gain staging through many devices, clipping may occur or the system may operate with reduced dynamic range. In audio, a related measurement, signal-to-noise ratio, is usually defined as the difference between the nominal level and the noise floor, leaving the headroom as the difference between nominal and maximum output. It is important to realize that the measured level is a time average, meaning that the peaks of audio signals regularly exceed the measured average level. The headroom measurement defines how far the peak levels can stray from the nominal measured level before clipping. The difference between the peaks and the average for a given signal is the crest factor. Standards VU meters are designed to represent the perceived loudness of a passage of music, or other audio content, measuring in volume units. Devices are designed so that the best signal quality is obtained when the meter rarely goes above nominal. The markings are often in dB instead of ""VU"", and the reference level should be defined in the device's manual. In most professional recording and sound reinforcement equipment, the nominal level is . 
In semi-professional and domestic equipment, the nominal level is usually −10 dBV. This difference is due to the cost required to create larger power supplies and output higher levels. In broadcasting equipment, this is termed the Maximum Permitted Level, which is defined by European Broadcasting Union stand" https://en.wikipedia.org/wiki/Magnesium%20in%20biology,"Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA. Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA. In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis. Function A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments. Human health Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time. " https://en.wikipedia.org/wiki/Tsunami%20UDP%20Protocol,"The Tsunami UDP Protocol is a UDP-based protocol that was developed for high-speed file transfer over network paths that have a high bandwidth-delay product. Such protocols are needed because standard TCP does not perform well over paths with high bandwidth-delay products. Tsunami was developed at the Advanced Network Management Laboratory of Indiana University. Tsunami effects a file transfer by chunking the file into numbered blocks of 32 kilobyte. Communication between the client and server applications flows over a low bandwidth TCP connection, and the bulk data is transferred over UDP." https://en.wikipedia.org/wiki/Recurrence%20plot,"In descriptive statistics and chaos theory, a recurrence plot (RP) is a plot showing, for each moment in time, the times at which the state of a dynamical system returns to the previous state at , i.e., when the phase space trajectory visits roughly the same area in the phase space as at time . In other words, it is a plot of showing on a horizontal axis and on a vertical axis, where is the state of the system (or its phase space trajectory). 
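A minimal numerical sketch of this construction, assuming NumPy: form the matrix of pairwise distances between states and threshold it at a small epsilon, so that entry (i, j) marks a recurrence of the state at time j to the state at time i. The test trajectory, threshold and random seed below are illustrative assumptions, not from the source.

import numpy as np

# A simple one-dimensional "trajectory": a noisy sine, sampled at 200 points.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t) + 0.05 * rng.standard_normal(t.size)

# Pairwise distances between states, thresholded to give the binary
# recurrence matrix: R[i, j] = 1 exactly when the state at time j lies
# within eps of the state at time i.
eps = 0.1
dist = np.abs(x[:, None] - x[None, :])
R = (dist < eps).astype(int)

# The main diagonal is trivially recurrent; lines parallel to it reflect
# the periodic return of the sine to earlier states.
print(f"matrix shape: {R.shape}, recurrence rate: {R.mean():.3f}")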
Background Natural processes can have a distinct recurrent behaviour, e.g. periodicities (as seasonal or Milankovich cycles), but also irregular cyclicities (as El Niño Southern Oscillation, heart beat intervals). Moreover, the recurrence of states, in the meaning that states are again arbitrarily close after some time of divergence, is a fundamental property of deterministic dynamical systems and is typical for nonlinear or chaotic systems (cf. Poincaré recurrence theorem). The recurrence of states in nature has been known for a long time and has also been discussed in early work (e.g. Henri Poincaré 1890). Detailed description One way to visualize the recurring nature of states by their trajectory through a phase space is the recurrence plot, introduced by Eckmann et al. (1987). Often, the phase space does not have a low enough dimension (two or three) to be pictured, since higher-dimensional phase spaces can only be visualized by projection into the two or three-dimensional sub-spaces. However, making a recurrence plot enables us to investigate certain aspects of the m-dimensional phase space trajectory through a two-dimensional representation. At a recurrence the trajectory returns to a location in phase space it has visited before up to a small error (i.e., the system returns to a state that it has before). The recurrence plot represents the collection of pairs of times such recurrences, i.e., the set of with , with and discrete points of time and the state of the system at time (location of the trajectory " https://en.wikipedia.org/wiki/Slashdot%20effect,"The Slashdot effect, also known as slashdotting, occurs when a popular website links to a smaller website, causing a massive increase in traffic. This overloads the smaller site, causing it to slow down or even temporarily become unavailable. Typically, less robust sites are unable to cope with the huge increase in traffic and become unavailable – common causes are lack of sufficient data bandwidth, servers that fail to cope with the high number of requests, and traffic quotas. Sites that are maintained on shared hosting services often fail when confronted with the Slashdot effect. This has the same effect as a denial-of-service attack, albeit accidentally. The name stems from the huge influx of web traffic which would result from the technology news site Slashdot linking to websites. The term flash crowd is a more generic term. The original circumstances have changed, as flash crowds from Slashdot were reported in 2005 to be diminishing due to competition from similar sites, and the general adoption of elastically scalable cloud hosting platforms. Terminology The term ""Slashdot effect"" refers to the phenomenon of a website becoming virtually unreachable because too many people are hitting it after the site was mentioned in an interesting article on the popular Slashdot news service. It was later extended to describe any similar effect from being listed on a popular site, similar to the more generic term, flash crowd, which is a more appropriate term. The term ""flash crowd"" was coined in 1973 by Larry Niven in his science fiction short story, Flash Crowd. It predicted that a consequence of inexpensive teleportation would be huge crowds materializing almost instantly at the sites of interesting news stories. Twenty years later, the term became commonly used on the Internet to describe exponential spikes in website or server usage when it passes a certain threshold of popular interest. 
This effect was anticipated years earlier in 1956 in Alfred Bester's novel The" https://en.wikipedia.org/wiki/List%20of%20computer%20algebra%20systems,"The following tables provide a comparison of computer algebra systems (CAS). A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language. A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel. General These computer algebra systems are sometimes combined with ""front end"" programs that provide a better user interface, such as the general-purpose GNU TeXmacs. Functionality Below is a summary of significantly developed symbolic functionality in each of the systems. via SymPy
  • via qepcad optional package Those which do not ""edit equations"" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed. Operating system support The software can run under their respective operating systems natively without emulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available. Graphing calculators Some graphing calculators have CAS features. See also :Category:Computer algebra systems Comparison of numerical-analysis software Comparison of statistical packages List of information graphics software List of numerical-analysis software List of numerical libraries List of statistical software Mathematical software Web-based simulation" https://en.wikipedia.org/wiki/Novel%20food,"A novel food is a type of food that does not have a significant history of consumption or is produced by a method that has not previously been used for food. Designer food Designer food is a type of novel food that has not existed on any regional or global consumer market before. Instead it has been ""designed"" using biotechnological / bioengineering methods (e.g. genetically modified food) or ""enhanced"" using engineered additives. Examples like designer egg, designer milk, designer grains, probiotics, and enrichment with micro- and macronutrients and designer proteins have been cited. The enhancement process is called food fortification or nutrification. Designer novel food often comes with sometimes unproven health claims (""superfoods""). Designer food is distinguished from food design, the aesthetic arrangement of food items for marketing purposes. European Union Novel foods or novel food ingredients have no history of ""significant"" consumption in the European Union prior to 15 May 1997. Any food or food ingredient that falls within this definition must be authorised according to the Novel Food legislation, Regulation (EC) No 258/97 of the European Parliament and of the Council. Applicants can consult the guidance document compiled by the European Commission, which highlights the scientific information and the safety assessment report required in each case. The Novel Food regulation stipulates that foods and food ingredients falling within the scope of this regulation must not: present a danger for the consumer; mislead the consumer; or differ from foods or food ingredients which they are intended to replace to such an extent that their normal consumption would be nutritionally disadvantageous for the consumer. There are two possible routes for authorization under the Novel Food legislation: a full application and a simplified application. The simplified application route is only applicable where the EU member national competent authority, e.g. Food Standard" https://en.wikipedia.org/wiki/Planimeter,"A planimeter, also known as a platometer, is a measuring instrument used to determine the area of an arbitrary two-dimensional shape. Construction There are several kinds of planimeters, but all operate in a similar way. The precise way in which they are constructed varies, with the main types of mechanical planimeter being polar, linear, and Prytz or ""hatchet"" planimeters. 
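A numerical counterpart to the planimeter's task of recovering an area from a shape's traced boundary: if the boundary is sampled as a closed polygon, the shoelace formula (a discrete boundary line integral, in the spirit of the Green's-theorem argument commonly used to explain the instrument) gives the enclosed area. A short Python sketch; the helper name and sample polygon are illustrative only.

def shoelace_area(points):
    """Signed area of a closed polygon from its boundary vertices, via the
    shoelace formula (a discrete line integral around the boundary)."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        area += x0 * y1 - x1 * y0
    return area / 2.0

# A 3 x 2 rectangle traced counter-clockwise: expected area 6.
rectangle = [(0, 0), (3, 0), (3, 2), (0, 2)]
print(shoelace_area(rectangle))   # 6.0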
The Swiss mathematician Jakob Amsler-Laffon built the first modern planimeter in 1854, the concept having been pioneered by Johann Martin Hermann in 1814. Many developments followed Amsler's famous planimeter, including electronic versions. The Amsler (polar) type consists of a two-bar linkage. At the end of one link is a pointer, used to trace around the boundary of the shape to be measured. The other end of the linkage pivots freely on a weight that keeps it from moving. Near the junction of the two links is a measuring wheel of calibrated diameter, with a scale to show fine rotation, and worm gearing for an auxiliary turns counter scale. As the area outline is traced, this wheel rolls on the surface of the drawing. The operator sets the wheel, turns the counter to zero, and then traces the pointer around the perimeter of the shape. When the tracing is complete, the scales at the measuring wheel show the shape's area. When the planimeter's measuring wheel moves perpendicular to its axis, it rolls, and this movement is recorded. When the measuring wheel moves parallel to its axis, the wheel skids without rolling, so this movement is ignored. That means the planimeter measures the distance that its measuring wheel travels, projected perpendicularly to the measuring wheel's axis of rotation. The area of the shape is proportional to the number of turns through which the measuring wheel rotates. The polar planimeter is restricted by design to measuring areas within limits determined by its size and geometry. However, the linear type has no restriction in one dimension, because it can roll. Its " https://en.wikipedia.org/wiki/Diebold%2010xx,"The Diebold 10xx (or Modular Delivery System, MDS) series is a third and fourth generation family of automated teller machines manufactured by Diebold. History Introduced in 1985 as a successor to the TABS 9000 series, the 10xx family of ATMs was re-styled to the ""i Series"" variant in 1991, the ""ix Series"" variant in 1994, and finally replaced by the Diebold Opteva series of ATMs in 2003. The 10xx series of ATMs were also marketed under the InterBold brand; a joint venture between IBM and Diebold. IBM machines were marketed under the IBM 478x series. Not all of the 10xx series of ATMs were offered by IBM. Diebold stopped producing the 1000-series ATM's around 2008. Listing of 10xx Series Models Members of the 10xx Series included: MDS Series - Used a De La Rue cash dispensing mechanism 1060 - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1062 - Multi-function, indoor lobby unit 1072 - Multi-function, exterior ""through-the-wall"" unit i Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism 1060i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1061i - Mono-function, indoor counter-top unit with single cash cartridge cash dispenser 1062i - Multi-function, indoor lobby unit 1064i - Mono-function, indoor cash dispenser 1070i - Multi-function, exterior ""through-the-wall"" unit with a longer ""top-hat throat"" 1072i - Multi-function, exterior ""through-the-wall"" unit 1073i - Multi-function, exterior ""through-the-wall"" unit, modified for use while sitting in a car 1074i - Multi-function, exterior unit, designed as a stand-alone unit for use in a drive-up lane. 
ix Series - Used an ExpressBus Multi Media Dispenser (MMD) cash dispensing mechanism 1062ix - Multi-function, indoor lobby unit 1063ix - Mono-function, indoor cash dispenser with a smaller screen than the 1064ix 1064ix - Mono-function, indoor cash dispenser 1070ix - Multi-function, exterior ""through-the-wall"" unit 1071" https://en.wikipedia.org/wiki/Responsiveness,"Responsiveness as a concept of computer science refers to the specific ability of a system or functional unit to complete assigned tasks within a given time. For example, it would refer to the ability of an artificial intelligence system to understand and carry out its tasks in a timely fashion. In the Reactive principle, Responsiveness is one of the fundamental criteria along with resilience, elasticity and message driven. It is one of the criteria under the principle of robustness (from a v principle). The other three are observability, recoverability, and task conformance. Vs performance Software which lacks a decent process management can have poor responsiveness even on a fast machine. On the other hand, even slow hardware can run responsive software. It is much more important that a system actually spend the available resources in the best way possible. For instance, it makes sense to let the mouse driver run at a very high priority to provide fluid mouse interactions. For long-term operations, such as copying, downloading or transforming big files the most important factor is to provide good user-feedback and not the performance of the operation since it can quite well run in the background, using only spare processor time. Delays Long delays can be a major cause of user frustration, or can lead the user to believe the system is not functioning, or that a command or input gesture has been ignored. Responsiveness is therefore considered an essential usability issue for human-computer-interaction (HCI). The rationale behind the responsiveness principle is that the system should deliver results of an operation to users in a timely and organized manner. The frustration threshold can be quite different, depending on the situation and the fact that user interface depends on local or remote systems to show a visible response. There are at least three user tolerance thresholds (i.e.): 0.1 seconds under 0.1 seconds the response is perceived as instantaneous" https://en.wikipedia.org/wiki/Intersex%20%28biology%29,"Intersex is a general term for an organism that has sex characteristics that are between male and female. It typically applies to a minority of members of gonochoric animal species such as mammals (as opposed to hermaphroditic species in which the majority of members can have both male and female sex characteristics). Such organisms are usually sterile. Intersexuality can occur due to both genetic and environmental factors and has been reported in mammals, fishes, nematodes, and crustaceans. Mammals Intersex can also occur in non-human mammals such as pigs, with it being estimated that 0.1% to 1.4% of pigs are intersex. In Vanuatu, Narave pigs are sacred intersex pigs that are found on Malo Island. An analysis of Navare pig mitochondrial DNA by Lum et al. (2006) found that they are descended from Southeast Asian pigs. At least six different mole species have an intersex adaption where by the female mole has an ovotestis, ""a hybrid organ made up of both ovarian and testicular tissue. 
This effectively makes them intersex, giving them an extra dose of testosterone to make them just as muscular and aggressive as male moles"". The ovarian part of the ovotestis is reproductively functional. Intersexuality in humans is relatively rare. Depending on the definition, the prevalence of intersex among humans have been reported to range from 0.018% to up to 1.7% of humans. Nematodes Intersex is known to occur in all main groups of nematodes. Most of them are functionally female. Male intersexes with female characteristics have been reported but are less common. Fishes Gonadal intersex also occurs in fishes, where the individual has both ovarian and testicular tissue. Although it is a rare anomaly among gonochoric fishes, it is a transitional state in fishes that are protandric or protogynous. Intersexuality has been reported in 23 fish families. Crustaceans The oldest evidence for intersexuality in crustaceans comes from fossils dating back 70 million years ago. Inte" https://en.wikipedia.org/wiki/European%20Union%20food%20quality%20scandal,"The European Union food quality scandal is a controversy claiming that certain food brands and items targeted at Central and Eastern European Union countries' markets are of lower quality than their exact equivalent produced for the Western European Union markets. European Commission President Jean-Claude Juncker acknowledged the issue in his State of the Union address pledging funding to help national food authorities test the inferior products and start to tackle the food inequality. In April 2018 EU Justice and Consumers Commissioner Věra Jourová stated that ""“We will step up the fight against dual food quality. We have amended the Unfair Commercial Practice Directive to make it black and white that dual food quality is forbidden.""" https://en.wikipedia.org/wiki/Energy%20%28signal%20processing%29,"In signal processing, the energy of a continuous-time signal x(t) is defined as the area under the squared magnitude of the considered signal i.e., mathematically Unit of will be (unit of signal)2. And the energy of a discrete-time signal x(n) is defined mathematically as Relationship to energy in physics Energy in this context is not, strictly speaking, the same as the conventional notion of energy in physics and the other sciences. The two concepts are, however, closely related, and it is possible to convert from one to the other: where Z represents the magnitude, in appropriate units of measure, of the load driven by the signal. For example, if x(t) represents the potential (in volts) of an electrical signal propagating across a transmission line, then Z would represent the characteristic impedance (in ohms) of the transmission line. The units of measure for the signal energy would appear as volt2·seconds, which is not dimensionally correct for energy in the sense of the physical sciences. After dividing by Z, however, the dimensions of E would become volt2·seconds per ohm, which is equivalent to joules, the SI unit for energy as defined in the physical sciences. Spectral energy density Similarly, the spectral energy density of signal x(t) is where X(f) is the Fourier transform of x(t). For example, if x(t) represents the magnitude of the electric field component (in volts per meter) of an optical signal propagating through free space, then the dimensions of X(f) would become volt·seconds per meter and would represent the signal's spectral energy density (in volts2·second2 per meter2) as a function of frequency f (in hertz). 
Again, these units of measure are not dimensionally correct in the true sense of energy density as defined in physics. Dividing by Zo, the characteristic impedance of free space (in ohms), the dimensions become joule-seconds per meter2 or, equivalently, joules per meter2 per hertz, which is dimensionally correct in SI" https://en.wikipedia.org/wiki/Secure%20transmission,"In computer science, secure transmission refers to the transfer of data such as confidential or proprietary information over a secure channel. Many secure transmission methods require a type of encryption. The most common email encryption is called PKI. In order to open the encrypted file, an exchange of key is done. Many infrastructures such as banks rely on secure transmission protocols to prevent a catastrophic breach of security. Secure transmissions are put in place to prevent attacks such as ARP spoofing and general data loss. Software and hardware implementations which attempt to detect and prevent the unauthorized transmission of information from the computer systems to an organization on the outside may be referred to as Information Leak Detection and Prevention (ILDP), Information Leak Prevention (ILP), Content Monitoring and Filtering (CMF) or Extrusion Prevention systems and are used in connection with other methods to ensure secure transmission of data. Secure transmission over wireless infrastructure WEP is a deprecated algorithm to secure IEEE 802.11 wireless networks. Wireless networks broadcast messages using radio, so are more susceptible to eavesdropping than wired networks. When introduced in 1999, WEP was intended to provide confidentiality comparable to that of a traditional wired network. A later system, called Wi-Fi Protected Access (WPA) has since been developed to provide stronger security. Web-based secure transmission Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide secure communications on the Internet for such things as web browsing, e-mail, Internet faxing, instant messaging and other data transfers. There are slight differences between SSL and TLS, but they are substantially the same." https://en.wikipedia.org/wiki/Stretchable%20electronics,"Stretchable electronics, also known as elastic electronics or elastic circuits, is a group of technologies for building electronic circuits by depositing or embedding electronic devices and circuits onto stretchable substrates such as silicones or polyurethanes, to make a completed circuit that can experience large strains without failure. In the simplest case, stretchable electronics can be made by using the same components used for rigid printed circuit boards, with the rigid substrate cut (typically in a serpentine pattern) to enable in-plane stretchability. However, many researchers have also sought intrinsically stretchable conductors, such as liquid metals. One of the major challenges in this domain is designing the substrate and the interconnections to be stretchable, rather than flexible (see Flexible electronics) or rigid (Printed Circuit Boards). Typically, polymers are chosen as substrates or material to embed. When bending the substrate, the outermost radius of the bend will stretch (see Strain in an Euler–Bernoulli beam, subjecting the interconnects to high mechanical strain. Stretchable electronics often attempts biomimicry of human skin and flesh, in being stretchable, whilst retaining full functionality. 
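Returning for a moment to the signal-energy definitions above: for a discrete-time signal the energy is the sum of squared magnitudes, and Parseval's theorem lets the same number be recovered from the DFT spectrum, which is one way to sanity-check the spectral-energy relation numerically. A sketch assuming NumPy; the test signal is arbitrary.

import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)                   # an arbitrary discrete-time signal

energy_time = np.sum(np.abs(x) ** 2)            # E = sum over n of |x[n]|^2
X = np.fft.fft(x)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)   # Parseval: same energy from the spectrum

print(energy_time, energy_freq)                 # the two values agree to rounding error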
The design space for products is opened up with stretchable electronics, including sensitive electronic skin for robotic devices and in vivo implantable sponge-like electronics. Strechable Skin electronics Mechanical Properties of Skin Skin is composed of collagen, keratin, and elastin fibers, which provide robust mechanical strength, low modulus, tear resistance, and softness. The skin can be considered as a bilayer of epidermis and dermis. The epidermal layer has a modulus of about 140-600 kPa and a thickness of 0.05-1.5 mm. Dermis has a modulus of 2-80 kPa and a thickness of 0.3–3 mm. This bilayer skin exhibits an elastic linear response for strains less than 15% and a non linear response at larger strains. To achieve conformability, it is p" https://en.wikipedia.org/wiki/Systematic%20Census%20of%20Australian%20Plants,"The Systematic census of Australian plants, with chronologic, literary and geographic annotations, more commonly known as the Systematic Census of Australian Plants, also known by its standard botanic abbreviation Syst. Census Austral. Pl., is a survey of the vascular flora of Australia prepared by Government botanist for the state of Victoria Ferdinand von Mueller and published in 1882. Von Mueller describes the development of the census in the preface of the volume as an extension of the seven volumes of the Flora Australiensis written by George Bentham. A new flora was necessary since as more areas of Australia were explored and settled, the flora of the island-continent became better collected and described. The first census increased the number of described species from the 8125 in Flora Australiensis to 8646. The book records all the known species indigenous to Australia and Norfolk Island; with records of species distribution. Von Mueller noted that by 1882 it had become difficult to distinguish some introduced species from native ones: The lines of demarkation between truly indigenous and more recently immigrated plants can no longer in all cases be drawn with precision; but whereas Alchemilla vulgaris and Veronica serpyllifolia were found along with several European Carices in untrodden parts of the Australian Alps during the author's earliest explorations, Alchemilla arvensis and Veronica peregrina were at first only noticed near settlements. The occurrence of Arabis glabra, Geum urbanum, Agiimonia eupatoria, Eupatorium cannabinum, Cavpesium cernuum and some others may therefore readily be disputed as indigenous, and some questions concerning the nativity of various of our plants will probably remain for ever involved in doubts. In 1889 an updated edition of the census was published, the Second Systematic Census increased the number of described species to 8839. Von Mueller dedicated both works to Joseph Dalton Hooker and Augustin Pyramus de Candolle. " https://en.wikipedia.org/wiki/Sensitivity%20index,"The sensitivity index or discriminability index or detectability index is a dimensionless statistic used in signal detection theory. A higher index indicates that the signal can be more readily detected. Definition The discriminability index is the separation between the means of two distributions (typically the signal and the noise distributions), in units of the standard deviation. Equal variances/covariances For two univariate distributions and with the same standard deviation, it is denoted by ('dee-prime'): . In higher dimensions, i.e. 
with two multivariate distributions with the same variance-covariance matrix , (whose symmetric square-root, the standard deviation matrix, is ), this generalizes to the Mahalanobis distance between the two distributions: , where is the 1d slice of the sd along the unit vector through the means, i.e. the equals the along the 1d slice through the means. For two bivariate distributions with equal variance-covariance, this is given by: , where is the correlation coefficient, and here and , i.e. including the signs of the mean differences instead of the absolute. is also estimated as . Unequal variances/covariances When the two distributions have different standard deviations (or in general dimensions, different covariance matrices), there exist several contending indices, all of which reduce to for equal variance/covariance. Bayes discriminability index This is the maximum (Bayes-optimal) discriminability index for two distributions, based on the amount of their overlap, i.e. the optimal (Bayes) error of classification by an ideal observer, or its complement, the optimal accuracy : , where is the inverse cumulative distribution function of the standard normal. The Bayes discriminability between univariate or multivariate normal distributions can be numerically computed (Matlab code), and may also be used as an approximation when the distributions are close to normal. is a positive-definite statistical d" https://en.wikipedia.org/wiki/Retort%20pouch,"A retort pouch or retortable pouch is a type of food packaging made from a laminate of flexible plastic and metal foils. It allows the sterile packaging of a wide variety of food and drink handled by aseptic processing, and is used as an alternative to traditional industrial canning methods. Retort pouches are used in field rations, space food, fish products, camping food, instant noodles, and brands such as Capri-Sun and Tasty Bite. Some varieties have a bottom gusset and are known as stand-up pouches. Origin The retort pouch was invented by the United States Army Natick Soldier Research, Development and Engineering Center, Reynolds Metals Company, and Continental Flexible Packaging, who jointly received the Food Technology Industrial Achievement Award for its invention in 1978. Construction A retort pouch is constructed from a flexible metal-plastic laminate that is able to withstand the thermal processing used for sterilization. The food is first prepared, either raw or cooked, and then sealed into the retort pouch. The pouch is then heated to 240-250 °F (116-121 °C) for several minutes under high pressure inside a retort or autoclave machine. The food inside is cooked in a similar way to pressure cooking. This process reliably kills all commonly occurring microorganisms (particularly Clostridium botulinum), preventing it from spoiling. The packaging process is very similar to canning, except that the package itself is flexible. The lamination structure does not allow permeation of gases from outside into the pouch. The retort pouch construction varies from one application to another, as a liquid product needs different barrier properties than a dry product, and similarly an acidic product needs different chemical resistance than a basic product. 
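Tying the discriminability-index definitions above to numbers: for two equal-variance univariate normals, d' is the separation of the means in units of the common standard deviation, and the corresponding Bayes-optimal accuracy for an ideal observer is the standard normal CDF evaluated at d'/2. A sketch assuming SciPy for the normal CDF; the means and sigma are made-up illustrative values.

from scipy.stats import norm

def d_prime(mu_signal, mu_noise, sigma):
    """Discriminability index for two univariate normals with a common sigma."""
    return (mu_signal - mu_noise) / sigma

dp = d_prime(mu_signal=1.5, mu_noise=0.0, sigma=1.0)
accuracy = norm.cdf(dp / 2)     # Bayes-optimal accuracy in the equal-variance case
print(f"d' = {dp:.2f}, optimal accuracy = {accuracy:.3f}")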
Some different layers used in retort pouches include: polyester (PET) – provides a gloss and rigid layer, may be printed inside; nylon (bi-oriented polyamide) – provides puncture resistance; aluminum (Al) – provides" https://en.wikipedia.org/wiki/Sznajd%20model,"The Sznajd model or United we stand, divided we fall (USDF) model is a sociophysics model introduced in 2000 to gain fundamental understanding about opinion dynamics. The Sznajd model implements a phenomenon called social validation and thus extends the Ising spin model. In simple words, the model states: Social validation: If two people share the same opinion, their neighbors will start to agree with them. Discord destroys: If a block of adjacent persons disagree, their neighbors start to argue with them. Mathematical formulation For simplicity, one assumes that each individual i has an opinion S_i which might be Boolean (S_i = −1 for no, S_i = +1 for yes) in its simplest formulation, which means that each individual either agrees or disagrees with a given question. In the original 1D-formulation, each individual has exactly two neighbors, just like beads on a bracelet. At each time step a pair of neighboring individuals S_i and S_{i+1} is chosen at random to change their nearest neighbors' opinions (or: Ising spins) S_{i−1} and S_{i+2} according to two dynamical rules: If S_i = S_{i+1}, then S_{i−1} = S_i and S_{i+2} = S_i. This models social validation: if two people share the same opinion, their neighbors will change their opinion to match. If S_i ≠ S_{i+1}, then S_{i−1} = S_{i+1} and S_{i+2} = S_i. Intuitively: if the given pair of people disagrees, each neighbor adopts the opinion of the pair member farther away from it. Findings for the original formulations In a closed (1-dimensional) community, two steady states are always reached, namely complete consensus (which is called the ferromagnetic state in physics) or stalemate (the antiferromagnetic state). Furthermore, Monte Carlo simulations showed that these simple rules lead to complicated dynamics, in particular to a power law in the decision time distribution with an exponent of −1.5. Modifications The final (antiferromagnetic) state of alternating all-on and all-off is unrealistic to represent the behavior of a community. It would mean that the complete population uniformly changes their opinion from one time step to the next. For this reason an alternative dynamical ru" https://en.wikipedia.org/wiki/Starch%20gelatinization,"Starch gelatinization is a process of breaking down the intermolecular bonds of starch molecules in the presence of water and heat, allowing the hydrogen bonding sites (the hydroxyl hydrogen and oxygen) to engage more water. This irreversibly dissolves the starch granule in water. Water acts as a plasticizer. Gelatinization Process Three main processes happen to the starch granule: granule swelling, crystallite and double-helical melting, and amylose leaching. Granule swelling: During heating, water is first absorbed in the amorphous space of starch, which leads to a swelling phenomenon. Melting of double helical structures: Water then enters via amorphous regions into the tightly bound areas of double helical structures of amylopectin. At ambient temperatures these crystalline regions do not allow water to enter. Heat causes such regions to become diffuse; the amylose chains begin to dissolve and separate into an amorphous form, and the number and size of crystalline regions decrease. Under the microscope in polarized light, starch loses its birefringence and its extinction cross. 
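To make the two Sznajd update rules stated above concrete, here is a brief Python sketch (illustrative only; the function and variable names are not drawn from the source text) that applies them on a closed 1D ring of ±1 opinions, the "beads on a bracelet" picture.

# Minimal sketch of the original 1D Sznajd rules, assuming a ring of +/-1 opinions.
import random

def sznajd_step(spins, rng=random):
    # Pick a random adjacent pair (i, i+1) and update the outer neighbors i-1 and i+2.
    n = len(spins)
    i = rng.randrange(n)
    j = (i + 1) % n
    if spins[i] == spins[j]:
        # Social validation: both outer neighbors adopt the pair's shared opinion.
        spins[(i - 1) % n] = spins[i]
        spins[(j + 1) % n] = spins[i]
    else:
        # Discord: each outer neighbor adopts the opinion of the farther pair member.
        spins[(i - 1) % n] = spins[j]
        spins[(j + 1) % n] = spins[i]

random.seed(0)
spins = [random.choice([-1, 1]) for _ in range(50)]
for _ in range(20000):
    sznajd_step(spins)
print(spins)

The two absorbing states of this dynamics are complete consensus (all +1 or all −1) and the alternating antiferromagnetic pattern, matching the steady states described in the passage above.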
Amylose Leaching: Penetration of water thus increases the randomness in the starch granule structure and causes swelling; eventually amylose molecules leach into the surrounding water and the granule structure disintegrates. The gelatinization temperature of starch depends upon plant type, the amount of water present, pH, the types and concentrations of salt, sugar, fat and protein in the recipe, as well as the starch derivatisation technology used. Some types of unmodified native starches start swelling at 55 °C, other types at 85 °C. The gelatinization temperature of modified starch depends on, for example, the degree of cross-linking, acid treatment, or acetylation. The gelatinization temperature can also be modified by genetic manipulation of starch synthase genes. Gelatinization temperature also depends on the amount of damaged starch granules; these will swell faster. Damaged starch can be