The following abstracts are samples from a portfolio of published book chapters, articles, and papers. In many cases full text versions and supplemental materials are available at harvard.academia.edu//kleelerner
After a disaster, the role of incident commanders and other decision makers must eventually shift from assisting the coordination of emergency search, rescue, and relief operations to positioning resources and preparing personnel for integrated recovery operations. more
Analysis by specialists (medical, government, security personnel, etc.) enhances situational awareness, reduces situational analysis uncertainties, boosts the capacity to act and react with what Howitt and Leonard describe as situational anticipation, and enables crisis managers to more quickly recognize and respond to novelty in crisis situations. more
In April 2010, an oil well blowout in the Gulf of Mexico off the coast of Louisiana caused an explosion on the Deepwater Horizon offshore oil drilling rig operated by BP (formerly British Petroleum) and a vast oil spill into the Gulf waters that lasted for 87 days before the well was capped. The explosion killed 11 workers and injured 17. The incident is variously called the Deepwater Horizon oil spill, the BP oil spill, the Gulf of Mexico oil spill, and the Macondo blowout. Within 24 hours, the Coast Guard determined the incident had the potential to become a major environmental disaster for the United States. more
Lerner, K. Lee. "Radiation Exposure," in Lerner, B.W., and Lerner, K. Lee, eds. Worldmark Global Health and Medicine Issues (WMGH). Cengage, 2015.
Radiation exposure occurs any time that electromagnetic rays or fast-moving particles interact with living tissue. Ionizing radiation is particularly damaging to tissue; examples include x rays, gamma radiation, and fast-moving subatomic particles such as neutrons. Biological damage caused by exposure to ionizing radiation ranges from mild tissue burns to cancer, genetic damage, and ultimately, death.
While radiation in the form of heat, visible light, and even ultraviolet light is essential to life, the word "radiation" is often used to refer only to those emissions which can damage or kill living things. Such harm is specifically attributed to radioactive particles as well as the electromagnetic rays with frequencies higher than visible light (ultraviolet, x-rays, gamma rays). Harmful electromagnetic radiation is also known as ionizing radiation because it strips atoms of one or more of their electrons, leaving highly reactive ions called free radicals which can damage tissue or genetic material.
There are, however, potential benefits of controlled exposures to certain kinds of radiation, which can be used for the detection, diagnosis, and treatment of certain diseases. (download to read more)
The advent of molecular technologies and the application of genetic identification in clinical and forensic microbiology have greatly improved the capability of laboratories to detect and identify organisms used in biological weapons. Not only does this ability enhance national defense capabilities, but it also supports the development and administration of countermeasures, including vaccines.
The genetic identification of microorganisms utilizes molecular technologies to evaluate specific regions of the genome and to determine the genus, species, or strain of a microorganism. This work grew out of the similar, highly successful applications in human identification using the same basic techniques. Thus, the genetic identification of microorganisms also has been referred to as microbial fingerprinting, and it is a key way in which bioinformatics can assist in the identification of pathogens….
Genetic technologies are especially useful in the detection of biological weapons. Of particular note is the polymerase chain reaction, or PCR, which uses selected enzymes to make copies of genetic material. If the genetic material is unique to the microorganism (e.g., a gene encoding a toxin), then investigators can use PCR to detect a specific microorganism from among the other organisms present in the sample. Traditional PCR detects amplified genetic material only at the end point of the process (the plateau stage); advances in the technology, however, led to real-time PCR detection. This gave scientists the ability to collect data in the exponential growth phase, making DNA and RNA quantitation more efficient and accurate, and facilitated the development of hand-held detectors. Hand-held PCR detectors used by United Nations inspectors in Iraq during their weapons inspection efforts of 2002/2003 were sensitive enough to detect a single living Bacillus anthracis bacterium (the agent of anthrax) in an average kitchen-sized room. (more)
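The quantitation logic behind real-time PCR can be sketched in a few lines. The toy simulation below is my own illustration (not drawn from the article or any instrument's software); it shows why sampling the exponential phase matters: with near-perfect doubling each cycle, a 1,000-fold difference in starting template shifts the threshold-crossing cycle (the Ct value) by about ten cycles.

```python
# Illustrative sketch: why real-time PCR enables quantitation.
# In the exponential phase, product (at best) doubles each cycle, so the
# cycle at which fluorescence crosses a threshold encodes starting copies.

def amplify(start_copies, efficiency=2.0, cycles=40, plateau=1e12):
    """Return product copies after each cycle, saturating at a plateau."""
    copies, curve = float(start_copies), []
    for _ in range(cycles):
        copies = min(copies * efficiency, plateau)  # exponential, then plateau
        curve.append(copies)
    return curve

def ct_value(curve, threshold=1e9):
    """Cycle number at which the signal first crosses the threshold."""
    for cycle, signal in enumerate(curve, start=1):
        if signal >= threshold:
            return cycle
    return None  # never crossed: target likely absent

# Two samples differing 1000-fold in starting template:
ct_a = ct_value(amplify(1e6))   # crosses at cycle 10
ct_b = ct_value(amplify(1e3))   # crosses at cycle 20
print(ct_a, ct_b, 2 ** (ct_b - ct_a))  # delta-Ct of 10 implies ~2**10 = 1024x
```

Endpoint (plateau-stage) measurement discards exactly this cycle-by-cycle information, which is why traditional PCR was far less useful for quantitation.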
On the heels of a National Academy of Sciences report critical of the FBI's investigation of the 2001 anthrax attacks that claimed five lives, prominent anthrax researchers are preparing to publicly slam the evidence the bureau relied upon in posthumously blaming the attacks entirely on a civilian researcher in the Department of the Army. The Department of Justice (DOJ) and FBI can also expect continued calls in Congress to establish a 9/11-like commission to investigate the case. more
Note: A full text copy of the NAS report mentioned in this article, "Review of the Scientific Approaches Used During the FBI's Investigation of the 2001 Anthrax Letters" is available as a full text download. more
Lerner BW, Lerner KL. SARS and Global Public Health Security. Government Information Quarterly. Elsevier, 2005. (online) Draft Copy. Original version in: Lerner, K. Lee and B. Wilmoth Lerner. Encyclopedia of Espionage, Intelligence, and Security, Thomson Gale. 2004.
Severe Acute Respiratory Syndrome (SARS) was the first emergent and highly transmissible viral disease to appear among humans during the twenty-first century. Caused by a coronavirus (SARS-CoV), SARS is far more lethal than the pandemic 2009 H1N1 influenza (caused by a Type A H1N1 influenza virus). Although less lethal than the H5N1 avian flu virus, the SARS virus is more transmissible among humans than the H5N1 virus.
The first known case of SARS was traced to a November 2002 case in Guangdong province, China. By mid-February 2003, Chinese health officials tracked more than 300 cases, including five deaths in Guangdong province from what was at the time described as an acute respiratory syndrome. Chinese health officials initially remained silent about the outbreak, and no special precautions were taken to limit travel or prevent the spread of the disease. The world health community, therefore, had no chance to institute early testing, isolation, and quarantine measures that might have prevented the subsequent global spread of the disease.
Under a new generation of political leadership, Chinese officials subsequently apologized for a slow and inefficient response to the SARS outbreak. Allegations that officials covered up the extent of the spread of the disease led to the dismissal of several officials, including China's public health minister and the mayor of Beijing.
In many regards, the SARS outbreak revealed what was effective in terms of public health responses, readiness, and resources. The outbreak also spurred reforms in the International Health Regulations (IHR) designed to increase both surveillance and reporting of infectious diseases and to enhance cooperation in preventing the international spread of disease. (more)
Lerner KL. North Korean Nuclear and Missile Programs. Government Information Quarterly. Elsevier, 2005. (online) DRAFT COPY. Originally: Lerner, K. Lee and B. Wilmoth Lerner. North Korean Nuclear Program. Encyclopedia of Espionage, Intelligence, and Security, Thomson Gale. 2004.
The government of the Democratic People's Republic of Korea (DPRK, also commonly known as North Korea) is a strict and isolationist dictatorship ruled since 2011 by Kim Jong-un. Despite decades of international diplomatic efforts, superior U.S. military capacity, United Nations prohibitions, and attempts at both international aid and sanctions aimed at eliminating its nuclear and missile programs, North Korea continues to develop increasingly sophisticated nuclear weapons and higher capacity missiles. (more)
Lerner KL. Iranian Nuclear and Missile Programs. Government Information Quarterly. Elsevier, 2005. (online) DRAFT COPY. Originally: Lerner, K. Lee and B. Wilmoth Lerner. Iranian Nuclear Program. Encyclopedia of Espionage, Intelligence, and Security, Thomson Gale. 2003.
Iran's first nuclear technology was obtained as a gift from the United States under the Atoms for Peace program begun by President Dwight Eisenhower in 1953. Although intended to produce a source of power for energy and non-military uses, the technologies required to produce nuclear power and nuclear weapons largely overlap. For decades, there has been speculation about whether Iran is trying to build nuclear weapons. The building blocks are clearly in place, but intelligence agencies in the United States, France, Germany, Israel, and the United Kingdom vary in their estimates about how long it could take Iran to put the pieces together to produce a nuclear weapon.
Both peaceful and military uses require enrichment technology and procedures that extract and concentrate uranium-235 (235U), an isotope capable of sustaining a nuclear chain reaction, from raw uranium that is more than 99 percent uranium-238 (238U), an isotope incapable of sustaining the chain reaction needed to produce a nuclear explosion. The percentage of enrichment required for use in weapons is much higher than the levels needed to produce nuclear reactor fuel.
The Atoms for Peace program eventually came to be seen as a mistake by the United States, which has sought to recover the nuclear fuel dispersed around the world by the program. It has not always been able to do so because of political change.
By 1979, when the United States-backed dictator of Iran, the Shah Mohammed Reza Pahlavi (1919-1980), was overthrown by fundamentalist Islamist revolutionaries, Iran already had a sophisticated nuclear program. The existing technology was inherited by the new regime.
Iran has consistently insisted that its nuclear facilities and activities are intended only for the peaceful production of nuclear energy. In 2002, however, Iranian dissidents publicized the existence of secret nuclear facilities they contended were part of a secret Iranian program to produce nuclear weapons. The United Nations' International Atomic Energy Agency (IAEA) began inspections of Iran's facilities later that year. (more)
Weapon-grade (or "bomb-grade") uranium or plutonium is any alloy or oxide compound that contains enough of certain isotopes of these elements to serve as the active ingredient in a nuclear weapon. Some civilian weapon-grade materials are tracked by international organizations, especially the United Nations' International Atomic Energy Agency (IAEA) and the European Atomic Energy Community (EURATOM), to prevent diversion to bombs. The goal is to prevent nuclear proliferation, that is, the possession of nuclear weapons by unauthorized nations and/or groups.
The IAEA tracks weapon-grade materials (or, in the case of plutonium, dilute materials that could be refined to weapons grade) in non-military nuclear fuel cycles in states that are signatories to the Non-Proliferation Treaty (NPT) of 1968.
Those states that already had nuclear weapons at the time of the treaty's creation—the U.S., United Kingdom, France, Russia, and China—are not subject to IAEA safeguards. Only four states—Cuba, India, Israel, and Pakistan—have not signed the NPT and are not part of any international safeguard system. Of these four, India and Pakistan have nuclear weapons and Israel is widely assumed to have nuclear weapons.
EURATOM safeguards civil plutonium and uranium in the European countries, including materials not covered by mandatory IAEA safeguards under the NPT (i.e., those in the UK and France). The IAEA and EURATOM cooperatively safeguard European materials to avoid redundancy.
Military nuclear materials are tracked only by the governments that own them; the tracking techniques employed internally by nuclear-weapons states vary from nation to nation and are always partly or wholly secret. (more)
In the aftermath of the September 11, 2001, terrorist attacks on the United States and the subsequent war against the Taliban and al-Qaeda in Afghanistan, United States leaders turned their attention toward Iraq, specifically its dictatorial leader, Saddam Hussein. Although Iraq was not as powerful a military threat as it was during the Persian Gulf War of 1990-1991, U.S. officials asserted that Iraq's proven development and use of weapons of mass destruction made Iraq a potential source of those weapons for terrorists who could then use them against U.S. or other Western targets.
During the 1980s Iran-Iraq War, Hussein ordered the use of chemical weapons against Iranian forces, and additionally used chemical weapons against civilians in rebellious areas of Iraq.
After Iraqi forces were expelled by the U.S.-led Western coalition during the Persian Gulf War, as part of the agreements that prevented the occupation of Iraq and allowed Hussein to remain in power, Hussein agreed to destroy all weapons of mass destruction and forsake the future development of nuclear, biological, and chemical weapons.
Over a period of twelve years, 17 specific United Nations Security Council resolutions, weapons inspection programs, and economic sanctions against Iraq failed to secure Hussein's full compliance with U.N. resolutions and assure the international community that Iraq had indeed disposed of weapons of mass destruction and abandoned programs to develop new weapons of mass destruction.
Hussein, in an effort to bolster his strong-man image that helped maintain his power in Iraq and influence in the region, played cat and mouse with international inspection teams. Fearing it would make him weak and vulnerable, Hussein refused to give up the appearance that his regime still might control weapons of mass destruction.
Despite U.N. resolutions, in 1998 Iraq expelled U.N. weapons inspectors and no meaningful inspections took place between 1998 and 2002.
Hussein's obstruction, pretense, and posturing resulted in highly polarized Western intelligence assessments of his warfare capacity and willingness to use WMDs. Especially in light of the barrage of bellicose threats issued by Hussein and his official spokesmen, intelligence agencies in the West scrambled to make an accurate assessment of Hussein's warfare -- and specifically WMD -- capacity.
Iraqi defiance of U.N. resolutions continued throughout the 1990s. Confounding the threats from Hussein was the fact that while older weapons were subsequently discovered and destroyed by U.N. inspection teams in Iraq, there was no direct evidence -- and only weak or conflicting evidence -- that Hussein's threats were backed by the acquisition or development of either replacement or new weapons of mass destruction.
Hussein played a dangerous bluff -- a bet on the lives of the Iraqi people -- that was ultimately called when the United States invaded and deposed him from power.
As of July 2003, no new weapons of mass destruction -- or significant infrastructure to indicate programs to build same -- had been found by U.S. or other Coalition forces in control of Iraq. By the end of May 2003, both British and American intelligence agencies began to downplay the possibility of finding large stores of such weapons. Although both U.S. and British officials continued to assert prior claims about the extent of Iraq's arsenal, questions arose as to whether the weapons had been removed, destroyed, or whether intelligence reports regarding the weapons had been mishandled, exaggerated, or falsified.
Although some seized upon the growing controversy regarding the lack of WMD finds as a partisan political issue, the record was clear that all Western intelligence agencies, including those of war dissenter nations France and Germany, agreed before the war that Hussein's regime possessed weapons of mass destruction.
At the end of July 2003, several inquiries were underway into the formulation and use by Coalition governments of intelligence related to Iraqi possession and development of weapons of mass destruction.
Author's note: This article contains two Addendum sections:
Addendum I: U.K. Prime Minister Tony Blair's "Iraq War speech" to Parliament that resulted in the government voting on March 18, 2003, to use "all means necessary to ensure the disarmament of Iraq's weapons of mass destruction."
ADDENDUM II: A brief overview of the immediate aftermath of the invasion of Iraq. (Author's note subsequently edited in August 2003 to be a separate article in EEIS.) (continued... read more)
Lerner KL. Weapons of Mass Destruction: Detection Methods. Government Information Quarterly. Elsevier, 2005. (online) Draft Copy. Originally: Lerner, K. Lee and BW Lerner. Weapons of Mass Destruction: Detection Methods. Encyclopedia of Espionage, Intelligence, and Security, Thomson Gale. 2003.
Weapons of mass destruction (WMD), that is, nuclear, chemical, and biological weapons, are commonly detected by monitoring an array of activities common to their development and testing. Lack of access to weapons production facilities, in the context of both state-level and terrorist activities, poses special testing challenges and creates the need for sophisticated monitoring protocols as well as the capacity to detect weapons without access to suspect sites, while in transit, and/or while in component pre-assembly phases of development. WMD detection techniques and devices span an array of technologies. In the early 2000s, detection technology included devices like the Handheld Advanced Nucleic Acid Analyzer (HANAA) and techniques ranging from standard forensic laboratory testing to Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry (MALDI-MS). The Autonomous Pathogen Detection System (APDS) and other deployable devices allowed rapid identification of biological agents, while portable devices were available that could identify chemicals in a vapor within minutes under challenging conditions. The genetic detection of biological agents is increasingly sensitive. Gene probe sensors can detect and identify bacteria based upon the presence of a stretch of genetic material that is unique to the microorganism. (more)
The KGB (Komitet Gosudarstvennoi Bezopasnosti, or Committee of State Security) was the preeminent Soviet intelligence agency and the Soviet equivalent of the American CIA. During the later Soviet period, the KGB served as the organization primarily responsible for intelligence and counterintelligence matters. Although the MVD was tasked with internal security, the KGB's role in political security and counterintelligence was so broad that its operations often touched on internal security matters. Even Soviet border guards were eventually placed under KGB supervision.
The head of the KGB enjoyed an important position in the totalitarian regime hierarchy. In 1967, Yuri Andropov, then head of the KGB and later leader of the Soviet Union, described the role of the KGB and other state security bodies as engaged in "a bitter and stubborn battle on all fronts, economic, political, and ideological."
The KGB and Western intelligence services played a continual deadly game of "cat and mouse" (both as pursuers and the pursued) throughout the Cold War. KGB officers and operatives played an important role in the attempt to overthrow the government of the first (and last) president of the USSR, Mikhail Gorbachev; the KGB was essentially abolished or devolved into successor agencies after the failure of the anti-Gorbachev putsch and the collapse of the USSR in 1991.
The KGB's culture continues to heavily influence Russian politics and policy. After the fall of the Soviet Union, former KGB officer Vladimir Vladimirovich Putin became president of the successor Russian Federation. Moreover, the following Russian Federation agencies were created from within the KGB: the Federal Security Service (FSB); the Federal Agency of Government Intercommunication, which is responsible for communications between top state officials; the Guard Service, which guards top state officials; and the Outer Intelligence Service, which collects and processes all data coming from abroad.
Some of the bizarre disinformation created by the KGB, and regurgitated by anti-U.S. critics, still survives as urban myth or folk legend. For example, documents in the KGB archives now provide evidence that KGB operatives mounted a disinformation campaign laden with pseudoscientific "proofs" and language that was designed to convince third-world nations that the United States had deliberately created the AIDS virus in the laboratory to use as a biological weapon. (more)
[Author's note: In 2016, U.S. intelligence agencies, including the CIA, NSA, and DIA, united in approving an Intelligence Community Assessment (ICA) concluding that "Russia, like its Soviet predecessor, has a history of conducting covert influence campaigns focused on U.S. presidential elections," including the 2016 U.S. presidential campaign. The ICA conclusion, based on evidence known by 29 December 2016 and offered with generally high confidence, was that Russian hacking, along with propaganda and disinformation efforts (including the creation and dissemination of fake news), was undertaken with the direct knowledge and approval of Russian President Vladimir Putin and other senior Russian officials. Read more at https://www.academia.edu/30817272/_The_Bear_Gets_a_BOGO_The_ICA_on_Russian_Meddling_in_the_2016_U.S._Presidential_Election]
The intelligence community of the United Kingdom is both older and more complicated than that of the United States. MI5, or the Security Service, and MI6, the Secret Intelligence Service, are the most well known components of the British intelligence structure, but these are just two parts of a vast intelligence apparatus. Communications intelligence is the responsibility of the Government Communications Headquarters (GCHQ), which works closely with the Communications Electronics Security Group, while a number of agencies manage military intelligence under the aegis of the Ministry of Defense. London's Metropolitan Police, or Scotland Yard, has its own Special Branch concerned with intelligence.
The "MI" by which the two principal British security services are known (MI5, or Security Service, and MI6, or Secret Intelligence Service) refers to their common origins in military intelligence. Both can trace their roots to the Secret Service Bureau, created in 1909 after a report by Parliament's Committee on Imperial Defense concluded that "an extensive system of German espionage exists in this country..." Working with the War Office, Admiralty, and various operatives and agents overseas, the bureau had both a Home Section and a Foreign Section--precursors, respectively, of MI5 and MI6.
Command and control operates through no fewer than four entities: the Central Intelligence Machinery, the Ministerial Committee on the Intelligence Services, the Permanent Secretaries' Committee on the Intelligence Services, and the Joint Intelligence Committee. (more)
Lerner KL. Applications of Number Theory in Cryptography. Government Information Quarterly. Elsevier, 2005. Draft Copy. Originally: Lerner, K. Lee and BW Lerner. Applications of Number Theory in Cryptography. Encyclopedia of Espionage, Intelligence, and Security, Thomson Gale. 2003.
Cryptography is a division of applied mathematics concerned with developing schemes and formulas to enhance the privacy of communications through the use of codes. Cryptography allows its users, whether governments, military, businesses, or individuals, to maintain privacy and confidentiality in their communications. The goal of every cryptographic scheme is to be crack-proof (i.e., able to be decoded and understood only by authorized recipients). Cryptography is also a means to ensure the integrity and preservation of data from tampering. Modern cryptographic systems rely on functions associated with advanced mathematics, including a specialized branch of mathematics termed number theory that explores the properties of numbers and the relationships between numbers. (more)
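As a concrete illustration of the number theory involved, the sketch below (my own, not from the article) walks through an RSA-style public-key scheme with deliberately tiny primes. The security of real systems rests on the difficulty of factoring the public modulus when the primes are hundreds of digits long; these toy values are for arithmetic transparency only.

```python
# Toy RSA-style illustration (insecure by design: tiny primes).
# Requires Python 3.8+ for pow(e, -1, phi) modular inverse.

from math import gcd

p, q = 61, 53                  # two secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # Euler's totient of n (3120)

e = 17                         # public exponent, must be coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

message = 65                             # any integer smaller than n
ciphertext = pow(message, e, n)          # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)        # decrypt: c^d mod n
assert recovered == message
print(ciphertext, recovered)             # 2790 65
```

The scheme works because of Euler's theorem: raising to the e-th and then the d-th power modulo n returns the original message whenever e and d are inverses modulo the totient of n.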
Continental drift, in the context of the modern theory of plate tectonics, is explained by the movement of lithospheric plates over the asthenosphere (the ductile, partially molten upper portion of the Earth's mantle). Precisely used, the term "continental drift" is actually rooted in antiquated concepts regarding the structure of the Earth. Today, geophysicists and geologists explain the movement or drift of the continents within the context of plate tectonic theory. The visible continents, a part of the lithospheric plates upon which they ride, shift slowly over time as a result of the forces driving plate tectonics. Moreover, plate tectonic theory is so robust in its ability to explain and predict geological processes that it is equivalent in many regards to the fundamental and unifying principles of evolution in biology, and nucleosynthesis in physics and chemistry. (more)
The disastrous effects of Lysenkoism -- a term used to describe the impact of Trofim Denisovich Lysenko's influence upon science and agriculture in the Soviet Union during the first half of the 20th century -- darkly illustrate the perils of intruding politics and ideology into the affairs of science.
Despite the near-medieval conditions in which the majority of the population of Czarist Russia lived, the achievements of pre-Revolutionary Russia in science rivaled those of Europe and America. In fact, achievement in science had been one of the few avenues to the aristocracy open to the non-nobility. The Revolution sought to maintain this tradition and win over the leaders of Russian science. From the outset, new communist leaders Vladimir Lenin and Leon Trotsky fought -- even in the midst of civil war and famine -- to make available considerable resources for scientific research.
In the political storms that ravaged the Soviet Union following the death of Lenin, the expulsion of Trotsky, and the rise of Soviet dictator Joseph Stalin, Lysenko's pseudoscientific ideas that all organisms, given the proper conditions, have the capacity to be or do anything had certain attractive parallels with the social philosophies of Karl Marx (and the 20th century French philosopher Henri Bergson) that promoted the idea that man was largely a product of his own will.
Beyond the absurdity and tragedy of rejecting nearly a century of advancements in genetics, Stalin and Lysenko combined to exacerbate famine and other deprivations facing Soviet citizens. Moreover, the culture of Lysenkoism brought another facet of repression and persecution: such was the fate of scientists who dared oppose Lysenko's Stalin-backed doctrines.
Enamored with the political acceptability and alleged scientific merit of Lysenko's ideas, Stalin took matters one step further by personally attacking modern genetics as counter-revolutionary or bourgeois science. While the rest of the scientific world could not conceive of understanding evolution without genetics, Stalin's Soviet Union used its political power to suppress rational scientific inquiry. Under Stalin, science was made to serve political ideology. (MORE)
Evolution is the process of biological change over time. Such changes, especially at the genetic level, are accomplished by a complex set of evolutionary mechanisms that act to increase or decrease genetic variation.
Evolutionary theory is the cornerstone of modern biology, and unites all the fields of biology under one theoretical umbrella to explain the changes in any given gene pool of a population over time. Biological evolutionary theory is compatible with nucleosynthesis (the evolution of the elements) and current cosmological theories in physics regarding the origin and evolution of the universe.
There is no currently accepted scientific data that is incompatible with the general postulates of evolutionary theory, and the mechanisms of evolution. Moreover, there is an abundance of observational and experimental data to support the theory and its subtle variations…
Evolution requires genetic variation, and these variations or changes (mutations) can be beneficial, neutral, or deleterious. In general, there are two major types of evolutionary mechanisms: those that act to increase genetic variation and those that operate to decrease it.
Mechanisms that increase genetic variation include mutation, recombination and gene flow….
...In contrast to mechanisms that operate to increase genetic variation, there are fewer mechanisms that operate to decrease genetic variation. Mechanisms that decrease genetic variation include genetic drift and natural selection. (more).
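A minimal simulation (my own illustration, not from the article) makes the variation-decreasing effect of genetic drift concrete: in a finite population reproducing at random, with no selection or mutation, allele frequencies wander until one allele fixes and the other is lost, eliminating variation at that locus.

```python
# Illustrative sketch: genetic drift in a Wright-Fisher population.
# Each generation, 2N gene copies are resampled at random from the
# current allele frequency; the random walk ends only at 0% or 100%.

import random

def drift_until_fixation(pop_size=100, freq=0.5, max_gens=100_000):
    """Return (generations elapsed, final frequency) when an allele fixes."""
    for gen in range(1, max_gens + 1):
        count = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = count / (2 * pop_size)   # binomial sampling of 2N gene copies
        if freq in (0.0, 1.0):          # variation lost: fixed or extinct
            return gen, freq
    return max_gens, freq

random.seed(1)
for _ in range(3):
    print(drift_until_fixation())      # smaller populations fix faster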
Lerner KL. Quasars: Beacons in the Cosmic Night. DRAFT COPY subsequently published in Science and Its Times: Understanding the Social Significance of Scientific Discovery. Thomson Gale, 2001. This draft updated 2010.
The term quasar is used to describe quasi-stellar radio sources that are the most distant, energetic objects ever observed. Quasars are enigmatic. Despite their great distance from Earth, some are actually brighter than hundreds of galaxies combined, yet are physically smaller in size than our own solar system. Astronomers calculate that the first quasar identified, 3C273 (3rd Cambridge catalog, 273rd radio source) located in the constellation Virgo, is moving at the incredible speed of one-tenth the speed of light and, although dim to optical astronomers, is actually five trillion times as bright as the Sun. Many astronomers theorize that very distant quasars represent the earliest stages of galactic evolution. The observations and interpretation of quasars remain controversial and challenge many theories regarding the origin and age of the Universe. In particular, studies of the evolution and distribution of quasars boosted acceptance of Big Bang-based models of cosmology (i.e., theories concerning the creation of the Universe) over other scientific and philosophical arguments that relied on steady-state models of the Universe. (more)
Nucleosynthesis is the process of building nuclei of atoms heavier than hydrogen. The Big Bang produced hydrogen, helium, and some lithium, but all later creation of higher weight atoms has occurred in the hearts of stars via nucleosynthesis. All elements heavier than hydrogen of which Earth and humans are made were forged in stellar interiors by nucleosynthesis.
Until the second half of the nineteenth century, astronomy was principally concerned with accurately describing the movements of planets and stars. Developments in the electromagnetic theory of light in the late nineteenth century along with the articulation of quantum and relativity theories in the early twentieth century, however, gave astronomers the tools they needed to probe the inner workings of the sun and other stars. In the first two-thirds of the century, astronomers and physicists unraveled the life cycles of most types of stars and reconciled the predictions of physical theory with astronomical observation. Insights into the birth and death of stars led to the stunning conclusion that Earth and all life upon it, including human beings, are in a direct and physical sense a product of stellar evolution. In astronomy, the term "evolution" is used to name the orderly process by which individual stars change as they age: stellar evolution is unrelated to biological evolution. more
Quantum electrodynamics (QED) is a scientific theory that is also known as the quantum theory of light. QED describes the quantum properties (properties that are conserved and that occur in discrete amounts called quanta) and mechanics associated with the interaction of light (i.e., electromagnetic radiation) with matter. The practical value of QED rests upon its ability, as a set of equations, to allow calculations related to the absorption and emission of light by atoms and to allow scientists to make very accurate predictions regarding the result of the interactions between photons and charged atomic particles such as electrons. QED is a fundamentally important scientific theory because it accounts for all observed physical phenomena except those associated with aspects of relativity theory and radioactive decay. (more)
Quantum mechanics describes the relationships between energy and matter on the atomic and subatomic scale. At the beginning of the 20th century, German physicist Max Planck proposed that atoms absorb or emit electromagnetic radiation in bundles of energy termed quanta. This quantum concept seemed counter-intuitive to well-established Newtonian physics. Advancements associated with quantum mechanics (e.g., the uncertainty principle) also had profound implications with regard to the philosophical scientific arguments regarding the limitations of human knowledge…. Later in the 1920s, the concept of quantization and its application to physical phenomena was further advanced by more mathematically complex models based on the work of the French physicist Louis Victor de Broglie and Austrian physicist Erwin Schrödinger that depicted the particle and wave nature of electrons. De Broglie showed that the electron was not merely a particle but a wave form. This proposal led Schrödinger to publish his wave equation in 1926. Schrödinger's work described electrons as "standing waves" surrounding the nucleus, and his system of quantum mechanics is called wave mechanics. German physicist Max Born and English physicist P.A.M. Dirac made further advances in defining the subatomic particles (principally the electron) as a wave rather than as a particle and in reconciling portions of quantum theory with relativity theory. more
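A small worked example (mine, not from the article) of de Broglie's relation, wavelength = h/p, shows why wave behavior matters at atomic scales but is invisible for everyday objects:

```python
# Illustrative sketch: de Broglie's relation lambda = h / p, which
# assigns a wavelength to any particle with momentum p = m * v.

H = 6.626e-34           # Planck's constant, J*s
M_ELECTRON = 9.109e-31  # electron rest mass, kg

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength in meters, using non-relativistic momentum."""
    return H / (mass_kg * speed_m_s)

# An electron at 1% of light speed has a wavelength of ~0.24 nanometers,
# comparable to atomic spacing -- which is why electron beams diffract.
print(de_broglie_wavelength(M_ELECTRON, 0.01 * 3.0e8))
```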
Lerner KL. The Bohr Model of the Atom. Draft Copy. Part of a series of essays identifying and explaining theories essential to understanding modern scientific thought. Updated: 2010, 2012. Originally published 1999.
The Bohr model of atomic structure was developed by Danish physicist and Nobel laureate Niels Bohr (1885-1962). Published in 1913, Bohr's model improved the classical atomic models of physicists J. J. Thomson and Ernest Rutherford by incorporating quantum theory. While working on his doctoral dissertation at Copenhagen University, Bohr studied physicist Max Planck's quantum theory of radiation. After graduation, Bohr worked in England with Thomson and subsequently with Rutherford. During this time Bohr developed his model of atomic structure. (more)
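The quantitative heart of Bohr's model can be sketched directly (my illustration, not from the essay): hydrogen's allowed energies are E_n = -13.6 eV / n², and the photon energies of spectral lines follow from the differences between levels.

```python
# Illustrative sketch: quantized energy levels in Bohr's hydrogen atom.

RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, eV

def energy_level(n):
    """Energy of the nth Bohr orbit in hydrogen (eV): E_n = -13.6/n^2."""
    return -RYDBERG_EV / n**2

def transition_energy(n_upper, n_lower):
    """Photon energy (eV) emitted when the electron drops between orbits."""
    return energy_level(n_upper) - energy_level(n_lower)

# The n=3 -> n=2 drop yields ~1.89 eV, the red H-alpha line of the
# Balmer series -- an early success of Bohr's 1913 model.
print(transition_energy(3, 2))
```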
At the dawn of the twentieth century the classical laws of physics put forth by Sir Isaac Newton (1642-1727) in the late seventeenth century stood venerated and triumphant. The laws described with great accuracy the phenomena of everyday existence. A key assumption of Newtonian laws was a reliance upon an absolute frame of reference for natural phenomena. As a consequence of this assumption, scientists searched for an elusive "ether" through which light waves could pass. In one grand and sweeping "theory of special relativity," Albert Einstein was able to account for the seemingly conflicting and counter-intuitive predictions stemming from work in electromagnetic radiation, experimental determinations of the constancy of the speed of light, length contraction, time dilation, and mass enlargements. A decade later, Einstein once again revolutionized concepts of space and time with the publication of his "general theory of relativity." (more)
Advances in 19th century concepts of electromagnetism moved rapidly from experimental novelties to prominent and practical applications. At the start of the century gas and oil lamps burned in homes, but by the end of the century electric light bulbs illuminated an increasing number of electrified homes. By mid-century (1865) a telegraph cable connected the United States and England. Yet, within a few decades, even this magnificent technological achievement was eclipsed by advancements in electromagnetic theory that spurred the discovery and development of the radio waves that sparked a 20th century communications revolution. So rapid were the advances in electromagnetism that by the end of the 19th century high-energy electromagnetic radiation in the form of x-rays was used to diagnose injury. The mathematical unification of 19th century experimental work in electromagnetism profoundly shaped the relativity and quantum theories of 20th century physics.
In the late 18th and 19th centuries philosophical and religious ideas led many scientists to accept the argument that seemingly separate forces of nature (e.g., electricity, magnetism, light, etc.) shared a common and fundamental source. In addition, profound philosophical and scientific questions posed by Isaac Newton's Opticks (published in 1704) regarding the nature of light still dominated the 19th century intellectual landscape. Accordingly, in addition to a search for a common source of all natural phenomena, an elusive "ether" through which light could pass was thought necessary to explain the wave-like behavior of light.
The discovery of the relationship between electricity and magnetism at the end of the 18th century and the beginning of the 19th century was hampered by a rift in the descriptions and models of nature used by mathematicians and experimentalists. To a significant extent, advances in electromagnetic theory during the 19th century mirrored the unification of these approaches. The culmination of this merger came with Scottish physicist James Clerk Maxwell's (1831-1879) development of a set of equations that described electromagnetic phenomena more accurately than any previous non-mathematical model.
The development of Maxwell's equations embodied the mathematical genius of the German mathematician Carl Friedrich Gauss (1777-1855), the reasonings and laboratory work of French scientist André Marie Ampère (1775-1836), the observations of Danish scientist Hans Christian Oersted (1777-1851), and a wealth of experimental evidence provided by English physicist and chemist Michael Faraday (1791-1867). (more)
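For reference, the set of equations referred to above is today written compactly in differential form (modern SI notation, not Maxwell's original formulation):

```latex
% Maxwell's equations in modern differential (SI) form:
\begin{align}
  \nabla \cdot \mathbf{E}  &= \frac{\rho}{\varepsilon_0}
      && \text{(Gauss's law)} \\
  \nabla \cdot \mathbf{B}  &= 0
      && \text{(no magnetic monopoles)} \\
  \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t}
      && \text{(Faraday's law of induction)} \\
  \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}
      + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
      && \text{(Ampère-Maxwell law)}
\end{align}
```

Combined in free space, these equations predict waves traveling at the speed of light, the theoretical basis for the radio waves mentioned above.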
Mathematics is the study of the relationships among, and operations performed on both tangible and abstract quantities. In its ancient origins, mathematics was concerned with magnitudes, geometries and other practical and measurable phenomena. During the 19th century, mathematics, and an increasing number of mathematicians, became enticed with relationships based on pure reason and upon the abstract ideas and deductions properly drawn from those relationships. In addition to advancing mathematical methods related to applications useful to science, engineering or economics (hence the term applied mathematics), the rise of the formalization of symbolic logic and abstract reasoning during the 19th century allowed mathematicians to develop the definitions, complex relations, and theorems of pure mathematics. Within both pure and applied mathematics, 19th century mathematicians took on increasingly specialized roles corresponding to the rapid compartmentalization and specialization of mathematics in general. more
At the dawn of the 18th century, science and Western theology were based on the concept of an unchanging, immutable God ruling a static universe. For theologians, Newtonian physics and the rise of mechanistic explanations of the natural world held forth the promise of a deeper understanding of the inner workings of the Cosmos and, accordingly, of the nature of God. During the course of the 18th century, however, there was a major conceptual rift between science and theology that was reflected in a growing scientific disregard for understanding based upon divine revelation and growing acceptance of an understanding of Nature based upon natural theology. By the end of the 18th century, experimentation had replaced scripture as the determinant authority in science. Enlightenment thinking, spurred by advances in the physical sciences, sent sweeping changes across the political and social landscape.
Throughout the 18th century English physicist Sir Isaac Newton's (1642-1727) Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) first published in 1687, dominated the intellectual landscape. Moreover, Newton actively wrote and modified his observations during the first quarter of the 18th century. In addition to the elaboration of physics and calculus, however, Newton also concerned himself with the relationship between science and theology. Without question, Newton was the culminating figure in the Scientific Revolution of the 16th and 17th centuries and the leading articulator of the mechanistic vision of the physical world initially put forth by French mathematician Rene Descartes (1596-1650). Within his own lifetime Newton saw the rise and triumph of Newtonian physics, and the widespread acceptance of a mechanistic concept regarding the workings of the universe among philosophers and scientists.
Newtonian laws -- and a well-functioning clockwork universe -- depended upon the deterministic effects of gravity, electricity, and magnetism. In such a universe matter was passive, moved about and controlled by "active principles". For Newton, who rejected the mainstream Trinitarian concepts of Christianity, the order and beauty found in the universe was God. Newton argued that God set the Cosmos in motion, and to account for small differences between predicted and observed results, God actively intervened from time to time to reset or "restore" the mechanism.
Theologians and scientists were deeply concerned about the moral implications of scientific theories that explained everything as the inevitable consequence of mechanical principles. Accordingly, much effort was expended to reconcile Newtonian physics -- and a clockwork universe -- with conventional theology to provide an on-going and active role for God. Objective evidence regarding the universe was often sifted through theological filters that evaluated whether a set of facts or theories tended to prove or disprove the existence of God. Ironically, it was this interplay between religion and science that led many to subsequently insist on a strong scientific objectivity that largely discounted religious subjectivity. more
Using equations based on Newton's laws, 18th century mathematicians were able to develop the symbolism and formulae needed to advance the study of dynamics (the study of motion). These advancements allowed astronomers and mathematicians to more accurately and precisely calculate and describe the real and apparent motions of astronomical bodies (celestial mechanics) as well as to propose the dynamics related to the formation of the solar system. The refined analysis of celestial mechanics carried profound theological and philosophical ramifications in the Age of Enlightenment. Mathematicians and scientists, particularly those associated with French schools of mathematics, argued that if the small perturbations and anomalies in celestial motions could be completely explained by an improved understanding of celestial mechanics, i.e., if the solar system was really stable within defined limits, then such a finding mooted the concept of a God required to adjust the celestial mechanism. more
The determination of a precise value for the gravitational constant (G) proved a frustrating, but fruitful, exercise for scientists since the constant was first described by English physicist Sir Isaac Newton (1642-1727) in his influential 1687 work, Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). In many ways as enigmatic as mathematicians' search for a proof to Fermat's last theorem (proved only in the last decade of the twentieth century), the determination of an exact value of the gravitational constant has eluded physicists for more than 300 years. The quest for "G" provides a continuing challenge to the experimental ingenuity of physicists and often spurs new generations of physicists to recapture the inventiveness and delicacy of measurement first embodied in the elegant experiments conducted by English physicist Henry Cavendish (1731-1810).
In Principia Newton put forth a grand synthesis of theory regarding the physical universe. According to Newtonian theory, the universe was bound together by the mutual gravitational attraction of its constituent particles. With regard to gravity, Newton formulated that the gravitational attraction between two bodies was directly proportional to the product of the masses, and inversely proportional to the square of the distance between the masses. Accordingly, if one doubled a mass, one would double the gravitational attraction; if one doubled the distance between masses, one would reduce the gravitational attraction to one-fourth of its former value. What was missing from Newton's formulation, however, was a value for a gravitational constant that would accurately translate these fundamental qualitative relationships into experimentally verifiable numbers. more
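The proportionalities described above follow directly from the modern form of the law, F = G·m₁·m₂/r². A short sketch (mine, not from the text; it supplies the modern measured value of G that Newton lacked) verifies the doubling examples:

```python
# Illustrative sketch: Newton's law of universal gravitation,
# F = G * m1 * m2 / r**2, checking the proportionalities in the text.

G = 6.674e-11  # gravitational constant, N*m^2/kg^2 (the hard-won "G")

def gravity(m1_kg, m2_kg, r_m):
    """Attractive force in newtons between two point masses."""
    return G * m1_kg * m2_kg / r_m**2

f = gravity(1000, 1000, 10)
assert abs(gravity(2000, 1000, 10) / f - 2.0) < 1e-9   # double a mass -> double the force
assert abs(gravity(1000, 1000, 20) / f - 0.25) < 1e-9  # double the distance -> one-fourth
print(f)
```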
In 1687 English physicist Sir Isaac Newton (1642-1727) published a law of universal gravitation in his influential work Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). In its simplest form, Newton's law of universal gravitation states that bodies with mass attract each other with a force that varies directly as the product of their masses and inversely as the square of the distance between them. This mathematically elegant law, however, offered a remarkably reasoned and profound insight into the mechanics of the natural world because it revealed a cosmos bound together by the mutual gravitational attraction of its constituent particles. Moreover, along with Newton's laws of motion, the law of universal gravitation became the guiding model for the future development of physical law.
Newton's law of universal gravitation was derived from German mathematician and astronomer Johannes Kepler's (1571-1630) laws of planetary motion, the concept of "action-at-a-distance," and Newton's own laws of motion. Building on Galileo's observations of falling bodies, Newton asserted that gravity is a universal property of all matter. Although the force of gravity can become infinitesimally small at increasing distances between bodies, all bodies of mass exert gravitational force on each other. Newton extrapolated that the force of gravity (later characterized by the gravitational field) extended to infinity and, in so doing, bound the universe together. more
Many of the most influential advances in mathematics during the 18th century involved the elaboration of the calculus, a branch of mathematical analysis which describes properties of functions (curves) associated with a limit process. Although the evolution of the techniques included in the calculus spanned the history of mathematics, calculus was formally developed during the last decades of the 17th century by English mathematician and physicist Sir Isaac Newton (1642-1727) and, independently, by German mathematician Gottfried Wilhelm von Leibniz (1646-1716). Although the logical underpinnings of calculus were hotly debated, the techniques of calculus were immediately applied to a variety of problems in physics, astronomy, and engineering. By the end of the 18th century, calculus had proved a powerful tool that allowed mathematicians and scientists to construct accurate mathematical models of physical phenomena ranging from orbital mechanics to particle dynamics.
Although it is clear that Newton made his discoveries regarding calculus years before Leibniz, most historians of mathematics assert that Leibniz independently developed the techniques, symbolism, and nomenclature reflected in his preemptive publications of the calculus in 1684 and 1686. The controversy regarding credit for the origin of calculus quickly became more than a simple dispute between mathematicians. Supporters of Newton and Leibniz often argued along bitter and blatantly nationalistic lines, and the feud itself had a profound influence on the subsequent development of calculus and other branches of mathematical analysis in England and in Continental Europe. more
The Calculus describes a set of powerful analytical techniques, including differentiation and integration, that utilize the concept of a limit in the mathematical description of the properties of functions, especially curves. The formal development of the calculus in the latter half of the 17th century, primarily through the independent work of English physicist and mathematician Sir Isaac Newton (1642-1727) and German mathematician Gottfried Wilhelm Leibniz (1646-1716), was the crowning mathematical achievement of the Scientific Revolution. The subsequent advancement of the calculus profoundly influenced the course and scope of mathematical and scientific inquiry.
Important mathematical developments that laid the foundation for the calculus of Newton and Leibniz can be traced back to mathematical techniques first advanced in Ancient Greece and Rome. In addition to existing methods to determine the tangent to a circle, the Greek mathematician and inventor Archimedes (c. 287-212 B.C.) developed a technique to determine the tangent to a spiral, an important component of his water screw.
The majority of other ancient fundamental advances ultimately related to the calculus were concerned with techniques that allowed the determination of areas under curves (principally the area and volume of curved shapes). In addition to their mathematical utility, these advancements both reflected and challenged prevailing philosophical notions regarding the concept of infinitely divisible time and space. Two centuries before the work of Archimedes, Greek philosopher and mathematician Zeno of Elea (c.495-c.430 B.C.) constructed a set of paradoxes that were fundamentally important in the development of mathematics, logic and scientific thought. Zeno's paradoxes reflected the idea that space and time could be infinitely subdivided into smaller and smaller portions and these paradoxes remained mathematically unsolvable in practical terms until the concepts of continuity and limits were introduced. more
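A short numerical sketch (my illustration, not from the text) shows the limit concept that ultimately dissolved Zeno's dichotomy paradox: the infinitely many sub-journeys 1/2 + 1/4 + 1/8 + ... sum to a finite total distance.

```python
# Illustrative sketch: partial sums of the geometric series behind
# Zeno's dichotomy paradox converge to a finite limit of 1.

def partial_sum(n_terms):
    """Sum of the first n terms of 1/2 + 1/4 + 1/8 + ..."""
    return sum(0.5 ** k for k in range(1, n_terms + 1))

for n in (1, 2, 4, 8, 16, 32, 64):
    print(n, partial_sum(n))   # approaches 1 but never exceeds it
```

The paradox loses its force once "infinitely many steps" is replaced by the rigorous notion of a limit: the partial sums form an increasing sequence bounded above by 1.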
During the Renaissance in Western Europe, a rediscovery and advancement of classical mathematics laid the foundation for the empiricism of the Scientific Revolution. One of the pillars of this intellectual reawakening in mathematics was the increased use of mathematical symbols that enabled scholars to more easily and accurately communicate with each other across geographical, national, and linguistic borders. more
During the European Dark Ages there was no coherent system of scientific or philosophical thought. Throughout Western Civilization, theological doctrine and dogma replaced the rational and logical inquiry of the ancient Greek scholars. During the 13th and 14th centuries, however, the rediscovery of Aristotle's (384-322 B.C.) philosophy, as preserved by Arabic scholars, renewed interest in the development of logic and scientific inquiry. The critical writings of St. Thomas Aquinas (1227-1274), Roger Bacon (1214-1294) and William Ockham (also spelled Occham, 1285-1347/49) regarding Aristotelian ideas ultimately laid the intellectual foundations for the 17th century Scientific Revolution by de-emphasizing the primacy of understanding based upon scriptural revelation or authority.
Although the origins of astronomy and cosmology predate the human written record, by the height of ancient Greek civilization the cause of natural phenomena was attributed to the collective whim of a pantheon of gods. Although not monotheistic in the sense of ancient Judaism, out of this polytheism (a theology that includes multiple gods) arose the idea that there was an infinite being: Plato's (c. 428-c. 347 B.C.) "The One" and Aristotle's (384-322 B.C.) "prime mover." Aristotle's influence over astronomy and cosmology was to extend for nearly two millennia and, as a set of philosophical and scientific explanations of the universe, Aristotle's assertions ultimately became integral to the tightly interwoven fabric of philosophy, science, and theology that came to dominate the late Medieval intellectual landscape. more
The measurement of distance was of increasing importance in ancient and classical civilizations whose territorial and cultural horizons were consistently expanded by the march of armies and the alluring promise of wealth and trade. Using elegant mathematical reasoning and limited empirical measurement, in approximately 240 B.C., Eratosthenes of Cyrene (now located in Libya) made an accurate measurement of the circumference of the Earth. In addition to providing evidence of scientific empiricism in the ancient world, this and other contributions to geodesy (the study of the shape and size of the Earth) spurred subsequent exploration and expansion. Ironically, centuries later, the Greek mathematician and astronomer Claudius Ptolemy's erroneous rejection of Eratosthenes' calculations, along with other mathematical errors, resulted in the estimation of a smaller Earth that, however erroneous, made extended seagoing journeys and exploration seem more tactically achievable.
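The arithmetic of Eratosthenes' method, as traditionally reconstructed (the figures below are the traditionally reported ones, not taken from this text), is simple proportion: at the summer solstice the sun cast no shadow at Syene but stood about 7.2 degrees from vertical at Alexandria, roughly 5,000 stadia to the north, so the whole circle follows.

```python
# Illustrative sketch: Eratosthenes' proportion. If 7.2 degrees of arc
# (1/50 of a full circle) corresponds to 5,000 stadia, the full
# circumference is 50 times that distance.

shadow_angle_deg = 7.2        # traditionally reported angle at Alexandria
arc_distance_stadia = 5000    # traditionally reported Alexandria-Syene distance

circumference = (360.0 / shadow_angle_deg) * arc_distance_stadia
print(circumference)          # 250,000 stadia -- remarkably close to modern values
```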
Throughout the course of human history, science and society have advanced in a dynamic and mutual embrace. Regardless of scholarly contentions regarding an exact definition of science, the history of science in the ancient world is a record of the first tentative steps toward a systemization of knowledge concerning the natural world. During the period 2000 B.C. to 699 A.D., as society became increasingly centered around stabilizing agricultural communities and cities of trade, the development of science nurtured necessary practical technological innovations and, at the same time, spurred the first rational explanations of the vastness and complexity of the cosmos.
The archaeological record provides abundant evidence that our most ancient ancestors' struggle for daily survival drove an instinctive need to fashion tools from which they could gain physical advantage beyond the strength of the relatively frail human body. Along with an innate curiosity into the workings and meanings of the celestial panorama that painted the night skies, this visceral quest for survival made more valuable the skills of systematic observation, technological innovation, and a practical understanding of one's surroundings. From these fundamental skills evolved the necessary intellectual tools to do scientific inquiry.
Although the wandering civilizations that predated the earliest settlements were certainly not scientifically or mathematically sophisticated by contemporary standards, their efforts ultimately produced a substantial base of knowledge that was fashioned into the science and philosophy practiced in ancient Babylonia, Egypt, China, and India.
While much of the detail regarding ancient life remains enigmatic, the long-established pattern of human history reveals a recurring principle wherein ideas evolve from earlier ideas. In the ancient world, the culmination of the intellectual advances of early man ultimately coalesced in the glorious civilizations of classical Greece and Rome.
In these civilizations, the paths of development for science and society were clearly fused. Plato's attribution to Socrates of the saying, "The unexamined life is not worth living," expresses an early scientific philosophy that calls thinking people to examine, scrutinize, test, and make inquires of the world. This quest for knowledge and for reasoned rational thought provided a tangible base for the development of modern science and society. (more)
The first records of systematic astronomical or astrological observation and interpretation lie in the scattered remains of ancient Egyptian and Babylonian civilizations. The earliest evidence of the development of astronomy and astrology -- in the modern world distinctive representatives of science and pseudo-science -- establishes that they share a common origin grounded in mankind's need and quest to understand the movements of the celestial sphere. Moreover, evidence suggests an early and strong desire to relate earthly everyday existence to the stars and to develop a cosmology (an understanding of the origin, structure, and evolution of the universe) that intimately bound human society to a coherent and knowable universe. (more)
The content of the Moscow and Rhind Papyruses shed considerable light on the nature and extent of ancient Egyptian mathematics. Both papyri provide vivid documentary evidence of geometrical reasoning in the Egyptian Twelfth Dynasty and insight into the practical applications of mathematics prior to the more formal development of mathematical theory in ancient Greece. A careful analysis of the mathematical presentation and content of the two documents, however, limits the claims of Egyptian influence upon the later rise of theory in Greek mathematics.
The physical archaeological record leaves little doubt as to the use and influence of mathematics on ancient Egyptian culture. Temples and other cultural artifacts provide extensive evidence of mathematical reasoning that predates the existing documentary record. The arrangement of pillars and stones in temple monuments, such as those found at Karnak, is a lasting tribute to the careful calculations of ancient priests and astronomers in their attempt to provide accurate calendars based upon the movements of the Sun.
Whatever the initial need for a written record -- whether its first use was as a more portable means of recording and deciphering astronomical data, or whether the general rise of civilization provided a swelling and multifaceted need to record the methods of mathematical reasoning -- the earliest existing documentary records, embodied in the Moscow papyrus and the Rhind papyrus, disclose that the ancient Egyptians utilized considerable practical skill in the use and application of mathematics. (more)
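As one concrete example of that practical skill (my sketch; the problem itself is the well-known Problem 14 of the Moscow papyrus), the scribe correctly computes the volume of a truncated pyramid, a result equivalent to the modern frustum formula:

```python
# Illustrative sketch: the Moscow papyrus (problem 14) gives a correct
# procedure for the volume of a truncated pyramid (frustum) with square
# base side a, square top side b, and height h:
#     V = (h / 3) * (a**2 + a*b + b**2)

def frustum_volume(a, b, h):
    """Volume of a square frustum, as the Egyptian scribe computed it."""
    return (h / 3) * (a * a + a * b + b * b)

# The papyrus works the case a = 4, b = 2, h = 6 and obtains 56.
print(frustum_volume(4, 2, 6))  # 56.0
```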
Compared to their male counterparts, disproportionately fewer female students go on to take advanced or elective science at the secondary school level. Consequently, disproportionately fewer women pursue university degrees or careers in mathematics, science and engineering. Although more female students are taking elective secondary school math and science courses, these disproportions remain significant. Based on a comprehensive survey of research and field reports, this paper outlines and evaluates the theory and methodology behind attempts -- particularly those emphasizing reading and writing skills -- to meet the specific needs of female students and asserts that addressing gender inequity is a “win-win” for teachers who desire to enhance education for all students. (more)
Compared to their male counterparts, disproportionately fewer female students go on to take advanced or elective science at the secondary school level. Consequently, disproportionately fewer women pursue university degrees or careers in mathematics, science and engineering. Although more female students are taking elective secondary school math and science courses, these disproportions remain significant. Based on a comprehensive survey of research and field reports, this paper outlines and evaluates simple modifications to expository instruction -- particularly those emphasizing reading and writing skills -- that enable teachers to better meet the specific needs of female students. (more)