The first meeting of the Ames Area Amateur Astronomers (AAAA) took place 40 years ago today: Saturday, June 2, 1979. Central Junior High Earth Science teacher Jack Troeger and Welch Junior High Earth Science teacher Ron Bredeson held the first meeting in Jack’s classroom in the building that is now the Ames City Hall. This was a great start to a great astronomy club. Here’s to the next 40 years!
And, we just passed another important milestone in AAAA history. The grand opening of the original McFarland Park Observatory took place 35 years ago on Memorial Day, Monday, May 28, 1984. Back then, the pavement ended at the intersection of Dayton Rd. & County Road E-29, northeast of Ames, Iowa, and it was gravel the rest of the way.
The AAAA purchased a backyard-observatory silo-top dome from Glen Hankins in Nevada on Saturday, September 27, 1980, and then-Ranger (and later Story County Conservation Director) Steve Lekwa of McFarland Park was instrumental in allowing the AAAA to build its observatory at its present site at McFarland Park. The much-improved replacement roll-off-roof observatory, named after club members and benefactors Bertrand & Mary Adams, was completed in 2000. The only part of the original observatory structure that remains is the telescope pier!
The first five Iridium satellites were launched on May 5, 1997, and by 2002 there were 66 operational satellites, providing consistent global satellite phone coverage. These satellites have the interesting property that their antenna panels sometimes reflect sunlight down to the Earth’s surface, causing what came to be known as “Iridium flares”, delighting terrestrial observers—myself included. During an Iridium flare event, the satellite suddenly appears and gradually brightens and then dims to invisibility as it moves slowly across a section of sky over several seconds. Many of these events reach negative magnitude, with some getting as bright as magnitude -9.5.
The next generation of Iridium satellites began launching in 2017, but these satellites are constructed in such a way that they do not produce flares. Gradually, the original Iridium satellites are de-orbiting (or being de-orbited), so eventually there will be no more Iridium flares.
The Iridium flares haven’t been much of a nuisance to astronomers because the number of events per night for a given observer has been in the single digits.
But now we’re facing too much of a good thing. The first volley of 60 Starlink satellites was launched on May 24, with 12,000 expected to be in orbit by 2028. These satellites will provide broadband internet service to the entire planet. Though the Starlink satellites aren’t expected to produce spectacular flares like the first generation of the Iridium satellites, they do reflect sunlight as any satellite does, and the sheer number of them in relatively low Earth orbit is sure to cause a lot of headaches for astronomers and stargazers throughout the world.
I estimate that about 468 of the 12,000 satellites will be above your horizon at any given moment, but how many of them will be visible will depend on their altitude (both in terms of distance above the Earth’s surface and degrees above the horizon), and where they are relative to the Earth’s shadow cone (they have to be illuminated by sunlight to be seen).
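My estimate can be sanity-checked with simple spherical geometry. The sketch below assumes (my assumption, not stated above) a single uniformly populated shell at roughly Starlink’s 550 km operating altitude:

```python
R_EARTH_KM = 6371.0   # mean Earth radius
ALT_KM = 550.0        # assumed shell altitude (roughly Starlink's operating altitude)
N_SATS = 12_000

# A satellite is above your horizon when it lies inside the spherical cap
# of Earth-central half-angle theta, where cos(theta) = R / (R + h).
cos_theta = R_EARTH_KM / (R_EARTH_KM + ALT_KM)

# For a uniformly populated shell, the visible fraction is the cap's share
# of the full sphere: (1 - cos(theta)) / 2.
visible_fraction = (1.0 - cos_theta) / 2.0
n_visible = N_SATS * visible_fraction

print(f"{visible_fraction:.4f} of the shell -> about {n_visible:.0f} satellites above the horizon")
```

This comes out near 4% of the shell, or roughly 475 satellites, comparable to the estimate above; the real constellation occupies several altitudes and inclinations, so this is only a first-order check.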
And Starlink will not be the only swarm of global broadband internet satellites, as other companies and countries plan to fly their own satellite constellations.
This situation illustrates yet another reason why we need a binding set of international laws that apply to all nations and are enforced by a global authority. The sooner we have this the better, as our survival may depend upon it. How else can we effectively confront anthropogenic climate change and the precipitous decline in biodiversity?
As for these swarms of satellites, two things are needed now to minimize their impact on astronomy:

1. Build the satellites with minimally reflective materials and finishes.

2. Fly one internationally managed, robust constellation of global broadband internet satellites, and require competing companies and nations to use it, similar to the co-location often required for terrestrial communication towers.
I’d like to close this piece with a few questions. Will future “stargazers” go out to watch all the satellites and generally ignore the real stars and constellations because they are too “boring”? Will professional astronomers increasingly have to move their operations off the Earth’s surface to the far side of the Moon and beyond? Will we continue to devalue the natural world and immerse ourselves ever more deeply into our human-invented virtual environments?
Serbian engineer, mathematician, and scientist Milutin Milanković was born 140 years ago on this date in 1879, in the village of Dalj on the border between Croatia and Serbia—then part of the empire of Austria-Hungary. He died in 1958 in Beograd (Belgrade), then in Yugoslavia and today in Serbia, at the age of 79.
Milanković is perhaps most famous for developing a mathematical theory of climate based on changes in the Earth’s orbit and axial orientation. There are three basic parameters that change with time—now known as the Milankovitch cycles—that affect the amount of solar energy the Earth receives and how it is distributed upon the Earth.
I. Orbital eccentricity of the Earth changes with time
The eccentricity (e) tells you how elliptical an orbit is. An eccentricity of 0.000 means the orbit is perfectly circular. A typical comet’s orbit, on the other hand, is very elongated, with an eccentricity of 0.999 not at all uncommon. Right now, the Earth’s orbital eccentricity is 0.017, which means that it is 1.7% closer to the Sun at perihelion than its semimajor axis distance (a), and 1.7% further from the Sun at aphelion than its semimajor axis distance.
The greater the eccentricity the greater the variation in the amount of solar radiation the Earth receives throughout the year. Over a period of roughly 100,000 years, the Earth’s orbital eccentricity changes from close to circular (e = 0.000055) to about e = 0.0679 and back to circular again. At present, the Earth’s orbital eccentricity is 0.017 and decreasing. We now know the Earth’s orbital eccentricity changes with periods of 413,000, 95,000, and 125,000 years, making for a slightly more complicated variation than a simple sinusoid, as shown below.
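The effect of eccentricity on received sunlight can be quantified: with semimajor axis a, the perihelion and aphelion distances are a(1 − e) and a(1 + e), and solar flux falls off as 1/r². A short sketch comparing today’s eccentricity with the extremes quoted above (a cancels out of the ratio):

```python
def insolation_ratio(e: float) -> float:
    """Perihelion/aphelion solar-flux ratio for orbital eccentricity e.

    r_peri = a(1 - e) and r_aph = a(1 + e); flux scales as 1/r**2,
    so the ratio is ((1 + e) / (1 - e))**2 and the semimajor axis cancels.
    """
    return ((1.0 + e) / (1.0 - e)) ** 2

for label, e in [("near-circular", 0.000055), ("today", 0.017), ("maximum", 0.0679)]:
    print(f"e = {e:<9} ({label}): perihelion flux is {insolation_ratio(e):.3f}x aphelion flux")
```

At today’s e = 0.017 the Earth receives about 7% more sunlight at perihelion than at aphelion; at the maximum eccentricity that difference grows to roughly 31%.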
II. Tilt of the Earth’s axis changes with time
The tilt of the Earth’s polar axis with respect to the plane of the Earth’s orbit around the Sun—called the obliquity to the ecliptic—changes with time. The Earth’s current axial tilt is 23.4°, but it ranges between about 22.1° and 24.5° over a period of about 41,000 years. Greater axial tilt means winter and summer become more extreme. Presently, the axial tilt is decreasing, and will reach a minimum around 11,800 A.D.
III. Orientation of the Earth’s axis changes with time
The Earth’s axis precesses or “wobbles” with a period of around 26,000 years about the north and south ecliptic poles. This changes what latitude of the Earth is most directly facing the Sun when the Earth is closest to the Sun each year. Currently, the southern hemisphere has summer when the Earth is at perihelion.
Milanković used these three cycles to predict climate change. His ideas were largely ignored until 1976, when a paper by James Hays, John Imbrie, and Nicholas Shackleton in the journal Science showed that Milanković’s mathematical model of climate change was able to predict major changes in climate that have occurred during the past 450,000 years.
These Milankovitch cycles are important to our understanding of climate change over much longer periods than the climate change currently being induced by human activity. Note the extremely rapid increase of greenhouse gas concentrations (CO2, CH4, and N2O) in our atmosphere over the past few decades in the graphs below.
The world population has increased by 93% since 1975: it was about 4 billion then, and by 2020 it is expected to reach 7.8 billion.
I miss living in a college town. It is energizing to interact on a daily basis with well-educated, intellectually curious, and cosmopolitan people who are passionate about their work. I lived in Ames, Iowa—where Iowa State University is located—for nearly 30 years, yet I feel more at home in Stevens Point, a smaller community, than I do in present-day Ames. I think Stevens Point is the nicest community I have visited since leaving Ames in 2005. I would definitely be willing to live there someday. UW-Stevens Point even has a physics & astronomy department, an observatory, and a planetarium. Perhaps I could help out in retirement.
Some towns have a lot going for them even without a college or university—around here, Mineral Point and Spring Green come to mind. Some towns are at somewhat of a disadvantage because they have a name that is not particularly attractive. For example, Dodgeville, where I currently live and work, has a moniker that isn’t all that inviting. But there is no place so nice to live as a college town—for people like me, at least.
My primary civic interests are in gradually developing a well-planned network of paved, off-road bike paths; walking trails through natural areas; a center for continuing education; a community astronomical observatory; and a comprehensive and well-enforced outdoor lighting ordinance to restore, preserve, and protect our nighttime environment and view of the night sky. Living in a community like Dodgeville, I don’t get the sense that there is enough interest or political will to make any of these things happen. I can’t do it alone.
9.2 Issue H: The possible existence of multiverses

If there is a large enough ensemble of numerous universes with varying properties, it may be claimed that it becomes virtually certain that some of them will just happen to get things right, so that life can exist; and this can help explain the fine-tuned nature of many parameters whose values are otherwise unconstrained by physics. As discussed in the previous section, there are a number of ways in which, theoretically, multiverses could be realized. They provide a way of applying probability to the universe (because they deny the uniqueness of the universe). However, there are a number of problems with this concept. Besides, this proposal is observationally and experimentally untestable; thus its scientific status is debatable.
My 100-year-old uncle—a lifelong teacher and voracious reader who is still intellectually active—recently sent me Max Tegmark’s book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, published by Vintage Books in 2014. I could not have had a more engaging introduction to the concept of the Multiverse. Tegmark presents four levels of multiverses that might exist. They are:
Level I Multiverse: Distant regions of space with the same laws of physics that are currently but not necessarily forever unobservable.
Level II Multiverse: Distant regions of space that may have different laws of physics and are forever unobservable.
Level III Multiverse: Quantum events at any location in space and in time cause reality to split and diverge along parallel storylines.
Level IV Multiverse: Space, time, and the Level I, II, and III multiverses all exist within mathematical structures that describe all physical existence at the most fundamental level.
There seems little question that our universe is very much larger than the part that we can observe. The vast majority of our universe is so far away that light has not yet had time to reach us from those regions. Whether we choose to call the totality of these regions the universe or a Level I multiverse is a matter of semantics.
Is our universe or the Level I multiverse infinite? Most likely not. That infinity is a useful mathematical construct is indisputable. That infinite space or infinite time exists is doubtful. Both Ellis and Tegmark agree on this and present cogent arguments as to why infinity cannot be associated with physical reality. Very, very large, or very, very small, yes, but not infinitely large or infinitely small.
Does a Level II, III, and IV multiverse exist? Tegmark thinks so, but Ellis raises several objections, noted above and elsewhere. The multiverse idea remains quite controversial, but as Tegmark writes,
Even those of my colleagues who dislike the multiverse idea now tend to grudgingly acknowledge that the basic arguments for it are reasonable. The main critique has shifted from “This makes no sense and I hate it” to “I hate it.”
I will not delve into the details of the Level II, III, and IV multiverses here. Read Tegmark’s book as he adroitly takes you through the details of eternal inflation, quantum mechanics and wave functions and the genius and tragic story of Hugh Everett III, the touching tribute to John Archibald Wheeler, and more, leading into a description of each multiverse level in detail.
I’d like to end this article with a quote from Max Tegmark’s Our Mathematical Universe. It’s about when you think you’re the first person ever to discover something, only to find that someone else has made that discovery or had that idea before.
Gradually, I’ve come to totally change my feelings about getting scooped. First of all, the main reason I’m doing science is that I delight in discovering things, and it’s every bit as exciting to rediscover something as it is to be the first to discover it—because at the time of the discovery, you don’t know which is the case. Second, since I believe that there are other more advanced civilizations out there—in parallel universes if not our own—everything we come up with here on our particular planet is a rediscovery, and that fact clearly doesn’t spoil the fun. Third, when you discover something yourself, you probably understand it more deeply and you certainly appreciate it more. From studying history, I’ve also come to realize that a large fraction of all breakthroughs in science were repeatedly rediscovered—when the right questions are floating around and the tools to tackle them are available, many people will naturally find the same answers independently.
References
Ellis, G.F.R., Issues in the Philosophy of Cosmology, Philosophy of Physics (Handbook of the Philosophy of Science), Ed. J. Butterfield and J. Earman (Elsevier, 2006), 1183-1285. [http://arxiv.org/abs/astro-ph/0602280]
Tegmark, Max. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality. New York: Alfred A. Knopf, 2014.
“You passed your exam in many parallel universes—but not in this one.”
Aristotle (384 BC – 322 BC) may have been the first person to write that stars twinkle but planets don’t, though our understanding of twinkling has evolved since he explained that “The planets are near, so that the visual ray reaches them in its full vigour, but when it comes to the fixed stars it is quivering because of the distance and its excessive extension.”
John Stedman (1744-1797), a controversial and complicated figure to be sure, writes the following dialogue between teacher and student in The Study of Astronomy, Adapted to the capacities of youth (1796):
PUPIL. How is the twinkling of the stars in a clear night accounted for?
TUTOR. It arises from the continual agitation of the air or atmosphere through which we view them; the particles of air being always in motion, will cause a twinkling in any distant luminous body, which shines with a strong light.
PUPIL. Then, I suppose, the planets not being luminous, is the reason why they do not twinkle.
TUTOR. Most certainly. The feeble light with which they shine is not sufficient to cause such an appearance.
Still not quite right, but closer to our current understanding. Our modern term for “twinkling” is atmospheric scintillation: rapid changes in a star’s brightness caused by curved wavefronts focusing or defocusing the starlight.
Scintillation is caused by refractive index variations (due to differences in pressure, temperature, and humidity) of “pockets” of air passing in front of the light path between star and observer at a typical height of about 5 miles. These pockets are typically about 3 inches across, so from the naked-eye observer’s standpoint, they subtend an angle of about 2 arcseconds.
The largest angular diameters of stars are on the order of 50 milliarcseconds1 (R Doradus, Betelgeuse, and Mira), and only seventeen stars have an angular diameter larger than 1 milliarcsecond. So, it is easy to see how cells of air on the order of 2 arcseconds across moving across the light path could cause the stars to flicker and flash as seen with the unaided eye.
The five planets that are easily visible to the unaided eye (Mercury, Venus, Mars, Jupiter, and Saturn) have angular diameters that range from 3.5 arcseconds (Mars, at its most distant) up to 66 arcseconds (Venus, at its closest). Since the disk of a planet subtends multiple air cells, the different refractive indexes tend to cancel each other out, and the planet shines with a steady light.
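The small-angle arithmetic behind these comparisons is straightforward; here is a quick sketch using the 3-inch cell at 5 miles from above:

```python
ARCSEC_PER_RAD = 206265.0

def angular_size_arcsec(size_m: float, distance_m: float) -> float:
    # Small-angle approximation: theta ~ size / distance (radians)
    return (size_m / distance_m) * ARCSEC_PER_RAD

# A 3-inch (0.0762 m) air cell at 5 miles (5 * 1609.34 m)
cell = angular_size_arcsec(0.0762, 5 * 1609.34)
print(f"air cell: ~{cell:.1f} arcsec")   # roughly 2 arcsec

# Compare with the objects being observed: the largest stellar disks are
# ~0.05 arcsec (50 mas), far smaller than a single cell, while Mars spans
# at least 3.5 arcsec and Venus up to 66 arcsec.
print(f"Mars spans ~{3.5 / cell:.1f} cells; Venus spans ~{66 / cell:.0f} cells")
```

A star’s light passes through effectively one cell at a time and flickers; a planet’s disk averages over many cells and shines steadily.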
From my own experience watching meteors many nights with my friend Paul Martsching, our reclining lawn chairs just a few feet apart, I have sometimes seen a principal star briefly brighten by two magnitudes or more, with Paul seeing no change in the star’s brightness, and vice versa.
Stedman’s dialogue next turns to the distances to the nearest stars.
PUPIL. Have the stars then light in themselves?
TUTOR. They undoubtedly shine with their own native light, or we should not see even the nearest of them: the distance being so immensely great, that if a cannon-ball were to travel from it to the sun, with the same velocity with which it left the cannon, it would be more than 1 million, 868 thousand years, before it reached it.
He adds a footnote:
The distance of Syrius is 18,717,442,690,526 miles. A cannon-ball going at the rate of 1143 miles an hour, would only reach the sun in about 1,868,307 years, 88 days.
Where Stedman comes up with the velocity of a cannon-ball is unclear, but the Earth’s rotational speed at the equator is 1,040 mph, close to Stedman’s cannon-ball velocity of 1,143 mph. He states the distance to the brightest star Sirius—probably then thought to be the nearest star—is 18,717,442,690,526 miles, or 3.18 light years, well short of the actual value of 8.60 light years. The first measurements of stellar parallax still lay 42 years in the future when Stedman’s book was published.
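Stedman’s arithmetic holds up, as a quick calculation with his stated distance and cannon-ball speed shows:

```python
STEDMAN_MILES = 18_717_442_690_526   # Stedman's stated distance to Sirius
CANNONBALL_MPH = 1143                # his cannon-ball speed
MILES_PER_LY = 5.8786e12             # miles in one light year

# Travel time at constant speed
hours = STEDMAN_MILES / CANNONBALL_MPH
years = hours / (24 * 365.25)
print(f"cannon-ball travel time: {years:,.0f} years")   # ~1.87 million years

# Convert his distance to light years
light_years = STEDMAN_MILES / MILES_PER_LY
print(f"Stedman's distance: {light_years:.2f} light years (actual: 8.60)")
```

The ~1.87-million-year travel time matches his footnote figure of 1,868,307 years to within the rounding of his year length, and the light-year conversion confirms the 3.18 figure quoted above.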
1 1 milliarcsecond (1 mas) = 0.001 arcsecond
References
Aristotle, De Caelo, Book 2, chap. 8, par. 290a, 18
Crumey, A., 2014, MNRAS, 442, 2600
Dravins, D., Lindegren, L., Mezey, E., Young, A. T., 1997a, PASP, 109, 173
Ellison, M. A., & Seddon, H., 1952, MNRAS, 112, 73
Stedman, J., 1796, The Study of Astronomy, Adapted to the capacities of youth
The orbit of a comet can be defined with six numbers, called the orbital elements, and by entering these numbers into your favorite planetarium software, you can view the location of the comet at any given time reasonably near the epoch date. The epoch date is the particular date for which the orbital elements were calculated; they are therefore most accurate around that time.
Different sets of six parameters can be used, but the most common are shown below. Example values are given for Comet Holmes (17P), which exhibited a remarkable outburst in October 2007, now almost 12 years ago.
Perihelion distance, q
This is the center-to-center distance from the comet to the Sun when the comet is at perihelion, its closest point to the Sun. For Comet Holmes, this is 2.05338 AU, well beyond the orbits of both the Earth and Mars.
Orbital eccentricity, e
This is a unitless number that measures how elliptical an orbit is. For a circular orbit, e = 0. A parabolic orbit, e = 1. A hyperbolic orbit, e > 1. Many comets have highly elliptical orbits, often with e > 0.9. Short-period comets, such as Comet Holmes (17P), have more modest eccentricities. Comet Holmes has an orbital eccentricity of 0.432876. This means that at perihelion, Comet Holmes is 43.3% closer to the Sun than its semimajor axis distance, and at aphelion it is 43.3% further from the Sun than its semimajor axis distance.
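From q and e alone, the rest of the orbit’s size follows: since q = a(1 − e), the semimajor axis, the aphelion distance, and (via Kepler’s third law) the orbital period of Comet Holmes can all be computed:

```python
# Comet Holmes (17P) elements quoted in this article
q = 2.05338   # perihelion distance, AU
e = 0.432876  # orbital eccentricity

a = q / (1.0 - e)   # semimajor axis, from q = a(1 - e)
Q = a * (1.0 + e)   # aphelion distance
P = a ** 1.5        # Kepler's third law: P[yr] = a[AU]**1.5 (Sun-dominated orbit)

print(f"semimajor axis a = {a:.4f} AU")   # ~3.62 AU
print(f"aphelion      Q = {Q:.4f} AU")    # ~5.19 AU, near Jupiter's orbit
print(f"period        P = {P:.2f} yr")    # ~6.89 yr
```

The derived 6.89-year period agrees with the value quoted for this comet later in the article.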
Date of perihelion, T
This is the date (converted to a decimal Julian date) on which the comet reached perihelion, or will next reach perihelion. For example, Comet Holmes reached perihelion on 2007 May 5.0284.
Inclination to the ecliptic plane, i
This is the angle made by the intersection of the plane of the comet’s orbit with the ecliptic, the plane of the Earth’s orbit. Comet Holmes has an inclination angle of 19.1143°.
Longitude of the ascending node, Ω
The intersection between the comet’s orbital plane and the Earth’s orbital plane forms a line, called the line of nodes. The places where this line intersects the comet’s orbit forms two points. One point defines the location where the comet crosses the ecliptic plane heading from south to north. This is called the ascending node. The other point defines the location where the comet crosses the ecliptic plane heading from north to south. This is called the descending node. 0° longitude is arbitrarily defined to be the direction of the vernal equinox, the point in the sky where the Sun in its apparent path relative to the background stars crosses the celestial equator heading north. The longitude of the ascending node (capital Omega, Ω) is the angle, measured eastward (in the direction of the Earth’s orbital motion) from the vernal equinox to the ascending node of the comet’s orbit. For Comet Holmes, that angle is 326.8532°.
Argument of perihelion, ω
The angle along the comet’s orbit in the direction of the comet’s motion between its perihelion point and its ascending node (relative to the ecliptic plane) is called the argument of perihelion (small omega, ω). For Comet Holmes, this angle is 24.352°.
If all the mass of the Sun and the comet were concentrated at a geometric point, and if they were the only two objects in the universe, these six orbital elements would be fixed for all time. But these two objects have physical size, and are affected by the gravitational pull of other objects in our solar system and beyond. Moreover, nongravitational forces can act on the comet’s nucleus, such as jets of material spewing out into space, exerting a tiny but non-negligible thrust on the comet, thus altering its orbit. Because of these effects, in practice it is a good idea to define a set of osculating orbital elements which will give the best positions for the comet around a particular date. These osculating orbital elements change gradually with time (due to gravitational perturbations and non-gravitational forces acting on the comet) and give the best approximation to the orbit at a given point in time. The further one strays from the epoch date for the osculating elements, the less accurate the predicted position of the comet will be.
For example, the IAU Minor Planet Center gives a set of orbital elements for Comet Holmes that has a more recent epoch date than the one given by the JPL Small-Body Database Browser. The MPC gives an epoch date of 2015 Jun 27.0, reasonably near the date of the most recent perihelion passage of this P = 6.89y comet (2014 Mar 27.5736). JPL, on the other hand, provides a default epoch date of 2010 Jan 17.0, nearer the date of the 2007 May 5.0284 perihelion and the spectacular October 2007 apparition. For the most accurate current position of Comet Holmes in your planetarium software, you’ll probably want to use the MPC orbital elements, since they are for an epoch nearest to the date when you’ll be making your observations.
The spectral type classification scheme for stars is, among other things, a temperature sequence. A helpful mnemonic for remembering the sequence is Oh, Be A Fine Girl (Guy) Kiss Me Like This, Yes! The O stars have the highest surface temperatures, up to 56,000 K (100,000° F), while the Y infrared dwarfs (brown dwarfs) have surface temperatures as cool as 250 K (-10° F).
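The run of temperatures along the sequence can be tabulated. The ranges below are approximate, representative effective temperatures (my figures, for illustration; class boundaries vary somewhat between sources):

```python
# Approximate effective-temperature ranges (K) for each spectral class;
# boundary values differ somewhat from source to source.
SPECTRAL_TEMPS_K = {
    "O": (30000, 56000),
    "B": (10000, 30000),
    "A": (7500, 10000),
    "F": (6000, 7500),
    "G": (5200, 6000),
    "K": (3700, 5200),
    "M": (2400, 3700),
    "L": (1300, 2400),
    "T": (550, 1300),
    "Y": (250, 550),
}

def kelvin_to_fahrenheit(t_k: float) -> float:
    return t_k * 9.0 / 5.0 - 459.67

for cls, (lo, hi) in SPECTRAL_TEMPS_K.items():
    print(f"{cls}: {lo:>6,}-{hi:>6,} K  (upper end ~{kelvin_to_fahrenheit(hi):,.0f} F)")
```

The conversion reproduces the figures above: 56,000 K is about 100,000° F, and 250 K is about -10° F.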
Here are the brightest representatives of each of these spectral classes readily visible from the northern hemisphere. Apparent visual magnitude (V-band) is given unless otherwise noted.
9.1.6 The metaphysical options

…there appear to be basically six approaches to the issue of ultimate causation: namely Random Chance, Necessity, High Probability, Universality, Cosmological Natural Selection, and Design. We briefly consider these in turn.

Option 1: Random Chance, signifying nothing. The initial conditions in the Universe just happened, and led to things being the way they are now, by pure chance. Probability does not apply. There is no further level of explanation that applies; searching for ‘ultimate causes’ has no meaning. This is certainly logically possible, but not satisfying as an explanation, as we obtain no unification of ideas or predictive power from this approach. Nevertheless some implicitly or explicitly hold this view.

Option 2: Necessity. Things have to be the way they are; there is no other option. The features we see and the laws underlying them are demanded by the unity of the Universe: coherence and consistency require that things must be the way they are; the apparent alternatives are illusory. Only one kind of physics is self-consistent: all logically possible universes must obey the same physics. To really prove this would be a very powerful argument, potentially leading to a self-consistent and complete scientific view. But we can imagine alternative universes! —why are they excluded? Furthermore we run here into the problem that we have not succeeded in devising a fully self-consistent view of physics: neither the foundations of quantum physics nor of mathematics are on a really solid consistent basis. Until these issues are resolved, this line cannot be pursued to a successful conclusion.

Option 3: High probability. Although the structure of the Universe appears very improbable, for physical reasons it is in fact highly probable. These arguments are only partially successful, even in their own terms.
They run into problems if we consider the full set of possibilities: discussions proposing this kind of view actually implicitly or explicitly restrict the considered possibilities a priori, for otherwise it is not very likely the Universe will be as we see it. Besides, we do not have a proper measure to apply to the set of initial conditions, enabling us to assess these probabilities. Furthermore, application of probability arguments to the Universe itself is dubious, because the Universe is unique. Despite these problems, this approach has considerable support in the scientific community, for example it underlies the chaotic inflationary proposal. It attains its greatest power in the context of the assumption of universality:

Option 4: Universality. This is the stand that “All that is possible, happens”: an ensemble of universes or of disjoint expanding universe domains is realized in reality, in which all possibilities occur. In its full version, the anthropic principle is realized in both its strong form (if all that is possible happens, then life must happen) and its weak form (life will only occur in some of the possibilities that are realized; these are picked out from the others by the WAP, viewed as a selection principle). There are four ways this has been pursued.

1: Spatial variation. The variety of expanding universe domains is realised in space through random initial conditions, as in chaotic inflation. While this provides a legitimate framework for application of probability, from the viewpoint of ultimate explanation it does not really succeed, for there is still then one unique Universe whose (random) initial conditions need explanation. Initial conditions might be globally statistically homogeneous, but also there could be global gradients in some physical quantities so that the Universe is not statistically homogeneous; and these conditions might be restricted to some domain that does not allow life.
It is a partial implementation of the ensemble idea; insofar as it works, it is really a variant of the “high probability” idea mentioned above. If it was the more or less unique outcome of proven physics, then that would provide a good justification; but the physics underlying such proposals is not even uniquely defined, much less tested. Simply claiming a particular scalar field with some specific stated potential exists does not prove that it exists!

2: Time variation. The variety of expanding universe domains could be realised across time, in a universe that has many expansion phases (a Phoenix universe), whether this occurs globally or locally. Much the same comments apply as in the previous case.

3: Quantum Mechanical. It could occur through the existence of the Everett-Wheeler “many worlds” of quantum cosmology, where all possibilities occur through quantum branching. This is one of the few genuine alternatives proposed to the Copenhagen interpretation of quantum mechanics, which leads to the necessity of an observer, and so potentially to the Strong Anthropic interpretation considered above. The many-worlds proposal is controversial: it occurs in a variety of competing formulations, none of which has attained universal acceptance. The proposal does not provide a causal explanation for the particular events that actually occur: if we hold to it, we then have to still explain the properties of the particular history we observe (for example, why does our macroscopic universe have high symmetries when almost all the branchings will not?). And above all it is apparently untestable: there is no way to experimentally prove the existence of all those other branching universes, precisely because the theory gives the same observable predictions as the standard theory.

4: Completely disconnected. They could occur as completely disconnected universes: there really is an ensemble of universes in which all possibilities occur, without any connection with each other.
A problem that arises then is, What determines what is possible? For example, what about the laws of logic themselves? Are they inviolable in considering all possibilities? We cannot answer, for we have no access to this multitude of postulated worlds. We explore this further below. In all these cases, major problems arise in relating this view to testability and so we have to query the meaningfulness of the proposals as scientific explanations. They all contradict Ockham’s razor: we “solve” one issue at the expense of envisaging an enormously more complex existential reality. Furthermore, they do not solve the ultimate question: Why does this ensemble of universes exist? One might suggest that ultimate explanation of such a reality is even more problematic than in the case of single universe. Nevertheless this approach has an internal logic of its own which some find compelling.

Option 5: Cosmological Natural Selection. If a process of re-expansion after collapse to a black hole were properly established, it opens the way to the concept not merely of evolution of the Universe in the sense that its structure and contents develop in time, but in the sense that the Darwinian selection of expanding universe regions could take place, as proposed by Smolin. The idea is that there could be collapse to black holes followed by re-expansion, but with an alteration of the constants of physics through each transition, so that each time there is an expansion phase, the action of physics is a bit different. The crucial point then is that some values of the constants will lead to production of more black holes, while some will result in less. This allows for evolutionary selection favouring the expanding universe regions that produce more black holes (because of the favourable values of physical constants operative in those regions), for they will have more “daughter” expanding universe regions.
Thus one can envisage natural selection favouring those physical constants that produce the maximum number of black holes. The problem here is twofold. First, the supposed ‘bounce’ mechanism has never been fully explicated. Second, it is not clear—assuming this proposed process can be explicated in detail—that the physics which maximizes black hole production is necessarily also the physics that favours the existence of life. If this argument could be made water-tight, this would become probably the most powerful of the multiverse proposals.

Option 6: Purpose or Design. The symmetries and delicate balances we observe require an extraordinary coherence of conditions and cooperation of causes and effects, suggesting that in some sense they have been purposefully designed. That is, they give evidence of intention, both in the setting of the laws of physics and in the choice of boundary conditions for the Universe. This is the sort of view that underlies Judaeo-Christian theology. Unlike all the others, it introduces an element of meaning, of signifying something. In all the other options, life exists by accident; as a chance by-product of processes blindly at work. The prime disadvantage of this view, from the scientific viewpoint, is its lack of testable scientific consequences (“Because God exists, I predict that the density of matter in the Universe should be x and the fine structure constant should be y”). This is one of the reasons scientists generally try to avoid this approach. There will be some who will reject this possibility out of hand, as meaningless or as unworthy of consideration. However it is certainly logically possible.
The modern version, consistent with all the scientific discussion preceding, would see some kind of purpose underlying the existence and specific nature of the laws of physics and the boundary conditions for the Universe, in such a way that life (and eventually humanity) would then come into existence through the operation of those laws, then leading to the development of specific classes of animals through the process of evolution as evidenced in the historical record. Given an acceptance of evolutionary development, it is precisely in the choice and implementation of particular physical laws and initial conditions, allowing such development, that the profound creative activity takes place; and this is where one might conceive of design taking place. [This is not the same as the view proposed by the ‘Intelligent Design’ movement. It does not propose that God tweaks the outcome of evolutionary processes.] However from the viewpoint of the physical sciences per se, there is no reason to accept this argument. Indeed from this viewpoint there is really no difference between design and chance, for they have not been shown to lead to different physical predictions.
A few comments:
1: Random chance. At first, this strikes one as intellectual laziness, but perhaps it is more a reflection of our own intellectual weakness. More on that in a moment.
2: Necessity. Our intellectual journey of discovery and greater understanding must continue, and it may eventually lead us to this conclusion. But not now.
3: High probability. How can we talk about probability when n = 1?
4: Universality. We can hypothesize the existence of other universes, yes, but if we have no way to observe or interact with them, how can we call this endeavor science? Furthermore, explaining the existence of multiple universes seems even more problematic than explaining the existence of a single universe—ours.
5: Cosmological Natural Selection. We do not know that black holes can create other universes, or that universes that contain life are more likely to have laws of physics that allow an abundance of black holes.
6: Purpose or Design. The presupposition of design is not evidence of design. It is possible that scientific evidence of a creator or designer might be found in nature—such as an encoded message evincing purposeful intelligence in DNA or the cosmic microwave background—but to date no such evidence has been found. Even if evidence of a creator is forthcoming, how do we explain the existence of the creator?
I would now like to suggest a seventh option (possibly a variant of Ellis’s Option 1, Random Chance, or Option 2, Necessity).
7: Indeterminate Due to Insufficient Intelligence. It is at least possible that there are aspects of reality and our origins that may be beyond what humans are currently capable of understanding. For some understanding of how this might be possible, we need look no further than the primates we are most closely related to, and other mammals. Is a chimpanzee self-aware? Can non-humans experience puzzlement? Are animals aware of their own mortality? Even if the answer to all these questions is “yes”1, there are clearly many things humans can do that no other animal is capable of. Why stop at humans? Isn’t it reasonable to assume that there is much that humans are cognitively incapable of?
Why do we humans develop remarkable technologies and yet fail dismally to eradicate poverty, war, and other violence? Why does the world have so many religions if they are not all imperfect and very human attempts to imbue our lives with meaning?
What is consciousness? Will we ever understand it? Can we extrapolate from our current intellectual capabilities to a complete understanding of our origins and the origins of the universe, or is something more needed that we currently cannot even envision?
“Sometimes attaining the deepest familiarity with a question is our best substitute for actually having the answer.” —Brian Greene, The Elegant Universe
“To ask what happens before the Big Bang is a bit like asking what happens on the surface of the earth one mile north of the North Pole. It’s a meaningless question.” —Stephen Hawking, Interview with Timothy Ferris, Pasadena, 1985
1 For more on the topic of the emotional and cognitive similarities between animals and humans, see “Mama’s Last Hug: Animal Emotions and What They Tell Us about Ourselves” by primatologist Frans de Waal, W. W. Norton & Company (2019). https://www.amazon.com/dp/B07DP6MM92.
References
G.F.R. Ellis, Issues in the Philosophy of Cosmology, Philosophy of Physics (Handbook of the Philosophy of Science), Ed. J. Butterfield and J. Earman (Elsevier, 2006), 1183-1285. [http://arxiv.org/abs/astro-ph/0602280]
If you’re an astronomy teacher who likes to put a trick question on an open-book quiz or test once in a while to encourage your students to think more deeply, here’s a good one for you:
On average, what planet is closest to the Earth?
The correct answer is Mercury.
Huh? Venus comes closest to the Earth, doesn’t it? Yes, but there is a big difference between minimum distance and average distance. Let’s do some quick calculations to help us understand minimum distance first, and then we’ll discuss the more involved determination of average distance.
Here’s some easily-found data on the terrestrial planets:

Planet    a (AU)    e        q (AU)    Q (AU)
Mercury   0.387     0.2056
Venus     0.723     0.0068
Earth     1.000     0.0167
Mars      1.524     0.0934
I’ve intentionally left the last two columns of the table empty. We’ll come back to those in a moment.
a is the semi-major axis of each planet’s orbit around the Sun, in astronomical units (AU). It is often taken that this is the planet’s average distance from the Sun, but that is strictly true only for a circular orbit.1
e is the orbital eccentricity, which is a unitless number. The closer the value is to 0.0, the more circular the orbit; the closer the value is to 1.0, the more elliptical the orbit, with 1.0 being a parabola.
The two empty columns are for q, the perihelion distance, and Q, the aphelion distance. Perihelion occurs when the planet is closest to the Sun. Aphelion occurs when the planet is farthest from the Sun. How do we calculate the perihelion and aphelion distances? It’s easy.
Perihelion: q = a (1 – e)
Aphelion: Q = a (1 + e)
Now, let’s fill in the rest of our table.
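If you’d rather not do the arithmetic by hand, a short Python sketch (using the standard published values of a and e) generates the completed table:

```python
# Semi-major axis a (AU) and orbital eccentricity e for each terrestrial planet
planets = {
    "Mercury": (0.387, 0.2056),
    "Venus":   (0.723, 0.0068),
    "Earth":   (1.000, 0.0167),
    "Mars":    (1.524, 0.0934),
}

for name, (a, e) in planets.items():
    q = a * (1 - e)  # perihelion distance (AU)
    Q = a * (1 + e)  # aphelion distance (AU)
    print(f"{name:8s} q = {q:.3f} AU   Q = {Q:.3f} AU")
```

Running this reproduces the values used below: Earth’s perihelion and aphelion come out to 0.983 and 1.017 AU, Venus’s aphelion to 0.728 AU, and Mars’s perihelion to 1.382 AU.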
Ignoring, for a moment, each planet’s orbital eccentricity, we can calculate the “average” closest approach distance between any two planets by simply taking the difference in their semi-major axes. For Venus, it is 1.000 – 0.723 = 0.277 AU, and for Mars, it is 1.524 – 1.000 = 0.524 AU. We see that Venus comes closest to the Earth.
But, sometimes, Venus and Mars come even closer to the Earth than 0.277 AU and 0.524 AU, respectively. The smallest possible distance between Venus and the Earth at inferior conjunction should occur when Venus is at aphelion at the same time as Earth is at perihelion: 0.983 – 0.728 = 0.255 AU. The smallest possible distance between Earth and Mars at opposition should occur when Mars is at perihelion and Earth is at aphelion: 1.382 – 1.017 = 0.365 AU. Even at its closest, Mars never comes as near to the Earth as Venus does at every close approach.
The above assumes that all the terrestrial planets orbit in the same plane, which they do not. Mercury has an orbital inclination relative to the ecliptic of 7.004˚, Venus 3.395˚, Earth 0.000˚ (by definition), and Mars 1.848˚. Calculating the distances in 3D would change these values, but only slightly.
Now let’s switch gears and find the average distance over time between Earth and the other terrestrial planets—a very different question. We want to pick a time period to average over that is long enough that each planet spends as much time on the opposite side of the Sun from us as it does on our side of the Sun. The time interval between successive conjunctions (in the case of Mercury and Venus) or oppositions (Mars) is called the synodic period and is calculated as follows:
P1 = 87.9691d = orbital period of Mercury
P2 = 224.701d = orbital period of Venus
P3 = 365.256d = orbital period of Earth
P4 = 686.971d = orbital period of Mars
S1 = 1 / (1/P1 – 1/P3) = synodic period of Mercury = 115.877d
S2 = 1 / (1/P2 – 1/P3) = synodic period of Venus = 583.924d
S4 = 1 / (1/P3 – 1/P4) = synodic period of Mars = 779.946d
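In code, the synodic period formula is a one-liner. Here’s a Python sketch (the function and variable names are mine, for illustration):

```python
# Sidereal orbital periods in days
P_MERCURY, P_VENUS, P_EARTH, P_MARS = 87.9691, 224.701, 365.256, 686.971

def synodic_period(p1, p2):
    """Synodic period (days) of two planets with sidereal periods p1 and p2."""
    return 1.0 / abs(1.0 / p1 - 1.0 / p2)

print(f"Mercury: {synodic_period(P_MERCURY, P_EARTH):.3f} d")  # 115.877 d
print(f"Venus:   {synodic_period(P_VENUS, P_EARTH):.3f} d")    # 583.924 d
print(f"Mars:    {synodic_period(P_MARS, P_EARTH):.3f} d")     # 779.946 d
```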
I wrote a quick little SAS program to numerically determine that an interval of 9,387 days (25.7 years) would be a good choice, because
9387 / 115.877 = 81.0083, for Mercury
9387 / 583.924 = 16.0757, for Venus
9387 / 779.946 = 12.0354, for Mars
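The original SAS program isn’t reproduced here, but the same check is easy in Python (a rough sketch; the function name is mine):

```python
# Synodic periods in days, from the formulas above
SYNODIC = {"Mercury": 115.877, "Venus": 583.924, "Mars": 779.946}

def max_offset(days):
    """Worst-case fractional distance of an interval from a whole number of synodic periods."""
    return max(abs(days / s - round(days / s)) for s in SYNODIC.values())

# 9,387 days is within 0.08 synodic period of a whole number for all three planets
print(f"{max_offset(9387):.4f}")  # 0.0757 (the worst case is Venus)
```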
The U.S. Naval Observatory provides a free computer program called the Multiyear Interactive Computer Almanac (MICA), so I was able to quickly generate a file for each planet (Mercury, Venus, and Mars) giving the Earth-to-planet distance for 9,387 days, beginning 0h UT 1 May 2019 and running through 0h UT 10 Jan 2045. Here are the results:
As you can see, averaged over time, Mercury is the nearest planet to the Earth!
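You can sanity-check this result without an ephemeris. Assuming circular, coplanar orbits, averaging the Earth-to-planet distance over many random relative orbital phases gives a rough estimate (a Monte Carlo sketch; the function name is mine):

```python
import math
import random

random.seed(42)  # reproducible results

def mean_distance(a, samples=200_000):
    """Mean Earth-planet distance (AU) for circular coplanar orbits of radii 1 and a."""
    total = 0.0
    for _ in range(samples):
        theta = random.uniform(0.0, 2.0 * math.pi)  # relative orbital phase
        total += math.sqrt(1.0 + a * a - 2.0 * a * math.cos(theta))
    return total / samples

for name, a in [("Mercury", 0.387), ("Venus", 0.723), ("Mars", 1.524)]:
    print(f"{name:8s} {mean_distance(a):.2f} AU")
```

Even this toy model puts Mercury nearest on average, at about 1.04 AU, versus roughly 1.13 AU for Venus and 1.69 AU for Mars.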
For a more mathematical treatment, see the article in the 12 Mar 2019 issue of Physics Today.