Aristotle (384 BC – 322 BC) may have been the first person to write that stars twinkle but planets don’t, though our understanding of twinkling has evolved since he explained that “The planets are near, so that the visual ray reaches them in its full vigour, but when it comes to the fixed stars it is quivering because of the distance and its excessive extension.”
John Stedman (1744-1797), a controversial and complicated figure to be sure, writes the following dialogue between teacher and student in The Study of Astronomy, Adapted to the capacities of youth (1796):
PUPIL. How is the twinkling of the stars in a clear night accounted for?
TUTOR. It arises from the continual agitation of the air or atmosphere through which we view them; the particles of air being always in motion, will cause a twinkling in any distant luminous body, which shines with a strong light.
PUPIL. Then, I suppose, the planets not being luminous, is the reason why they do not twinkle.
TUTOR. Most certainly. The feeble light with which they shine is not sufficient to cause such an appearance.
Still not quite right, but closer to our current understanding. Our modern term for “twinkling” is atmospheric scintillation: rapid changes in a star’s apparent brightness caused by curved wavefronts focusing or defocusing starlight.
Scintillation is caused by refractive index variations (due to differences in pressure, temperature, and humidity) of “pockets” of air passing in front of the light path between star and observer at a typical height of about 5 miles. These pockets are typically about 3 inches across, so from the naked eye observer’s standpoint, they subtend an angle of about 2 arcseconds.
The largest angular diameters of stars are on the order of 50 milliarcseconds1 (R Doradus, Betelgeuse, and Mira), and only seventeen stars have an angular diameter larger than 1 milliarcsecond. So, it is easy to see how cells of air on the order of 2 arcseconds across moving across the light path can cause the stars to flicker and flash as seen with the unaided eye.
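As a quick sanity check on these angular sizes, here is a short Python sketch using the small-angle approximation (the 3-inch and 5-mile figures are the approximate values quoted above):

```python
import math

# Angle subtended by a turbulent air "pocket" ~3 inches across
# at a typical height of ~5 miles, via the small-angle approximation.
pocket_size_m = 3 * 0.0254         # 3 inches in meters
height_m = 5 * 1609.344            # 5 miles in meters

angle_rad = pocket_size_m / height_m
angle_arcsec = angle_rad * 206265  # radians to arcseconds

print(f"Air pocket subtends ~{angle_arcsec:.1f} arcseconds")

# Compare with the largest stellar angular diameters (~50 mas = 0.05"):
star_arcsec = 0.050
print(f"Pocket is ~{angle_arcsec / star_arcsec:.0f}x larger than the largest star")
```

The pocket comes out to about 2 arcseconds, roughly forty times the angular diameter of even the largest-appearing stars, which is why the whole stellar image flickers at once.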
The five planets that are easily visible to the unaided eye (Mercury, Venus, Mars, Jupiter, and Saturn) have angular diameters that range from 3.5 arcseconds (Mars, at its most distant) up to 66 arcseconds (Venus, at its closest). Since the disk of a planet subtends multiple air cells, the different refractive indexes tend to cancel each other out, and the planet shines with a steady light.
From my own experience watching meteors many nights with my friend Paul Martsching, our reclining lawn chairs just a few feet apart, I have sometimes seen a principal star briefly brighten by two magnitudes or more, with Paul seeing no change in the star’s brightness, and vice versa.
Stedman’s dialogue next turns to the distances to the nearest stars.
PUPIL. Have the stars then light in themselves?
TUTOR. They undoubtedly shine with their own native light, or we should not see even the nearest of them: the distance being so immensely great, that if a cannon-ball were to travel from it to the sun, with the same velocity with which it left the cannon, it would be more than 1 million, 868 thousand years, before it reached it.
He adds a footnote:
The distance of Syrius is 18,717,442,690,526 miles. A cannon-ball going at the rate of 1143 miles an hour, would only reach the sun in about 1,868,307 years, 88 days.
Where Stedman came up with the velocity of a cannon-ball is unclear, but the Earth’s rotational speed at the equator is 1,040 mph, close to Stedman’s cannon-ball velocity of 1,143 mph. He states the distance to the brightest star Sirius—probably then thought to be the nearest star—is 18,717,442,690,526 miles, or 3.18 light years, well short of the actual value of 8.60 light years. The first measurements of stellar parallax still lay 42 years in the future when Stedman’s book was published.
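Stedman’s travel-time arithmetic, and the light-year conversion above, are easy to verify. A Python sketch (the miles-per-light-year figure is a modern value, not Stedman’s):

```python
distance_miles = 18_717_442_690_526  # Stedman's stated distance to Sirius
speed_mph = 1143                     # Stedman's cannon-ball velocity

# Travel time in years (using a modern 365.25-day year)
travel_years = distance_miles / speed_mph / 24 / 365.25
print(f"Travel time: {travel_years:,.0f} years")  # close to Stedman's 1,868,307

# Convert the stated distance to light years
miles_per_light_year = 5.8786e12  # modern value
light_years = distance_miles / miles_per_light_year
print(f"Distance: {light_years:.2f} light years")
```

The result lands within a few hundred years of Stedman’s figure of 1,868,307 years, so his arithmetic, at least, was sound; it was his distance to Sirius that was too small.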
1 1 milliarcsecond (1 mas) = 0.001 arcsecond
References
Aristotle, De Caelo, Book 2, chap. 8, par. 290a, 18
Crumey, A., 2014, MNRAS, 442, 2600
Dravins, D., Lindegren, L., Mezey, E., Young, A. T., 1997a, PASP, 109, 173
Ellison, M. A., & Seddon, H., 1952, MNRAS, 112, 73
Stedman, J., 1796, The Study of Astronomy, Adapted to the capacities of youth
I’ve been writing programs in SAS since 1985. Back then, it was SAS 5.15 on an IBM mainframe computer (remember JCL, TSO, ISPF?) at the Iowa Department of Transportation. Today, it is SAS 9.4, under Windows 10 at home and Linux at work.
I love this language. It is elegant. It is beautiful. I’ve become an expert. I’ve never had a computational problem to solve, a data manipulation to do, a process to automate, or a report to write that I couldn’t do with SAS.
New features are being added all the time, and I am constantly learning and improving to keep up with it all. And the legacy code still runs just fine. Peace of mind. The company behind this success story is SAS Institute, based in Cary, North Carolina. SAS Institute has the best technical support of any company I have ever dealt with, and that is as true today as it was in 1985, and all the years in between. Again, peace of mind.
I’ve heard from multiple sources that SAS Institute is a fabulous place to work, and it shows in their software, their customer service, and the passion their employees have for making SAS software the best it can be—and helping us solve just about any analytics problem. Inspiring. And you won’t find a more passionate user community anywhere. At least not with any company that has been around as long as SAS has (since 1976).
SAS Institute is the world’s largest private software company, and being privately owned has much to do with their success and consistency, I believe. No greedy shareholders to please. SAS Institute need answer only to their customers, and to their employees. That’s the way it should be.
Computer languages have come in and out of vogue over the years: FORTRAN, PL/I, Pascal, C, C++, Perl, Java, R, Python, etc., and with each new language that comes along, SAS absorbs the best elements and moves forward to the next challenge.
Python is currently very popular, as is open source in general, and I have no doubt that SAS will incorporate the most valuable functionality of Python and open source (already in progress) and keep tooling along like a well-oiled machine. In another ten years, SAS will be incorporating another new language that will have supplanted Python as the programming language du jour.
You’ve got to admire a company like that. In an era when everyone wants — even expects — “stuff for free”, the old adage “you get what you pay for” still applies. Yes, SAS is expensive—and I’m hoping their mature “core” product will come down in price—but I can’t complain too loudly because quality, longevity, and dependability costs money. It always has.
I’ve noticed that our younger open source programmers use a lot of different tools to do their work. One big advantage of SAS is that I can do most of my work using one tool – SAS. SAS provides a beautifully integrated and far-reaching data analytics environment.
The orbit of a comet can be defined with six numbers, called the orbital elements. By entering these numbers into your favorite planetarium software, you can view the location of the comet at any time reasonably near the epoch date. The epoch date is the particular date for which the orbital elements were calculated, so they are most accurate around that time.
Different sets of six parameters can be used, but the most common are shown below. Example values are given for Comet Holmes (17P), which exhibited a remarkable outburst in October 2007, now almost 12 years ago.
Perihelion distance, q
This is the center-to-center distance from the comet to the Sun when the comet is at perihelion, its closest point to the Sun. For Comet Holmes, this is 2.05338 AU, well beyond the orbits of both the Earth and Mars.
Orbital eccentricity, e
This is a unitless number that measures the amount of ellipticity an orbit has. For a circular orbit, e = 0. For a parabolic orbit, e = 1. For a hyperbolic orbit, e > 1. Many comets have highly elliptical orbits, often with e > 0.9. Short-period comets, such as Comet Holmes (17P), have more modest eccentricities. Comet Holmes has an orbital eccentricity of 0.432876. This means that at perihelion, Comet Holmes is 43.3% closer to the Sun than its mean distance (the semi-major axis), and at aphelion it is 43.3% farther from the Sun than its mean distance.
Date of perihelion, T
This is the date (converted to a decimal Julian date) on which the comet reached perihelion, or will next reach perihelion. For example, Comet Holmes reached perihelion on 2007 May 5.0284.
Inclination to the ecliptic plane, i
This is the angle made by the intersection of the plane of the comet’s orbit with the ecliptic, the plane of the Earth’s orbit. Comet Holmes has an inclination angle of 19.1143°.
Longitude of the ascending node, Ω
The intersection between the comet’s orbital plane and the Earth’s orbital plane forms a line, called the line of nodes. The places where this line intersects the comet’s orbit forms two points. One point defines the location where the comet crosses the ecliptic plane heading from south to north. This is called the ascending node. The other point defines the location where the comet crosses the ecliptic plane heading from north to south. This is called the descending node. 0° longitude is arbitrarily defined to be the direction of the vernal equinox, the point in the sky where the Sun in its apparent path relative to the background stars crosses the celestial equator heading north. The longitude of the ascending node (capital Omega, Ω) is the angle, measured eastward (in the direction of the Earth’s orbital motion) from the vernal equinox to the ascending node of the comet’s orbit. For Comet Holmes, that angle is 326.8532°.
Argument of perihelion, ω
The angle along the comet’s orbit in the direction of the comet’s motion between its perihelion point and its ascending node (relative to the ecliptic plane) is called the argument of perihelion (small omega, ω). For Comet Holmes, this angle is 24.352°.
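A few quantities mentioned elsewhere in this article can be derived from just q and e. Here is a Python sketch, using Kepler’s third law in its simple form P² = a³ (valid for orbits around the Sun, with a in AU and P in years):

```python
# Comet Holmes (17P) elements quoted in the text
q = 2.05338   # perihelion distance, AU
e = 0.432876  # orbital eccentricity

a = q / (1 - e)  # semi-major axis, from q = a(1 - e)
Q = a * (1 + e)  # aphelion distance
P = a ** 1.5     # orbital period in years (Kepler's third law)

print(f"Semi-major axis a = {a:.3f} AU")
print(f"Aphelion distance Q = {Q:.3f} AU")
print(f"Orbital period P = {P:.2f} years")
```

The derived period of about 6.89 years matches the value quoted below for the interval between perihelion passages.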
If all the mass of the Sun and the comet were concentrated at geometric points, and if they were the only two objects in the universe, these six orbital elements would be fixed for all time. But these two objects have physical size, and they are affected by the gravitational pull of other objects in our solar system and beyond. Moreover, nongravitational forces can act on the comet’s nucleus, such as jets of material spewing out into space, exerting a tiny but non-negligible thrust on the comet, thus altering its orbit. Because of these effects, in practice it is a good idea to define a set of osculating orbital elements that give the best approximation to the orbit at a given point in time. The further one strays from the epoch date of the osculating elements, the less accurate the predicted position of the comet will be.
For example, the IAU Minor Planet Center gives a set of orbital elements for Comet Holmes that has a more recent epoch date than the one given by the JPL Small-Body Database Browser. The MPC gives an epoch date of 2015 Jun 27.0, reasonably near the date of the most recent perihelion passage of this P = 6.89y comet (2014 Mar 27.5736). JPL, on the other hand, provides a default epoch date of 2010 Jan 17.0, nearer the date of the 2007 May 5.0284 perihelion and the spectacular October 2007 apparition. For the most accurate current position of Comet Holmes in your planetarium software, you’ll probably want to use the MPC orbital elements, since they are for an epoch nearest to the date when you’ll be making your observations.
The spectral type classification scheme for stars is, among other things, a temperature sequence. A helpful mnemonic for remembering the sequence is Oh, Be A Fine Girl (Guy) Kiss Me Like This, Yes! The O stars have the highest surface temperatures, up to 56,000 K (100,000° F), while the Y infrared dwarfs (brown dwarfs) have surface temperatures as cool as 250 K (-10° F).
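Those temperature conversions are easy to check with a quick Python sketch:

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from kelvins to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

print(f"O star:  {kelvin_to_fahrenheit(56000):,.0f} F")  # about 100,000 F
print(f"Y dwarf: {kelvin_to_fahrenheit(250):,.0f} F")    # about -10 F
```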
Here are the brightest representatives of each of these spectral classes readily visible from the northern hemisphere. Apparent visual magnitude (V-band) is given unless otherwise noted.
9.1.6 The metaphysical options

…there appear to be basically six approaches to the issue of ultimate causation: namely Random Chance, Necessity, High Probability, Universality, Cosmological Natural Selection, and Design. We briefly consider these in turn.

Option 1: Random Chance, signifying nothing. The initial conditions in the Universe just happened, and led to things being the way they are now, by pure chance. Probability does not apply. There is no further level of explanation that applies; searching for ‘ultimate causes’ has no meaning. This is certainly logically possible, but not satisfying as an explanation, as we obtain no unification of ideas or predictive power from this approach. Nevertheless some implicitly or explicitly hold this view.

Option 2: Necessity. Things have to be the way they are; there is no other option. The features we see and the laws underlying them are demanded by the unity of the Universe: coherence and consistency require that things must be the way they are; the apparent alternatives are illusory. Only one kind of physics is self-consistent: all logically possible universes must obey the same physics. To really prove this would be a very powerful argument, potentially leading to a self-consistent and complete scientific view. But we can imagine alternative universes! —why are they excluded? Furthermore we run here into the problem that we have not succeeded in devising a fully self-consistent view of physics: neither the foundations of quantum physics nor of mathematics are on a really solid consistent basis. Until these issues are resolved, this line cannot be pursued to a successful conclusion.

Option 3: High probability. Although the structure of the Universe appears very improbable, for physical reasons it is in fact highly probable. These arguments are only partially successful, even in their own terms.
They run into problems if we consider the full set of possibilities: discussions proposing this kind of view actually implicitly or explicitly restrict the considered possibilities a priori, for otherwise it is not very likely the Universe will be as we see it. Besides, we do not have a proper measure to apply to the set of initial conditions, enabling us to assess these probabilities. Furthermore, application of probability arguments to the Universe itself is dubious, because the Universe is unique. Despite these problems, this approach has considerable support in the scientific community, for example it underlies the chaotic inflationary proposal. It attains its greatest power in the context of the assumption of universality:

Option 4: Universality. This is the stand that “All that is possible, happens”: an ensemble of universes or of disjoint expanding universe domains is realized in reality, in which all possibilities occur. In its full version, the anthropic principle is realized in both its strong form (if all that is possible happens, then life must happen) and its weak form (life will only occur in some of the possibilities that are realized; these are picked out from the others by the WAP, viewed as a selection principle). There are four ways this has been pursued.

1: Spatial variation. The variety of expanding universe domains is realised in space through random initial conditions, as in chaotic inflation. While this provides a legitimate framework for application of probability, from the viewpoint of ultimate explanation it does not really succeed, for there is still then one unique Universe whose (random) initial conditions need explanation. Initial conditions might be globally statistically homogeneous, but also there could be global gradients in some physical quantities so that the Universe is not statistically homogeneous; and these conditions might be restricted to some domain that does not allow life.
It is a partial implementation of the ensemble idea; insofar as it works, it is really a variant of the “high probability” idea mentioned above. If it was the more or less unique outcome of proven physics, then that would provide a good justification; but the physics underlying such proposals is not even uniquely defined, much less tested. Simply claiming a particular scalar field with some specific stated potential exists does not prove that it exists!

2: Time variation. The variety of expanding universe domains could be realised across time, in a universe that has many expansion phases (a Phoenix universe), whether this occurs globally or locally. Much the same comments apply as in the previous case.

3: Quantum Mechanical. It could occur through the existence of the Everett-Wheeler “many worlds” of quantum cosmology, where all possibilities occur through quantum branching. This is one of the few genuine alternatives proposed to the Copenhagen interpretation of quantum mechanics, which leads to the necessity of an observer, and so potentially to the Strong Anthropic interpretation considered above. The many-worlds proposal is controversial: it occurs in a variety of competing formulations, none of which has attained universal acceptance. The proposal does not provide a causal explanation for the particular events that actually occur: if we hold to it, we then have to still explain the properties of the particular history we observe (for example, why does our macroscopic universe have high symmetries when almost all the branchings will not?). And above all it is apparently untestable: there is no way to experimentally prove the existence of all those other branching universes, precisely because the theory gives the same observable predictions as the standard theory.

4: Completely disconnected. They could occur as completely disconnected universes: there really is an ensemble of universes in which all possibilities occur, without any connection with each other.
A problem that arises then is, What determines what is possible? For example, what about the laws of logic themselves? Are they inviolable in considering all possibilities? We cannot answer, for we have no access to this multitude of postulated worlds. We explore this further below.

In all these cases, major problems arise in relating this view to testability and so we have to query the meaningfulness of the proposals as scientific explanations. They all contradict Ockham’s razor: we “solve” one issue at the expense of envisaging an enormously more complex existential reality. Furthermore, they do not solve the ultimate question: Why does this ensemble of universes exist? One might suggest that ultimate explanation of such a reality is even more problematic than in the case of single universe. Nevertheless this approach has an internal logic of its own which some find compelling.

Option 5: Cosmological Natural Selection. If a process of re-expansion after collapse to a black hole were properly established, it opens the way to the concept not merely of evolution of the Universe in the sense that its structure and contents develop in time, but in the sense that the Darwinian selection of expanding universe regions could take place, as proposed by Smolin. The idea is that there could be collapse to black holes followed by re-expansion, but with an alteration of the constants of physics through each transition, so that each time there is an expansion phase, the action of physics is a bit different. The crucial point then is that some values of the constants will lead to production of more black holes, while some will result in less. This allows for evolutionary selection favouring the expanding universe regions that produce more black holes (because of the favourable values of physical constants operative in those regions), for they will have more “daughter” expanding universe regions.
Thus one can envisage natural selection favouring those physical constants that produce the maximum number of black holes. The problem here is twofold. First, the supposed ‘bounce’ mechanism has never been fully explicated. Second, it is not clear—assuming this proposed process can be explicated in detail—that the physics which maximizes black hole production is necessarily also the physics that favours the existence of life. If this argument could be made water-tight, this would become probably the most powerful of the multiverse proposals.

Option 6: Purpose or Design. The symmetries and delicate balances we observe require an extraordinary coherence of conditions and cooperation of causes and effects, suggesting that in some sense they have been purposefully designed. That is, they give evidence of intention, both in the setting of the laws of physics and in the choice of boundary conditions for the Universe. This is the sort of view that underlies Judaeo-Christian theology. Unlike all the others, it introduces an element of meaning, of signifying something. In all the other options, life exists by accident; as a chance by-product of processes blindly at work. The prime disadvantage of this view, from the scientific viewpoint, is its lack of testable scientific consequences (“Because God exists, I predict that the density of matter in the Universe should be x and the fine structure constant should be y”). This is one of the reasons scientists generally try to avoid this approach. There will be some who will reject this possibility out of hand, as meaningless or as unworthy of consideration. However it is certainly logically possible.
The modern version, consistent with all the scientific discussion preceding, would see some kind of purpose underlying the existence and specific nature of the laws of physics and the boundary conditions for the Universe, in such a way that life (and eventually humanity) would then come into existence through the operation of those laws, then leading to the development of specific classes of animals through the process of evolution as evidenced in the historical record. Given an acceptance of evolutionary development, it is precisely in the choice and implementation of particular physical laws and initial conditions, allowing such development, that the profound creative activity takes place; and this is where one might conceive of design taking place. [This is not the same as the view proposed by the ‘Intelligent Design’ movement. It does not propose that God tweaks the outcome of evolutionary processes.] However from the viewpoint of the physical sciences per se, there is no reason to accept this argument. Indeed from this viewpoint there is really no difference between design and chance, for they have not been shown to lead to different physical predictions.
A few comments.
1: Random chance. At first, this strikes one as intellectual laziness, but perhaps it is more a reflection of our own intellectual weakness. More on that in a moment.
2: Necessity. Our intellectual journey of discovery and greater understanding must continue, and it may eventually lead us to this conclusion. But not now.
3: High probability. How can we talk about probability when n = 1?
4: Universality. We can hypothesize the existence of other universes, yes, but if we have no way to observe or interact with them, how can we call this endeavor science? Furthermore, explaining the existence of multiple universes seems even more problematic than explaining the existence of a single universe—ours.
5: Cosmological Natural Selection. We do not know that black holes can create other universes, or that universes containing life are more likely to have laws of physics that allow an abundance of black holes.
6: Purpose or Design. The presupposition of design is not evidence of design. It is possible that scientific evidence of a creator or designer might be found in nature—such as an encoded message evincing purposeful intelligence in DNA or the cosmic microwave background—but to date no such evidence has been found. And even if evidence of a creator were forthcoming, how would we explain the existence of the creator?
I would now like to suggest a seventh option (possibly a variant of Ellis’s Option 1, Random Chance, or Option 2, Necessity).
7: Indeterminate Due to Insufficient Intelligence. It is at least possible that there are aspects of reality and our origins that are beyond what humans are currently capable of understanding. For some understanding of how this might be possible, we need look no further than the primates we are most closely related to, and other mammals. Is a chimpanzee self-aware? Can non-humans experience puzzlement? Are animals aware of their own mortality? Even if the answer to all these questions is “yes”1, there are clearly many things humans can do that no other animal is capable of. Why stop at humans? Isn’t it reasonable to assume that there is much that humans are cognitively incapable of?
Why do we humans develop remarkable technologies and yet fail dismally to eradicate poverty, war, and other violence? Why does the world have so many religions if they are not all imperfect and very human attempts to imbue our lives with meaning?
What is consciousness? Will we ever understand it? Can we extrapolate from our current intellectual capabilities to a complete understanding of our origins and the origins of the universe, or is something more needed that we currently cannot even envision?
“Sometimes attaining the deepest familiarity with a question is our best substitute for actually having the answer.” —Brian Greene, The Elegant Universe
“To ask what happens before the Big Bang is a bit like asking what happens on the surface of the earth one mile north of the North Pole. It’s a meaningless question.” —Stephen Hawking, Interview with Timothy Ferris, Pasadena, 1985
1 For more on the topic of the emotional and cognitive similarities between animals and humans, see “Mama’s Last Hug: Animal Emotions and What They Tell Us about Ourselves” by primatologist Frans de Waal, W. W. Norton & Company (2019). https://www.amazon.com/dp/B07DP6MM92 .
References
Ellis, G. F. R., 2006, Issues in the Philosophy of Cosmology, in Philosophy of Physics (Handbook of the Philosophy of Science), ed. J. Butterfield and J. Earman (Elsevier), 1183-1285. [http://arxiv.org/abs/astro-ph/0602280]
It is time to put an end to right-turn-on-red. It unnecessarily puts pedestrians and bicyclists trying to cross at crosswalks in harm’s way. I’m old enough to remember driving when a red light meant stop—and stay stopped—always. I’ve never liked right-turn-on-red. During my 21 years working at the Iowa Department of Transportation, I learned that doing whatever we can to minimize the potential for driver confusion or uncertainty will always improve safety.
Massachusetts was the last state to adopt right-turn-on-red, on January 1, 1980. New York City still bans right-turn-on-red, unless a sign indicates otherwise. That should be the norm, not the exception.
Short of an outright ban, a good approach would be to put up signs at major intersections with crosswalks, as shown below, but I would add “or bicyclists” as bicyclists often must use pedestrian crosswalks when it is not safe to ride in the street.
The most dangerous situation occurs when a pedestrian (or bicyclist) is waiting for the crosswalk signal to change from “Don’t Walk” to “Walk”, and a driver who will be crossing that crosswalk is stopped at a red light. A driver eager to make a right turn on red often cannot see when the crosswalk signal changes to “Walk”, so they may turn directly in front of the pedestrian just as he or she is (legally) starting to cross the intersection. This is even more dangerous for bicyclists because they enter the intersection faster than a pedestrian does. This situation is illustrated in the diagram below.
Here in Dodgeville, Wisconsin, a particularly dangerous location for pedestrians and bicyclists is the south-to-north crosswalk at the SW corner of the intersection of Bequette and US 18, where drivers frequently make right turns from US 18 EB to Bequette SB. Right turns should be prohibited here with a sign that says No Turn on Red When Pedestrians or Bicyclists Present.
If you’re an astronomy teacher who likes to put a trick question on an open-book quiz or test once in a while to encourage your students to think more deeply, here’s a good one for you:
On average, what planet is closest to the Earth?
The correct answer is Mercury.
Huh? Venus comes closest to the Earth, doesn’t it? Yes, but there is a big difference between minimum distance and average distance. Let’s do some quick calculations to help us understand minimum distance first, and then we’ll discuss the more involved determination of average distance.
Here’s some easily found data on the terrestrial planets:
I’ve intentionally left the last two columns of the table empty. We’ll come back to those in a moment. a is the semi-major axis of each planet’s orbit around the Sun, in astronomical units (AU). It is often taken that this is the planet’s average distance from the Sun, but that is strictly true only for a circular orbit.1 e is the orbital eccentricity, which is a unitless number. The closer the value is to 0.0, the more circular the orbit. The closer the value is to 1.0, the more elliptical the orbit, with 1.0 being a parabola.
The two empty columns are for q the perihelion distance, and Q the aphelion distance. Perihelion occurs when the planet is closest to the Sun. Aphelion occurs when the planet is farthest from the Sun. How do we calculate the perihelion and aphelion distance? It’s easy.
Perihelion: q = a (1 – e)
Aphelion: Q = a (1 + e)
Now, let’s fill in the rest of our table.
Ignoring, for a moment, each planet’s orbital eccentricity, we can calculate the “average” closest approach distance between any two planets by simply taking the difference in their semi-major axes. For Venus, it is 1.000 – 0.723 = 0.277 AU, and for Mars, it is 1.524 – 1.000 = 0.524 AU. We see that Venus comes closest to the Earth.
But, sometimes, Venus and Mars come even closer to the Earth than 0.277 AU and 0.524 AU, respectively. The smallest possible Venus–Earth distance at inferior conjunction occurs when Venus is at aphelion while Earth is at perihelion: 0.983 – 0.728 = 0.255 AU. The smallest possible Earth–Mars distance at opposition occurs when Mars is at perihelion while Earth is at aphelion: 1.382 – 1.017 = 0.365 AU. Even so, Mars never comes as close to the Earth as Venus does at every close approach.
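These distances are easy to reproduce. Here is a Python sketch; the semi-major axes and eccentricities are standard published values (rounded), standing in for the table above:

```python
# Semi-major axis a (AU) and orbital eccentricity e
planets = {
    "Mercury": (0.387, 0.2056),
    "Venus":   (0.723, 0.0068),
    "Earth":   (1.000, 0.0167),
    "Mars":    (1.524, 0.0934),
}

for name, (a, e) in planets.items():
    q = a * (1 - e)  # perihelion distance
    Q = a * (1 + e)  # aphelion distance
    print(f"{name:8s} q = {q:.3f} AU, Q = {Q:.3f} AU")

# Smallest possible approach distances to Earth:
venus_min = planets["Earth"][0] * (1 - planets["Earth"][1]) \
          - planets["Venus"][0] * (1 + planets["Venus"][1])
mars_min  = planets["Mars"][0] * (1 - planets["Mars"][1]) \
          - planets["Earth"][0] * (1 + planets["Earth"][1])
print(f"Venus closest possible approach: {venus_min:.3f} AU")
print(f"Mars  closest possible approach: {mars_min:.3f} AU")
```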
The above assumes that all the terrestrial planets orbit in the same plane, which they do not. Mercury has an orbital inclination relative to the ecliptic of 7.004˚, Venus 3.395˚, Earth 0.000˚ (by definition), and Mars 1.848˚. Calculating the distances in 3D will change the values a little, but not by much.
Now let’s switch gears and find the average distance over time between Earth and the other terrestrial planets—a very different question. We want to average over a time period long enough that each planet spends as much time on the far side of the Sun from us as it does on our side. The time interval between successive conjunctions (in the case of Mercury and Venus) or oppositions (Mars) is called the synodic period and is calculated as follows:
P1 = 87.9691d = orbital period of Mercury
P2 = 224.701d = orbital period of Venus
P3 = 365.256d = orbital period of Earth
P4 = 686.971d = orbital period of Mars
S1 = 1/(1/P1 – 1/P3) = synodic period of Mercury = 115.877d
S2 = 1/(1/P2 – 1/P3) = synodic period of Venus = 583.924d
S4 = 1/(1/P3 – 1/P4) = synodic period of Mars = 779.946d
I wrote a quick little SAS program to numerically determine that an interval of 9,387 days (25.7 years) would be a good choice, because
9387 / 115.877 = 81.0083, for Mercury
9387 / 583.924 = 16.0757, for Venus
9387 / 779.946 = 12.0354, for Mars
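The search the SAS program performed can be sketched in Python (a reimplementation, not the original code): scan candidate intervals and score each by how far its three quotients fall from whole numbers, keeping the interval with the smallest worst-case error.

```python
# Find an interval N (days) that is nearly a whole-number multiple of all
# three synodic periods, so each planet's geometry is sampled evenly.
synodic = {"Mercury": 115.877, "Venus": 583.924, "Mars": 779.946}

def worst_fraction(n):
    """Largest distance of n/S from the nearest integer, over the three planets."""
    return max(abs(n / s - round(n / s)) for s in synodic.values())

best = min(range(3000, 10000), key=worst_fraction)
print(best, worst_fraction(best))
print(9387, worst_fraction(9387))   # the interval used in the text
```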
The U.S. Naval Observatory provides a free computer program called the Multiyear Interactive Computer Almanac (MICA), so I was able to quickly generate a file for each of Mercury, Venus, and Mars giving the Earth-to-planet distance for 9,387 days, beginning 0h UT 1 May 2019 through 0h UT 10 Jan 2045. Here are the results:
As you can see, averaged over time, Mercury is the nearest planet to the Earth!
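This result is easy to reproduce with a toy model. Assuming circular, coplanar orbits and a uniformly distributed relative phase angle (a reasonable stand-in for averaging over many synodic periods), the time-averaged Earth-planet distance is just the mean of the law-of-cosines distance over all phase angles:

```python
import math

# Mean orbital radii in AU (circular-orbit approximation)
radii = {"Mercury": 0.387, "Venus": 0.723, "Mars": 1.524}
r_earth = 1.000

def mean_distance(r, n=100_000):
    """Average Earth-planet distance over a uniformly distributed phase angle."""
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        total += math.sqrt(r_earth**2 + r**2 - 2.0 * r_earth * r * math.cos(theta))
    return total / n

averages = {planet: mean_distance(r) for planet, r in radii.items()}
for planet, d in averages.items():
    print(f"{planet}: {d:.3f} AU")
```

Mercury comes out the closest on average, Venus second, and Mars the farthest of the three.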
For a more mathematical treatment, see the article published 12 Mar 2019 in Physics Today.
American composer George Gershwin left us much too soon at the young age of 38. He died of a brain tumor in 1937, and eight years later, in 1945, a somewhat fictionalized movie about his life, Rhapsody in Blue, was released.
Robert Alda (father of Alan Alda) turns in a great performance as George Gershwin, as does Joan Leslie as his fictionalized love interest Julie Adams.
Strong performances were also turned in by Morris Carnovsky as George Gershwin’s father, Albert Bassermann as his fictionalized teacher Professor Franck (perhaps patterned in part after both Charles Hambitzer and Rubin Goldmark), and Herbert Rudley as Ira Gershwin.
And then there’s the wonderful music of George Gershwin throughout the film, including much of An American in Paris, a personal favorite of mine. I’ll bet you’ll hear familiar songs that you didn’t even know were written by Gershwin!
I loved this movie. Unfortunately, it is not available through either Netflix or Amazon streaming, but you can purchase a high-quality DVD for $12.99 from Warner Brothers.
Of the 793,918 asteroids and trans-Neptunian objects (TNOs) currently catalogued, only 98 are in retrograde orbits around the Sun. That’s just over 0.01%.
By “retrograde” we mean that the object orbits the Sun in the opposite sense of all the major planets: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune. From a vantage point above the north pole of the Earth, all of the major planets orbit in a counterclockwise direction around the Sun.
But a retrograde object would be seen to orbit in a clockwise direction around the Sun, as shown in the animation below of the retrograde Jupiter co-orbital asteroid 514107 (2015 BZ509), along with Jupiter and its two “clouds” of Trojan asteroids.
Of these 98 retrograde objects, only 14 have orbits well-enough determined to have received a minor planet number, and only one has yet received an official name (20461 Dioretsa).
Semimajor Axis (a) between…
Number of Retrograde Minor Planets
Mars – Jupiter
Jupiter – Saturn*
Saturn – Uranus*
Uranus – Neptune*
*asteroids between the orbits of Jupiter and Neptune are often referred to as centaurs
At least some of these objects may be captured interstellar objects.
Let’s now take a look at some of these 98 retrograde objects in greater detail.
20461 Dioretsa
The first retrograde asteroid to be discovered, in 1999, was 20461 Dioretsa. The only named retrograde asteroid to date, Dioretsa is an anadrome of the word “asteroid”. It is a centaur in a highly eccentric orbit (e = 0.90), ranging from between the orbits of Mars and Jupiter out to beyond the orbit of Neptune. Objects in cometlike orbits that show no evidence of cometary activity are often referred to as damocloids; Dioretsa is both a centaur and a damocloid. Its orbital inclination (relative to the ecliptic) is 160°, a 20° tilt from an anti-ecliptic orbit. It takes nearly 117 years to orbit the Sun once. It is a dark object, with a reflectivity of only about 3%, and is estimated to be about 9 miles across.
2010 EQ169
This retrograde asteroid holds the distinction (at least temporarily) of being the most highly inclined main-belt asteroid (91.6°), relative to the ecliptic plane. It is also the retrograde asteroid with the smallest semimajor axis (2.05 AU) and the lowest orbital eccentricity (0.10). Unfortunately, it was discovered after the fact in archival data from the Wide-field Infrared Survey Explorer (WISE) space telescope, and it has not been seen since. We have only a three-day arc of 17 astrometric observations of 2010 EQ169, made March 7–9, 2010, from which to determine its orbit. Nominally, 2010 EQ169 orbits the Sun at nearly a right angle to the ecliptic plane once every 2.9 years, between the orbits of Mars and Jupiter. However, our knowledge of its orbit is extremely uncertain, as shown below, and the object is lost. Our only hope of recovery is to back-calculate the positions of asteroids discovered in the future to these dates and see whether any of them matches the WISE positions.
[Table: uncertainties in 2010 EQ169’s semimajor axis (a), orbital eccentricity (e), and orbital period (P)]
2013 BL76
This retrograde TNO has the largest known semimajor axis of any of the retrograde non-cometary objects: 966.4274 ± 2.2149 AU. In a highly eccentric cometlike orbit (e = 0.99135), its perihelion is in the realm of the centaurs between the orbits of Jupiter and Saturn (8.35 AU), and its aphelion is way out around 1,924 AU. It takes about 30,000 years to orbit the Sun. Its orbit is inclined 98.6° with respect to the ecliptic.
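These figures are mutually consistent, which is easy to check: perihelion and aphelion follow from q = a(1 − e) and Q = a(1 + e), and Kepler’s third law gives the period in years as a^(3/2) with a in AU:

```python
a = 966.4274   # semimajor axis of 2013 BL76, AU
e = 0.99135    # orbital eccentricity

q = a * (1 - e)   # perihelion distance, AU
Q = a * (1 + e)   # aphelion distance, AU
P = a ** 1.5      # orbital period in years (Kepler's third law)

print(f"perihelion q = {q:.2f} AU")   # ~8.36 AU
print(f"aphelion   Q = {Q:.0f} AU")   # ~1,924 AU
print(f"period     P = {P:,.0f} yr")  # ~30,000 yr
```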
2013 LA2
Of all the retrograde objects, this centaur has the orbit closest to the ecliptic plane (i = 175.2°), tilted just 4.8° with respect to the ecliptic. It orbits the Sun about once every 21 years, between the orbits of Mars and Uranus.
2017 UX51
The distinction held by this retrograde TNO is the highest orbital eccentricity of any non-cometary solar system object (e = 0.9967). Or is it an old, inactive comet? 2017 UX51 orbits the Sun every 7,419 ± 2,883 years, coming as close in as between the orbits of Earth and Mars (perihelion q = 1.24 AU), which also classifies it as an Amor object, and traveling out to far beyond the orbit of Neptune (aphelion Q = 759.54 ± 196.77 AU). Its orbital inclination is 108.2°.
343158 (2009 HC82)
An Apollo asteroid, 343158 is the only known retrograde near-Earth asteroid (NEA), with an orbital inclination of 154.4°. It orbits the Sun every 4.0 years, between 0.49 AU (almost as close in as the aphelion of Mercury) and 4.57 AU (between the orbits of Mars and Jupiter).
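Going the other direction, the semimajor axis and eccentricity can be recovered from the perihelion and aphelion distances, via a = (q + Q)/2 and e = (Q − q)/(Q + q), and the 4.0-year period then follows from Kepler’s third law:

```python
q, Q = 0.49, 4.57   # perihelion and aphelion of 343158 (2009 HC82), AU

a = (q + Q) / 2        # semimajor axis, AU
e = (Q - q) / (Q + q)  # orbital eccentricity
P = a ** 1.5           # orbital period in years (Kepler's third law)

print(f"a = {a:.2f} AU, e = {e:.2f}, P = {P:.1f} yr")   # a = 2.53 AU, P = 4.0 yr
```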
References
Conover, E., 2017, Science News, 191(9), 5.