This is a test blog to ensure the capabilities are functioning correctly. My name is Erich (I’m named for my dad, Richard: as I’m the oldest child I’m the “heir of Rick”). Here’s a photo of me.
Astrobiology has long relied on the concept of a “habitable zone”: the range of distances from a star at which a planet could hold liquid water on its surface, and therefore potentially life. This concept is genuinely valuable, especially insofar as it gives us a quick way to classify new exoplanets and flag those that may host life. However, we don’t even have to leave our own solar system to see the limits of this concept, and how it can foreclose the search for life in many other environments.
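The logic behind the habitable zone can be captured in a single formula: a planet’s equilibrium temperature depends on its star’s luminosity and its distance from that star. Here is a minimal sketch of that calculation; the albedo of 0.3 and the distances are standard round numbers chosen for illustration, not values from any particular survey.

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26   # solar luminosity, W
AU = 1.496e11      # astronomical unit, m

def equilibrium_temp(distance_m, luminosity=L_SUN, albedo=0.3):
    """Blackbody equilibrium temperature of a planet, ignoring any greenhouse effect."""
    return (luminosity * (1 - albedo) / (16 * math.pi * SIGMA * distance_m**2)) ** 0.25

# Earth sits comfortably inside the Sun's habitable zone...
print(round(equilibrium_temp(1.0 * AU)))   # ~255 K (greenhouse warming lifts the real surface to ~288 K)
# ...while at Jupiter's distance, starlight alone is far too weak for liquid surface water.
print(round(equilibrium_temp(5.2 * AU)))   # ~112 K
```

That last number is exactly why the Galilean moons, discussed next, look hopeless on a habitable-zone map: sunlight alone cannot keep them warm.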
The first key limitation of the habitable zone is that it assumes heat comes solely from the central star. Looking at the Galilean moons, we can immediately see that this assumption is flawed. Europa and Ganymede both lie far outside the habitable zone, and both have liquid water beneath their surfaces. How? Tidal heating. As the moons orbit Jupiter and repeatedly tug on one another, the constantly changing tidal forces flex and heat their interiors enough to make liquid water possible. This is critical, since these moons are among the best candidates for life within our solar system: the conditions in their subsurface oceans are reminiscent of what we think Earth looked like around the time life began. The existence of liquid water on these moons shows that a narrow focus on habitable zones may preclude an examination of all the bodies on which life might exist.
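For a rough sense of the numbers, the standard expression for tidal heating in a synchronously rotating moon can be evaluated with approximate values for Europa. This is only an order-of-magnitude sketch: the ratio k2/Q (how strongly the moon responds to tides and how efficiently it dissipates them) is poorly constrained, and the value below is an assumption.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_JUP = 1.898e27   # Jupiter's mass, kg

# Approximate values for Europa; k2/Q is an assumed figure, not a measurement.
a = 6.709e8          # orbital semi-major axis, m
e = 0.009            # orbital eccentricity, kept nonzero by resonance with Io and Ganymede
R = 1.561e6          # Europa's radius, m
k2_over_Q = 0.3 / 100

n = math.sqrt(G * M_JUP / a**3)   # orbital mean motion, rad/s
power = 10.5 * k2_over_Q * G * M_JUP**2 * R**5 * n * e**2 / a**6
print(f"{power:.1e} W")           # ~1e12 W, i.e. roughly a terawatt of internal heating
```

A terawatt of continuous internal heating is small next to sunlight on Earth, but it is delivered under an insulating ice shell, which is what lets a subsurface ocean persist.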
The second key limitation of the habitable zone is the assumption that liquid water is a necessary prerequisite for life. While this certainly matches our understanding of life on Earth, it is theoretically possible that life could use methane or some other compound as its key ingredient. This matters because methane stays liquid at temperatures far colder than anything inside the “habitable zone”. One example of a body where such life could exist is Saturn’s moon Titan: although it is extremely cold, its surface holds a significant amount of liquid methane. Whether life could exist in an environment like this is an open question, but bodies like Titan pose a real challenge to the habitable-zone concept.
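As a quick sanity check on why Titan works as a counterexample, we can compare its surface temperature to the range over which methane stays liquid. The figures below are approximate values near 1 atm; Titan’s thicker atmosphere shifts them somewhat, so this is purely illustrative.

```python
# Approximate liquid range of methane near 1 atm, and Titan's mean surface temperature.
METHANE_MELT_K = 90.7
METHANE_BOIL_K = 111.7
TITAN_SURFACE_K = 94.0

if METHANE_MELT_K < TITAN_SURFACE_K < METHANE_BOIL_K:
    print("Titan's surface sits inside methane's liquid range")
print(f"Water, by contrast, would be frozen solid: Titan is {273.15 - TITAN_SURFACE_K:.0f} K below water's melting point")
```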
While life likely won’t be found on any of these bodies (nor on Enceladus, a moon of Saturn with liquid water and organic compounds), they provide important theoretical challenges to our ideas about where life can exist. As we seek life elsewhere, we should treat the habitable zone as important, but remember that planetary environments are remarkably diverse, and a wider variety of them may support life than we initially expect.
Space junk is a potential threat to human space exploration. In the frictionless vacuum of space, even a small particulate left behind by a past mission can become deadly: traveling at extremely high relative speeds, it can fracture seals and compromise the structural integrity of any spacecraft it hits. So far, we’ve been relatively lucky: the sheer size of space means we mostly don’t have to worry about it yet. That said, certain highly contested orbits, such as geostationary orbit, may be at risk from debris as the development of space continues. As such, it is imperative that we find ways to clear debris and keep these orbits accessible. A few technological solutions may be useful. First, lasers could potentially steer debris out of the path of active satellites or vaporize small pieces until they are harmless. Nets could capture debris and harmlessly decelerate it out of orbit. More important than junk removal, however, is junk tracking. Knowing where debris is and isn’t is invaluable, since avoidance is far cheaper than any of the high-tech removal strategies. NASA and the DOD share responsibility for tracking orbital objects larger than a softball and are working to ensure that new space flights don’t aggravate the problem further. While space debris may not be a major issue yet, it’s critical we keep our eye on the problem.
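To make “even a small particulate can become deadly” concrete, a quick kinetic-energy estimate helps. The 1-gram mass and ~10 km/s closing speed below are illustrative assumptions, though that speed is a common ballpark for collisions in low Earth orbit.

```python
# Kinetic energy of a tiny piece of debris at a typical orbital collision speed.
mass_kg = 0.001      # a 1-gram paint fleck or bolt fragment (illustrative)
speed_m_s = 10_000   # ~10 km/s relative velocity

kinetic_energy_j = 0.5 * mass_kg * speed_m_s**2
print(f"{kinetic_energy_j:,.0f} J")   # 50,000 J, about the kinetic energy of a one-tonne car at 36 km/h
```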
How do we know how old the Earth is? The age of the Earth was a contested figure for a long time, with early scientists struggling to date it with any precision. The first and most well-known way to set a lower bound on its age is radiometric dating. Simply put: find as many rocks as possible, look at the radioactive isotopes within them, and compare the amount of the parent isotope to the amount of its decay products. This is useful for establishing a lower bound, since the rocks must have formed after the Earth did; the oldest rock we can find and date sets that lower bound. There are a few problems with this. First, radiometric dating can be unreliable, especially when decay products such as gases escape the rock. Second, it cannot easily establish an upper bound, since the rocks formed after the Earth did. Third, some rocks may have been delivered to Earth by asteroids or other cosmic events, which undercuts some of the assumptions of the process. Beyond dating Earth alone, scientists can turn outward and examine both nearby and faraway systems. By dating nearby bodies such as the Moon and Mars, it is possible to show that they formed around 4.5 billion years ago, in line with estimates for the Earth. Similarly, by observing planetary systems at different stages of their development, scientists can determine how long formation takes for similar planets. In this way, astronomy lends significant insight to geology and the study of our own planet’s formation.
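The “compare parent to daughter” step can be made concrete with the standard decay equation: if a rock started with no daughter product, its age is t = (t_half / ln 2) · ln(1 + D/P), where D/P is the measured daughter-to-parent ratio. The ratio used below is hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def radiometric_age(daughter_to_parent_ratio, half_life_years):
    """Age of a sample, assuming it started with parent isotope only (no initial daughter)."""
    return half_life_years / math.log(2) * math.log(1 + daughter_to_parent_ratio)

# Hypothetical measurement: roughly equal amounts of uranium-238 and its end product lead-206.
U238_HALF_LIFE = 4.468e9   # years
print(f"{radiometric_age(1.0, U238_HALF_LIFE):.2e} years")   # ~4.5 billion years, i.e. one half-life
```

The caveats in the paragraph above map directly onto this formula: if daughter atoms (especially gases) leak out, D/P is underestimated and so is the age, and nothing in the equation tells you whether the rock formed with the Earth or arrived later.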
While we discussed in class the importance of blackbody (continuous) spectra, there is an important historical footnote to understanding where the famous blackbody curve comes from. Physics of the late 1800s and early 1900s predicted the wavelength–intensity relationship to be I ∝ 1/λ^4, using a derivation based on classical statistical mechanics. This relationship closely matches the observed blackbody radiation at long wavelengths, but it fails at shorter wavelengths, diverging badly in the ultraviolet. This divergence, the prediction that intensity should go to infinity as wavelength goes to zero, came to be known as the ultraviolet catastrophe. The solution was formulated in the early 1900s by Max Planck. He started from the assumption that the energy of the emitting oscillators could only change in discrete steps, with a minimum possible increment. With this assumption, he was able to match the observed data far more accurately, producing what we now call Planck’s law.
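To see the catastrophe numerically, we can compare the classical 1/λ^4 prediction (the Rayleigh–Jeans law) with Planck’s law at a few wavelengths. This is a standard textbook comparison; the temperature of 5800 K is chosen simply because it is roughly the Sun’s surface temperature.

```python
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
K_B = 1.381e-23  # Boltzmann constant, J/K

def rayleigh_jeans(wavelength_m, temp_k):
    """Classical prediction: spectral radiance ~ 1/wavelength^4, divergent as wavelength -> 0."""
    return 2 * C * K_B * temp_k / wavelength_m**4

def planck(wavelength_m, temp_k):
    """Planck's law, which assumes energy is exchanged only in discrete quanta."""
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(H * C / (wavelength_m * K_B * temp_k))

T = 5800  # roughly the Sun's surface temperature, K
for wl_nm in (10_000, 1_000, 100):   # far infrared, near infrared, ultraviolet
    wl = wl_nm * 1e-9
    print(wl_nm, f"{rayleigh_jeans(wl, T) / planck(wl, T):.3g}")
# At 10,000 nm the two agree to within ~15%; at 100 nm the classical
# formula overshoots by roughly a factor of a billion.
```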
This assumption of a minimum energy change became the foundation of quantum mechanics. Planck’s insight that energy is quantized was later given physical justification by Einstein in his analysis of the photoelectric effect. What began as an empirical guess that a quantized model could fit the data turned out to be a precursor to the discovery of photons. The shape of the blackbody curve is fundamentally a consequence of quantum mechanics, and its historical importance is hard to overstate.
It is a nearly universal maxim of science fiction that faster-than-light (FTL) travel must exist. Let’s look at why it is so necessary for the sake of a good story by comparing the size and scope of our real universe, and a few fictional ones, to how long traversal would take. Rather than just using years, let’s count how many generations it would take for people to travel those distances. Taking one generation to be ~30 years, a few lengths of time give a sense of scale: human civilization has been around for 6,000 years, or about 200 generations; Jesus Christ was born around 70 generations ago; and modern humans have existed for around 7,000 generations.
Meanwhile, if we assume that we eventually invent travel at (very, very nearly) light speed, it would take about 3,000 generations to traverse the Milky Way as measured by those who stayed home. Time dilation would make the journey dramatically shorter for the travelers themselves, but it would still be best not to have an urgent engagement on the other side of the galaxy: by the time you arrived, a span equal to roughly half of modern humanity’s existence would have passed back home. Going to another galaxy is even worse; the nearest major galaxy, Andromeda, is about 83,000 generations away. By the time a spaceship has traveled that distance, the galaxy far, far away will bear little resemblance to how it was long, long ago.
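The conversion behind these figures is simple: at essentially light speed, the trip’s duration in the rest frame (in years) equals the distance in light-years, and dividing by ~30 years gives generations. A quick sketch, using round-number distances rather than precise measurements, reproduces the counts above and those for the fictional settings below.

```python
YEARS_PER_GENERATION = 30

def generations(distance_light_years):
    """Generations elapsed (in the galaxy's rest frame) for a trip at ~light speed."""
    return distance_light_years / YEARS_PER_GENERATION

# Round-number distances used for scale, not precise measurements.
trips = {
    "Across the Milky Way": 90_000,
    "To Andromeda": 2_500_000,
    "Across Battlestar Galactica's setting": 13_000,
    "Across the Star Wars galaxy": 120_000,
}
for name, light_years in trips.items():
    print(f"{name}: ~{generations(light_years):,.0f} generations")
```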
Fictional universes face the same issue. Star Trek takes place in our own galaxy, so it faces the same scale problems. Battlestar Galactica takes place in a region of space around 13,000 light-years across, or about 400 generations. Star Wars, while localized to one galaxy, is purported to span 120,000 light-years (~4,000 generations), slightly larger than our own galaxy. Dune reaches into several galaxies with no clear size limit, but even conservative estimates put crossings in the tens of thousands of generations. Simply put, for the sake of a coherent story, both heroes and villains must be able to travel faster than light. If they were constrained to the speed of light, the fastest anything can travel, a plot spanning these settings would simply be impossible.