DIVINE GENESIS
Exploring Creation through Astronomy and Biology
Introduction
Scientists who advocate for the theory of evolution often regard creationism as lacking empirical support and scientific rigor. They contend that creationism should not be included in science curricula, as it fails to offer a scientifically substantiated explanation for the diversity and complexity of life on Earth.
On the other hand, evolutionary theory contains gaps and unanswered questions, particularly regarding the origin of life and the complexity of biological systems. Natural selection and mutations are insufficient to explain the intricate structures and functions observed in living organisms. Furthermore, evolutionary theory applies only to existing living organisms and does not address the origin of life. Additionally, it relies heavily on assumptions and speculative reconstructions, thereby challenging its validity as a comprehensive explanation for the diversity of life.
This book is written to explore the debate between creation and evolution by discussing the creation of the universe, the uniqueness of the Earth, and the origin of life.
In the first part, we will introduce the hierarchical structure of the universe and discuss the creation of the universe as revealed by astronomical observations. Then, we will examine whether the creation of the universe described in the Bible aligns with the astronomical findings, whether the Earth's age is 6,000 years, and take a closer look at the fine-tuned nature of the universe.
The second part presents ten amazing facts about the Earth, emphasizing its unique suitability for supporting life and pointing to evidence of purposeful design.
In the third part, the origin of life is explored, challenging conventional evolutionary theories and highlighting the complexity of biological systems as evidence for divine creation. The adequacy of the term "Darwin's theory of evolution" is examined, followed by an investigation into whether humans evolved from apes. Additionally, the concept of intelligent design is introduced, and creationism is explored through discussions on particle physics, the existence of extraterrestrial life, the instincts of animals, and the mathematics found in nature.
The book concludes with a heartfelt invitation to faith, encouraging readers to reflect on their spiritual journey and consider the transformative power of belief. It introduces the gospel and provides practical guidance on how to embrace faith, including steps to understand and receive eternal life, offering hope and assurance for those seeking a deeper connection with God.
I hope this book provides renewed knowledge of creation, deepening your understanding of the intricate design and purpose woven into the universe, and offers an opportunity to meditate on the boundless grace, wisdom, and power of God, the divine Creator, who sustains all things and invites us to marvel at His handiwork.
1. The Creation of the Universe
As a child, you may recall nights spent camping in the countryside or high in the mountains, gazing at countless stars shimmering in the vast expanse above, or marveling at shooting stars streaking gracefully across the dark sky. Such experiences often fill us with awe and wonder, a profound appreciation for the immense beauty and scale of the universe. In those moments, you might have felt a deep connection to the cosmos, accompanied by a sense of humility about your place within it. Questions may have stirred in your mind: How many stars fill the sky? Could there be life beyond our world? How did the universe begin, and how might it end? Who created it all? The breathtaking beauty and enigmatic nature of the night sky spark curiosity, inviting reflection on the origins of the universe and our purpose within it. These moments of fascination leave an enduring imprint, inspiring us to seek answers to life’s greatest mysteries.
In this chapter, we will explore the origin of the universe from both astronomical and Biblical perspectives. We will provide scientific support for the creation record in Genesis by comparing these two viewpoints. Additionally, we will examine which was created first, the Earth or the Sun, whether the Earth is 6,000 years old, and the concept of a fine-tuned universe.
a. The Hierarchical Structure of the Universe
To discuss the origin of the universe, let's first explore its hierarchical structure. We will start with our solar system and move on to our Galaxy (the Milky Way), external galaxies, clusters of galaxies, superclusters, and supercluster complexes.
i. The Solar System
The solar system consists of a star called the Sun, eight planets orbiting it, the asteroid belt between Mars and Jupiter, the Kuiper Belt, and the outermost member, the Oort Cloud. A star is defined as a self-luminous celestial body powered by nuclear fusion, while a planet is a celestial body that reflects light from a star.
Earth is the third planet from the Sun. The distance from Earth to the Moon is 384,000 km, taking 16 days by airplane at 1,000 km/h. The distance from Earth to the Sun is about 150 million kilometers, or one astronomical unit (AU), which would take 17 years by airplane. The distance to Neptune is 30 AU, the Kuiper Belt spans 30 to 50 AU, and the Oort Cloud extends from 2,000 to 200,000 AU. At the speed of light, it would take 8.3 minutes to travel from Earth to the Sun, about 4 hours to Neptune, and 9.5 months (0.79 light-years, about 50,000 AU) to reach well into the Oort Cloud. By airplane, that same journey would take about 850,000 years.
Fig. 1.1. Solar system including the Kuiper Belt and Oort Cloud
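For readers who want to check these figures, here is a minimal Python sketch of the travel-time arithmetic, using the rounded values quoted above (1,000 km/h for the airplane, 300,000 km/s for light):

```python
# Minimal sketch: travel times at light speed vs. by airplane (1,000 km/h),
# using the rounded distances quoted in the text.

C_KM_PER_S = 300_000          # speed of light, km/s (rounded)
AU_KM = 150e6                 # one astronomical unit, km (rounded)
PLANE_KM_PER_H = 1_000

def light_minutes(km):
    """Light travel time in minutes."""
    return km / C_KM_PER_S / 60

def plane_days(km):
    """Airplane travel time in days at 1,000 km/h."""
    return km / PLANE_KM_PER_H / 24

print(f"Moon:    {plane_days(384_000):.0f} days by plane")                # ~16 days
print(f"Sun:     {light_minutes(AU_KM):.1f} min at light speed, "
      f"{plane_days(AU_KM) / 365.25:.0f} years by plane")                 # ~8.3 min, ~17 yr
print(f"Neptune: {light_minutes(30 * AU_KM) / 60:.1f} h at light speed")  # ~4.2 h
```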
Comets can be classified as short-period and long-period comets. The Kuiper Belt is the source of short-period comets, and the Oort Cloud is the source of long-period comets. Because they fall inward from such distant origins, comets have highly elliptical orbits with large eccentricities. The Sun is 109 times the diameter of Earth, 333,000 times its mass, and has a rotation period of about 25 days.
ii. The Stellar System
Upon leaving the Oort Cloud, you enter the realm of stars. The closest star to Earth is Proxima Centauri, which is 14% the size of the Sun, 12% of its mass, and about 4.2 light-years away. Traveling there by plane would take approximately 4.6 million years.
If you closely observe the twinkling stars in the night sky, you'll notice that they have various colors. A star's color depends on its surface temperature: cooler stars appear reddish, while hotter stars are whitish. For example, Betelgeuse (α Ori) is red, the Sun is yellow, and Sirius (α CMa), the brightest star in the night sky, is bluish white.
Fig. 1.2. Stars exhibit a variety of colors
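The link between color and temperature can be made quantitative with Wien's displacement law. The sketch below uses approximate literature temperatures for the three stars mentioned (assumed values for illustration, not taken from the text):

```python
# Sketch: Wien's displacement law relates surface temperature to the
# wavelength of peak emission. Temperatures are approximate.

WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_nm(temp_k):
    return WIEN_B / temp_k * 1e9

for name, t in [("Betelgeuse", 3_500), ("Sun", 5_800), ("Sirius", 9_900)]:
    print(f"{name}: {t} K -> peak near {peak_wavelength_nm(t):.0f} nm")
# Cooler stars peak toward the red/infrared; hotter stars toward the blue/UV.
```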
A star’s mass determines its nuclear fusion rate, which in turn governs its luminosity and lifespan. More massive stars consume their fuel faster than less massive stars. Stars end their lives as white dwarfs, neutron stars, or black holes. Stars with core masses less than 1.4 solar masses become white dwarfs; those with core masses between 1.4 and 3 solar masses explode as supernovae, leaving neutron stars; and those with core masses greater than 3 solar masses become black holes after passing through a neutron star stage. The remnants of supernova explosions can be recycled to form new stars.
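The mass-lifetime connection can be illustrated with the rough main-sequence scaling $L \propto M^{3.5}$, which gives a lifetime $t \propto M/L \propto M^{-2.5}$. A sketch, normalized to an assumed 10-billion-year solar lifetime:

```python
# Sketch: why massive stars die young, using the rough scalings
# L ~ M^3.5 and lifetime t ~ M / L ~ M^-2.5 (M in solar masses).

SUN_LIFETIME_GYR = 10  # approximate main-sequence lifetime of the Sun

def lifetime_gyr(mass_solar):
    return SUN_LIFETIME_GYR * mass_solar ** -2.5

for m in [0.5, 1, 10, 25]:
    print(f"{m} M_sun: ~{lifetime_gyr(m):.3g} Gyr on the main sequence")
# 10 M_sun -> ~0.03 Gyr: massive stars burn their fuel far faster.
```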
Typically, fewer than a hundred stars are visible to the naked eye in a city, and about a thousand in the countryside under ideal conditions. Most of these stars lie within a few hundred light-years of Earth.
iii. Our Galaxy (Milky Way)
The Milky Way is a barred spiral galaxy containing between 200 and 400 billion stars, along with vast amounts of gas, dust, and dark matter. Its diameter spans approximately 100,000 light-years, while its thickness is about 1,000 light-years, making it a relatively flat and disk-like structure with a central bulge.
The Sun is situated roughly 26,000 light-years from the galactic center, orbiting it once every 220 million years, a period known as a galactic year. Our solar system resides near the Orion Spur, a minor arm located between the Sagittarius and Perseus spiral arms. Positioned about 60 light-years above the galactic plane, this location provides an advantageous perspective for observing the universe in multiple directions with minimal obstruction from the dense dust and gas within the galactic disk.
Fig. 1.3. Our Galaxy (Milky Way)
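As a consistency check on the galactic year quoted above, a circular orbit of radius 26,000 light-years completed every 220 million years implies an orbital speed of roughly 220 km/s:

```python
# Sketch: the Sun's orbital speed from v = 2*pi*R / T,
# using the radius and period quoted in the text.

import math

LY_KM = 9.46e12          # one light-year in km
R_LY = 26_000            # Sun's distance from the galactic center, ly
T_YR = 220e6             # one galactic year, in Earth years
YEAR_S = 3.156e7         # seconds per year

v_km_s = 2 * math.pi * R_LY * LY_KM / (T_YR * YEAR_S)
print(f"Orbital speed of the Sun: ~{v_km_s:.0f} km/s")  # ~220-230 km/s
```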
iv. Galaxies, Cluster of Galaxies, and Superclusters
The Andromeda Galaxy (M31) is the closest galaxy to the Milky Way, located about 2.5 million light-years from Earth. It is visible to the naked eye from the Northern Hemisphere (visual magnitude = 3.4) and has a shape similar to that of the Milky Way. The Andromeda Galaxy is approaching the Milky Way at a speed of about 110 km/s and is expected to collide with it in about 4 billion years.
Galaxies can be broadly categorized into three main morphological classes: spiral, elliptical, and irregular. When two spiral galaxies collide, their gravitational interactions can lead to a dramatic transformation, often resulting in the formation of an elliptical galaxy. This process typically unfolds through stages involving interacting galaxies, followed by a luminous infrared galaxy (LIRG) or ultraluminous infrared galaxy (ULIRG) phase.
Fig. 1.4. Spiral galaxy, elliptical galaxy, and irregular galaxy
If fewer than 50 galaxies are gravitationally bound, they are called a "group of galaxies," and if hundreds or thousands are bound, they are called "clusters of galaxies." More than 40 nearby galaxies, including the Milky Way and Andromeda, belong to the Local Group. The Local Group and the Virgo Cluster are part of the Virgo Supercluster, which in turn is part of the Laniakea Supercluster.
A supercluster complex, also known as a galactic filament or supercluster chain, is an immense large-scale structure in the universe, composed of numerous galaxy superclusters that are interconnected by vast networks of galaxies, gas, and dark matter. These interconnected regions form a web-like pattern and represent the largest structures known to exist in the cosmos. They span incredible distances, ranging from hundreds of millions to billions of light-years across, dwarfing smaller cosmic structures. Among these, the Hercules–Corona Borealis Great Wall stands out as the largest known supercluster complex, an awe-inspiring testament to the scale of the universe. In the observable universe, there are an estimated 200 billion galaxies, spread across a staggering distance of approximately 93 billion light-years, each contributing to the intricate tapestry of cosmic structures.
Fig. 1.5. Nearby superclusters (yellow color: Laniakea supercluster)
b. Creation of the Universe
How did the universe begin? Has it always existed, or was it created by God? To explore this topic, we will examine the origin of the universe as observed in astronomy and as described in the Book of Genesis in the Bible.
i. Creation of the Universe in Astronomy
The most widely supported theory about the origin of the universe is the Big Bang Theory, which posits that the universe began approximately 13.8 billion years ago as an incredibly hot and dense point that rapidly expanded. This naturally raises the intriguing question: "What existed before the Big Bang?" One leading hypothesis proposes that, prior to the Big Bang, the universe existed as quantum fluctuations in a vacuum, a dynamic and probabilistic foundation from which our universe emerged.
Before Paul Dirac, the vacuum was thought of as empty space containing nothing at all. In 1928, Dirac combined quantum mechanics and special relativity to describe the behavior of an electron at relativistic speeds. Interestingly, his equation admitted two solutions: one for an electron with positive energy, and one for an electron with negative energy. Dirac proposed that the vacuum is not empty space but is filled with an infinite number of negative-energy electrons; a "hole" in this sea behaves like a positively charged electron, the positron. Because of this, the vacuum is sometimes called the Dirac Sea.
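For reference, a sketch of the result in modern notation (here $\gamma^\mu$ are the Dirac matrices and $\psi$ the electron wave function): the Dirac equation and the two energy branches it admits are

$$\left(i\hbar\gamma^\mu \partial_\mu - m_e c\right)\psi = 0, \qquad E = \pm\sqrt{p^2 c^2 + m_e^2 c^4}.$$

The negative branch is what led Dirac to his picture of the filled sea.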
Fig. 1.6. 3-D model of quantum fluctuations in a vacuum
Although the Dirac Sea appears to be static, it never is, because of Heisenberg's uncertainty principle. Particle and antiparticle pairs spontaneously appear (pair production) and disappear (pair annihilation) at random. The time scale is about 10⁻²¹ seconds, far too brief for the human eye, but if a camera could capture it, the vacuum would look like a fluctuating sea. This is what is called "quantum fluctuation." The Big Bang emerged from this sea of quantum fluctuations at a singular point. The Big Bang itself is the beginning of the universe.
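As a rough order-of-magnitude check (a sketch, not from the text): by the energy-time uncertainty relation $\Delta E\,\Delta t \gtrsim \hbar/2$, an electron-positron pair borrowing the rest energy $\Delta E = 2m_e c^2$ can exist for only about

$$\Delta t \sim \frac{\hbar}{2\,(2 m_e c^2)} \approx 3 \times 10^{-22}\ \mathrm{s},$$

consistent with the 10⁻²¹-second scale quoted above.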
Immediately after the Big Bang, the universe underwent rapid changes due to its extremely high temperature and density. From 10⁻⁴³ seconds (the Planck time) to 10⁻³⁶ seconds, the universe was governed by Grand Unified Theory, in which three forces of the Standard Model (the strong, weak, and electromagnetic forces) are unified. The universe then passed through the inflationary epoch from 10⁻³⁶ to 10⁻³² seconds, the electroweak epoch from 10⁻³² to 10⁻¹² seconds, the quark epoch from 10⁻¹² to 10⁻⁶ seconds, the hadron epoch from 10⁻⁶ seconds to 1 second, and the lepton epoch from 1 second to 10 seconds.
At the end of the lepton epoch, a dramatic and pivotal event occurred. The lepton and antilepton pairs, primarily consisting of electrons and positrons, underwent mutual annihilation. This process released an immense number of photons (light particles), effectively flooding the universe with light. These photons became the dominant form of energy in the cosmos, marking the beginning of what is known as the photon epoch. This era, lasting from about 10 seconds to 380,000 years after the Big Bang, was characterized by a hot, dense plasma of free electrons, nuclei, and photons. During this time, photons were scattered by free electrons and protons, preventing them from traveling freely and making the universe opaque.
The recombination epoch followed at the end of the photon epoch, when another important event happened: electrons combined with protons to form neutral hydrogen and helium. This marks the start of the matter-dominated era. As this happened, the plasma-filled universe gradually became transparent, turning into the open space we might call the sky. Photons produced during the photon epoch, previously confined by the plasma, could now travel freely through the transparent universe. These freely moving photons, once brilliant visible light, have been stretched by cosmic expansion into microwaves and are observed today as the cosmic microwave background radiation.
The stars and galaxies we see today were formed from the atoms created during the recombination epoch. Since then, the universe has continued to expand in the aftermath of the Big Bang. When the universe was 9.8 billion years old, dark energy began to dominate, marking the start of the dark energy-dominated era. In this era, the universe continues to expand at an accelerated rate. This accelerated expansion is the current state of the universe.
ii. The Fate of the Universe (Big Bang Again?)
The fate of the universe depends on its overall density. According to measurements from WMAP, the current density of the universe is approximately equal to the critical density (about 10⁻²⁹ g cm⁻³) within a margin of error of 0.5%. However, this uncertainty means we cannot yet definitively determine the universe's ultimate fate until more precise measurements are obtained. If the universe's density is greater than the critical density, gravitational forces will eventually overcome the expansion, causing the universe to collapse back into itself in a catastrophic event known as the Big Crunch, characteristic of a closed universe.
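For reference, the critical density itself follows from the Friedmann equation, $\rho_c = 3H_0^2/8\pi G$. A minimal sketch assuming a representative Hubble constant of about 70 km/s/Mpc (an assumed value, not from the text):

```python
# Sketch: the critical density from the Friedmann equation,
# rho_c = 3 H0^2 / (8 pi G), assuming H0 ~ 70 km/s/Mpc.

import math

G = 6.674e-8             # gravitational constant, cm^3 g^-1 s^-2
MPC_CM = 3.086e24        # one megaparsec in cm
H0 = 70e5 / MPC_CM       # 70 km/s/Mpc converted to s^-1

rho_c = 3 * H0 ** 2 / (8 * math.pi * G)
print(f"Critical density: ~{rho_c:.1e} g/cm^3")  # ~9e-30, i.e. ~10^-29
```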
Conversely, if the density is less than the critical density, the universe will continue to expand forever, the hallmark of an open universe; if dark energy grows strong enough, this accelerated expansion could even end in a scenario known as the Big Rip. In any case, the universe's temperature will gradually fall as expansion progresses, and star formation will eventually cease due to the depletion of the interstellar medium necessary for star creation. Over time, the universe will become increasingly dark and cold, a process often referred to as "heat death."
Existing stars will run out of fuel and stop shining. Protons will then decay, as predicted by Grand Unified Theory, when the universe is around 10³² years old. Around 10⁴³ years, black holes will begin to evaporate via Hawking radiation. After all baryonic matter has decayed and all black holes have evaporated, the universe will be filled with radiation. Its temperature will approach absolute zero, and all will be dark and empty, resembling the state of the universe undergoing quantum fluctuations before the Big Bang.
Fig. 1.7. Fate of the universe and evaporating black hole
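For scale, the standard Hawking evaporation lifetime, $t \sim 5120\,\pi\,G^2 M^3/\hbar c^4$, can be sketched as follows; a solar-mass black hole would need roughly 10⁶⁷ years, and supermassive black holes far longer:

```python
# Sketch: Hawking evaporation time for a black hole of mass M,
# t ~ 5120 * pi * G^2 * M^3 / (hbar * c^4), in SI units.

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
HBAR = 1.055e-34   # J s
C = 3.0e8          # m/s
YEAR_S = 3.156e7   # seconds per year
M_SUN = 2.0e30     # kg

def evaporation_time_yr(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4) / YEAR_S

print(f"1 solar mass: ~{evaporation_time_yr(M_SUN):.1e} years")   # ~2e67 yr
print(f"1e9 solar masses: ~{evaporation_time_yr(1e9 * M_SUN):.1e} years")
```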
Recently, two cosmic megastructures were discovered about 7 billion light-years from Earth in the direction of the Big Dipper. The Giant Arc, reported in 2021, and the Big Ring, reported in 2024, challenge the cosmological principle, which states that the universe is homogeneous and isotropic on large scales. These megastructures require a proper explanation. One possibility is that they are huge cosmic strings or remnants of the Hawking evaporation of supermassive black holes (Hawking points) from a previous Big Bang.
This interpretation is related to Roger Penrose's Conformal Cyclic Cosmology (CCC). The CCC is a cosmological model based on general relativity, in which the universe expands forever until all matter decays and leaves black holes. In CCC, the universe iterates through infinite cycles, with a new Big Bang emerging within the ever-expanding current Big Bang.
Fig. 1.8. Big Ring (blue) and Giant Arc (red)
Personally, I find the CCC appealing because it offers potential solutions to some problems in galaxy evolution. There is a correlation between the mass of a galaxy's central black hole and its stellar velocity dispersion (the M-sigma relation), according to which a black hole's mass is about 0.1% of the mass of its host galaxy. Recently, Chandra and JWST discovered an intriguing galaxy, UHZ1, via gravitational lensing. UHZ1 lies at a distance of 13.2 billion light-years, seen when our universe was only about 3 percent of its current age. The estimated mass of UHZ1's black hole turned out to be larger than that of its host galaxy. Such a large black hole mass cannot be explained by current theories of black hole growth, but it can be explained by the CCC: the black hole in UHZ1 could be a black hole recycled from the previous Big Bang that served as a seed black hole during the current one.
We do not know how the new Big Bang occurs while the current Big Bang is still expanding. We could try using the concept of hyperspace. In this scenario, the universe is expanding into three-dimensional space. However, imagine our three-dimensional universe as a surface embedded in a higher-dimensional space (hyperspace). This higher-dimensional space could be a four-dimensional space (or more) where our entire universe is just a "slice" or a "brane."
As our universe continues to expand, it might eventually converge to a singular point in this higher-dimensional hyperspace, much like how a two-dimensional surface can curve and converge at a point in three-dimensional space. This point in hyperspace could be analogous to the neck of a Klein bottle, a higher-dimensional shape where the surface loops back on itself.
When the universe's expansion in three-dimensional space converges to this singular point in hyperspace, it could create conditions where the energy density becomes extremely high. If this singular point in hyperspace cannot accommodate the immense energy and vacuum energy influx from the current expanding universe, it could result in an explosion. This explosion would be the start of a new Big Bang, creating a new universe.
In this way, the ever-expanding current Big Bang universe could lead to the formation of a new universe within the hyperspace framework, with the convergence to a singular point acting as the bridge between cycles of the CCC. This higher-dimensional convergence provides a mechanism for continuous cycles of the Big Bangs while the current universe is still expanding, and this expanding universe's energy could also contribute to the dark energy driving its acceleration.
Fig. 1.9. Conformal Cyclic Cosmology
iii. The Creation of the Universe in the Bible
In this section, I will explore the creation of the universe as described in the Bible from an astronomical perspective, examining how the Biblical account might align with modern scientific understanding. This analysis will delve into the possible parallels between the scriptural account and astronomical observations. While this approach provides an interesting perspective, it is important to recognize that there are other ways to interpret the creation account in the Bible. These interpretations can vary based on theological, philosophical, and cultural contexts, each providing unique insights into the profound narrative of the universe's origins.
a) God declared the creation of the universe
The creation of the universe is described in Genesis, the first book of the Bible.
"In the beginning, God created the heavens and the Earth." (Genesis 1:1)
This verse introduces the act of creation by God, asserting that He is the initiator of everything that exists. The phrase "the heavens and the Earth" encompasses all of creation, indicating the totality of the universe.
" The earth was without form and void, and darkness was over the face of the deep. And the Spirit of God was hovering over the face of the waters." (Genesis 1:2)
The term "earth" here represents the physical, material creation (i.e., baryonic matter) that God would later shape. The phrase "The earth was without form" can be interpreted as describing a primordial state of emptiness, in which nothing had yet been created. The term "void" signifies an empty space, and if there is nothing within that space, it can legitimately be called a vacuum. Therefore, the phrase "The earth was without form and void" suggests that, from the very beginning, the universe existed as a vacuum, an initial state of nothingness. The next phrase ‘darkness was over the face of the deep’ has a profound meaning. The ‘darkness’ is חֹשֶׁך (choshek) in Hebrew and means literally total darkness without any light. The ‘deep’ is תְּהוֹם (tehom) in Hebrew and was derived from הום (hom) meaning ‘uproar’ or ‘fluctuate’. Thus, "The Earth was without form and void, and darkness was over the face of the deep" could mean "the universe started from a vacuum that was in a state of darkness and fluctuation." This interpretation matches perfectly with the state of the universe at the start, just before the Big Bang—a vacuum undergoing quantum fluctuations.
b) The creation of light
The main event on the first day of creation is the creation of light.
"And God said, “Let there be light,” and there was light." (Genesis 1:3)
The verse states that God initiated the creation of the universe by creating light. Similarly, the Big Bang began with a series of rapid epochs, which altogether lasted less than a second, ultimately leading to the creation of light (photons) during the photon epoch. The creation of light in Genesis 1:3 could be interpreted as a reference to this photon epoch.
c) The Creation of Sky
The main event on the second day of creation is the creation of sky (heavens).
"And God made the vault and…, God called the vault sky…." (Genesis 1:7, 8)
The creation of the sky described in Genesis can be correlated with the recombination epoch in Big Bang cosmology. Before this epoch, the universe was opaque, filled with a dense, hot plasma of electrons, protons, neutrons, and photons. This plasma scattered photons, preventing them from traveling freely and making the universe opaque to radiation. At the start of this era, the region that would become today's observable universe was only about 10 light-years across, so there was no clear space for a visible "sky."
However, in the recombination epoch, the universe cooled sufficiently for electrons and protons to combine and form neutral hydrogen atoms. This process cleared the plasma, making the universe transparent and allowing photons to travel freely through space. As a result, a vast, transparent expanse—what we recognize as the visible sky—came into existence, with a radius of about 42 million light-years. Thus, the creation of the sky in Genesis 1:7-8 could be interpreted as a reference to this pivotal event in cosmic history.
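The two sizes quoted above follow from a simple scaling: distances stretch with cosmic expansion, so the region that became today's observable universe (comoving radius about 46.5 billion light-years, an assumed standard value) was smaller by a factor of 1 + z at redshift z:

```python
# Sketch: size of the (future) observable universe at redshift z,
# size ~ (size today) / (1 + z).

R_TODAY_GLY = 46.5          # comoving radius today, billions of light-years

def radius_gly(z):
    return R_TODAY_GLY / (1 + z)

print(f"Recombination (z ~ 1100): ~{radius_gly(1100) * 1000:.0f} Mly radius")  # ~42 Mly
print(f"Start of photon epoch (z ~ 1e9): ~{radius_gly(1e9) * 1e9:.0f} ly radius")
```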
The following table summarizes the creation of the universe as described in the Bible and as explained by astronomy. The comparison in the table suggests that the account of creation in Genesis aligns remarkably well with the findings of astronomy!
| Genesis | Astronomy |
|---|---|
| Vacuum fluctuation (Gen 1:2 – before Creation) | Vacuum fluctuation (before Big Bang) |
| Creation of light (Gen 1:3 – Creation Day 1) | Creation of light (Photon epoch) |
| Creation of the sky (Gen 1:7-8 – Creation Day 2) | Creation of the sky (Recombination epoch) |
Table 1.1. Comparison of Creation in Genesis and Astronomy
c. Which was Created First, the Earth or the Sun?
The main event on the third day of creation in Genesis is the creation of dry land and sea. This can be understood as the period during which the Earth was formed and structured. The process of gathering water and revealing dry land signifies the development of the Earth's surface and geographical features. The main event on the fourth day in Genesis is the creation of the Sun. Thus, the Earth was created before the Sun. It will be interesting to examine whether the Biblical account is consistent with astronomical observations. Let's explore it.
Stars and planets are formed from molecular clouds. Molecular clouds are made up of about 98% gas (about 70% hydrogen and 28% helium) and 2% dust (carbon, nitrogen, oxygen, iron, etc.). Most of the stars and Jovian planets are made of gas, and most of the terrestrial planets are made of dust. Protostars are formed when molecular clouds collapse under their own gravity. During this process, the remaining material from the molecular clouds forms a rotating disk known as a protoplanetary disk, which is the region where planets eventually take shape. The gravitational collapse initiates the heating and compression of the core, leading to the birth of a protostar, while the surrounding spinning disk provides the environment for the formation and evolution of planetary bodies.
As the protostar continues to contract, it becomes a pre-main-sequence star and follows the stellar evolution tracks known as the Hayashi track (for low-mass stars) and the Henyey track (for high-mass stars) in the Hertzsprung-Russell (H-R) diagram. Pre-main-sequence stars are observed as T Tauri stars if their mass is less than 2 solar masses, and as Herbig Ae/Be stars if their mass is greater than 2 solar masses. A pre-main-sequence star continues to contract until its internal temperature rises to 10 to 20 million Kelvin. At this point, it ignites hydrogen nuclear fusion and becomes a true star in the sky. Stars in this stage are called main sequence stars.
According to stellar evolution theory and helioseismology studies, the Sun stayed in the pre-main sequence stage for about 40 to 50 million years, after which it became a main sequence star.
Fig. 1.10. Protostar and protoplanetary disk, and H-R diagram
While the star is forming in the center, planets are forming in the protoplanetary disk. Collisions of dust particles and gas form pebbles, pebbles grow into rocks, and rocks develop into planetesimals. These planetesimals are building blocks of planets.
Only recently have the details of the planet formation process in the protoplanetary disk been actively studied. Studies predict that it will take a few million years to form an Earth-sized planet from 1 mm-sized pebbles. This prediction can be tested with actual observations, including ALMA sub-millimeter images of T Tauri stars HL Tau and PDS 70.
The mass of HL Tau is approximately two solar masses, and its age is about one million years. The image reveals that several planets have already formed and are orbiting the central pre-main-sequence star, as indicated by the gaps in the protoplanetary disk. The mass of PDS 70 is about 0.76 solar masses, and its age is about 5.4 million years. Two exoplanets, PDS 70b and PDS 70c, have been directly imaged by ESO's VLT. In 2023, spectroscopic observations by the James Webb Space Telescope detected water in the terrestrial planet-forming region of the protoplanetary disk, suggesting that two or more terrestrial planets have formed inside. It is important to note that the gas and dust clouds seen in HL Tau have largely been cleared in PDS 70, and terrestrial planets containing water have formed in the center.
In PDS 70, terrestrial planets formed within about 5.4 million years; even if the process took 10 million years, that would still be far less than the 40 to 50 million years the Sun needed to become a main sequence star. This suggests that the Earth was created earlier than the Sun, as stated in Genesis, consistent with astronomical observations.
Fig. 1.11. HL Tau and PDS 70
Another main event God performed on the third day was the creation of plants and trees. Atheists and evolutionists often ask how these plants and trees could have survived if the Sun was created on the fourth day. This question can be addressed within the context of stellar evolution theory. When the Earth was formed, the Sun was still in the T Tauri stage. Although T Tauri stars are not main-sequence stars, their surface temperatures range between 4,000 and 5,000 Kelvin, and blackbody radiation at these temperatures peaks at visible wavelengths. Furthermore, as a T Tauri star the Sun was several times larger than its current size. Therefore, it could provide sufficient energy in the visible range to enable photosynthesis in plants and trees.
d. Is the Earth 6,000 Years Old?
Young Earth creationism is the belief that the Earth and the universe are relatively young, typically around 6,000 to 10,000 years old, based on a literal interpretation of the Bible's creation account in Genesis. Young Earth creationists believe that the Earth was created in six 24-hour days and reject much of the modern scientific consensus regarding the age of the Earth and the universe. Extensive scientific evidence from various fields, including geology, astronomy, and physics, indicates that the Earth is approximately 4.6 billion years old and the universe about 13.8 billion years old. Despite this ample evidence, young Earth creationists do not agree. The situation is reminiscent of the debate between the geocentric and heliocentric models in the days of Galileo Galilei.
Before delving into the main discussion, let's consider a few examples that make it easy to understand that the Earth and the universe are at least several million years old.
The Earth's crust is composed of tectonic plates that move slowly, causing earthquakes. No one would deny this fact. A hot spot is a point where magma flows out from deep within the mantle beneath the crust, with its center fixed in place. When magma flows out onto the crust and cools, it forms land. The Hawaiian Islands are a prime example of this process. On the Big Island of Hawaii, Kilauea is still an active volcano, and as the magma it erupts cools in the seawater, new land is formed. The newly formed land moves northwest at a rate of about 7-10 cm per year due to plate tectonics, and this process has created the various islands of Hawaii. This is happening even now, and it is an undeniable fact.
Considering the speed at which the tectonic plates move, the ages of the Hawaiian Islands are estimated as follows: the Big Island is 400,000 years old, Maui is 1 million years old, Molokai is 1.5-2 million years old, Oahu (where Waikiki is located) is 3-4 million years old, and Kauai is about 5 million years old. In the Big Island, one can see that much of the land is still covered in black volcanic soil, indicating minimal weathering. In contrast, Kauai has undergone significant weathering, allowing vegetation to flourish, earning it the nickname "The Garden Isle." This example provides direct evidence that the Earth is at least several million years old.
Fig. 1.12. Geologic history of the Hawaiian Islands
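The dating logic here is simply age ≈ distance from the hotspot divided by plate speed. A sketch with rough, assumed island-to-hotspot distances (the plate speed is from the text):

```python
# Sketch: dating an island as (distance from the hotspot) / (plate speed).
# Distances below are rough, assumed map values.

PLATE_SPEED_CM_PER_YR = 8.5   # within the 7-10 cm/yr range quoted above

def age_myr(distance_km):
    return distance_km * 1e5 / PLATE_SPEED_CM_PER_YR / 1e6  # cm / (cm/yr) -> Myr

for island, d_km in [("Oahu", 350), ("Kauai", 520)]:
    print(f"{island}: ~{age_myr(d_km):.1f} million years")
# Oahu: ~4.1 Myr, Kauai: ~6.1 Myr -- in line with the 3-4 and ~5 Myr estimates.
```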
To directly understand that the universe is at least several million years old, one only needs to accept that light travels at 300,000 km per second. The Sun is 150 million km away from Earth, so the sunlight we receive now left the Sun 8.3 minutes ago. The Sun is about 400 times larger than the Moon, but because it is much farther away, it appears about the same size as the Moon in the sky. No one would deny this. The Andromeda Galaxy is similar in size to our Milky Way but is 2.5 million light-years away, and it still spans several times the apparent size of the Moon. The fact that we can see the Andromeda Galaxy means that the light we are observing was emitted in Andromeda 2.5 million years ago and has only now reached us. If you have seen the Andromeda Galaxy, you cannot deny this fact. This is direct evidence that the universe is at least several million years old.
Despite these facts, if one still insists that the Earth is 6,000 years old, it could become a stumbling block rather than aid in spreading the gospel, potentially distancing many people from it. Therefore, instead of advocating for young Earth creationism, it might be more reasonable to carefully read Genesis in the Bible and try to find a solution.
For humans, time always flows from the present to the future and never flows backward. We define one day as 24 hours, but if we were created on other planets, a day would not be 24 hours. For example, if we were created on Venus, one day would be 243 Earth days, and on Jupiter, one day would be 10 Earth hours. Therefore, unless we change our definition and perception of time from a geocentric perspective, it will be difficult to address this issue. Let’s discuss this further with these facts in mind.
i. The Days in Genesis
First, let's estimate the age of the universe based on the records in Genesis. According to Genesis, God created the universe and everything in it over six days. The time elapsed from Adam to Noah can be estimated using the genealogical records in Genesis 5:3–32. Noah’s flood occurred when Noah was 600 years old, and the total number of years from Adam to the flood is 1,656 years. We do not know when Noah’s flood occurred. Some biblical scholars and traditions attempt to date the flood using genealogies in the Bible, estimating it occurred around 2300–2400 BC. Therefore, the age of the universe, according to this interpretation, is 7 days + 1,656 years + 4,400 years = 6,056 years. This is the theoretical basis of young Earth creationists' claim that the Earth is 6,000 years old.
To address the day-age problem, let's take another look at Genesis. While there seem to be no issues with the genealogical records in Genesis, some debate might exist regarding the exact year of Noah's flood. However, whether Noah's flood occurred 4,400 years ago or 44,000 years ago does not significantly affect the comparison with the scientifically understood age of the universe of 13.8 billion years. So, where is the key to resolving the day-age problem? Perhaps you have already noticed: the key lies in the interpretation of the first seven days of creation.
Fig. 1.13. To define a day, the Earth and Sun must exist beforehand.
The reason is simple: a day is defined as the rotation period of the planet we live on. To define a day, both the Sun and the Earth must exist beforehand. However, Genesis records that the Earth was created on the third day, and the Sun on the fourth day, yet God used the terms "day" and "night" even before their creation. This implies that the "day" in Genesis is not the 24-hour day as we define it, but a "day" as defined by God. The fallacy of young Earth creationists lies in their misunderstanding that the "day" mentioned in Genesis refers to a literal 24-hour human day, leading to a misinterpretation of the term "day" in the Genesis account.
If the days in Genesis are not the 24-hour periods defined by humans, you might wonder how long they are in human terms. While we do not know the exact answer, we can estimate an approximate period by comparing the creation events described in Genesis with those of the Big Bang.
The main event on the first day of creation is the creation of light. The photon epoch in the Big Bang corresponds to this event, giving the first day a human time of 380,000 years. The main event on the second day is the creation of the sky. The recombination epoch corresponds to this event, giving the second day a human time of about 100,000 years. The main event on the third day is the creation of the Earth. Given that the Earth is 4.6 billion years old, the corresponding human time for the third day is less than 9.2 billion years (13.8 billion years minus 4.6 billion years). The main event on the fourth day is the creation of the Sun. Since the Sun was created about 30 to 40 million years after the Earth, the human time for the fourth day is more than 30 million years. The following table summarizes these results.
| Day in Creation | Event in Genesis | Event in Astronomy | Human Time |
|---|---|---|---|
| Day 1 | Creation of light | Creation of light in the photon epoch | 380,000 years |
| Day 2 | Creation of the sky | Creation of the sky in the recombination epoch | 100,000 years |
| Day 3 | Creation of the Earth | Creation of the Earth | < 9.2 billion years |
| Day 4 | Creation of the Sun | Creation of the Sun | > 30 million years |
Table 1.2. Days of Creation in Genesis Interpreted in Human Time
Here, we notice some unexpected facts about the concept of time as used by God. The days in the creation account are much longer compared to a human day of 24 hours. Furthermore, God's time is not fixed but varies, ranging from hundreds of thousands of years to as long as billions of years. How can we understand this? In some sense, this is not a surprising result but an expected one.
ii. The Creator of Time
The "day" used in Genesis is yom (יום) in Hebrew. Yom can be interpreted in several ways, including one that refers to age or a long period of time. This interpretation suggests that each "day" of creation represents a lengthy epoch during which specific acts of creation took place. Another interpretation is that "yom" signifies a period of indeterminate length. This view posits that God's days are not bound by human time constraints, acknowledging that God, as the creator of time, operates outside of our temporal limitations. Examples of this interpretation can be found in the Bible.
In 2 Peter in the New Testament, it is written:
"But do not forget this one thing, dear friends: With the Lord a day is like a thousand years, and a thousand years are like a day." (2 Peter 3:8)
This passage is meant to encourage those who wait for God's promises to do so patiently. It may also suggest that God's perspective on time differs from that of humans, implying that God can expand or contract time as He wills. We understand that time is not a fixed quantity. According to special relativity, time moves more slowly for a moving observer than for one at rest ($\Delta t = \Delta t_0 / \sqrt{1 - v^2/c^2}$, where $\Delta t_0$ is the proper time of the moving clock). In general relativity, time passes more slowly in a strong gravitational field ($\Delta t = \Delta t_0 / \sqrt{1 - 2GM/rc^2}$).
Fig. 1.14. Illustration of time dilation
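A small sketch of both effects, using the formulas above (the solar mass and radius are standard rounded values):

```python
# Sketch: time dilation factors from special and general relativity.

import math

C = 3.0e8          # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

def sr_gamma(v):
    """Special relativity: a moving clock ticks slower by a factor of gamma."""
    return 1 / math.sqrt(1 - (v / C) ** 2)

def gr_rate(mass_kg, r_m):
    """General relativity: clock rate at radius r from a mass, relative to far away."""
    return math.sqrt(1 - 2 * G * mass_kg / (r_m * C ** 2))

print(f"v = 0.9c: gamma = {sr_gamma(0.9 * C):.2f}")            # ~2.29
print(f"Sun's surface: rate = {gr_rate(2e30, 7e8):.8f}")       # slightly below 1
```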
God can not only expand or contract time but also stop it. In the Old Testament book of Joshua, it is written:
"The Sun stopped in the middle of the sky and delayed going down about a full day." (Joshua 10:13)
This miracle occurred during Joshua's battle with the Amorites and demonstrates that God has the power to freeze time. Furthermore, God performed an even more astonishing miracle, as recorded in 2 Kings of the Old Testament:
"Then the prophet Isaiah called on the LORD, and the LORD made the shadow go back the ten steps it had gone down on the stairway of Ahaz.” (2 Kings 20:11)
The verse above reflects God’s response to King Hezekiah’s tearful prayer for a longer life. In His mercy, God heard Hezekiah and granted him 15 additional years. To confirm His promise, God performed a miraculous sign, causing the shadow on the stairway of Ahaz (sundial) to move backward by ten steps. This miracle indicates that God has the power to reverse time, a concept that is beyond the scope of our current scientific understanding.
Fig. 1.15. Stairway of Ahaz (Sundial)
For humans, time flows unidirectionally from present to future, but for God, as shown in the Bible, time is a variable He can control. God can shorten, extend, freeze, or even reverse time, demonstrating His sovereignty over natural laws and highlighting the contrast between human limitations and His infinite power.
e. The Fine-tuned Universe
The term "fine-tuned universe" expresses the fact that the fundamental physical constants that make up and operate the universe are finely tuned with extreme precision for life to exist in the universe.
If the density of the universe had been greater than the critical density, the universe would have contracted immediately after its formation. Conversely, if it had been smaller than the critical density, the universe would have expanded too rapidly, preventing the formation of stars and galaxies. In either case, we would not exist in this world.
In his book The Emperor's New Mind, Penrose used the Bekenstein-Hawking formula for black hole entropy to estimate the odds at the Big Bang. He calculated that the likelihood of the universe coming into existence in a way that would develop and support life as we know it is 1 in 10 to the power of 10¹²³. This suggests that our universe did not arise from random chance but through extraordinary fine-tuning by the divine Creator!
The fundamental constants of physics, such as the gravitational constant, the speed of light in vacuum, Planck's constant, Boltzmann's constant, the electric constant, the elementary charge, and the fine-structure constant, must be fine-tuned for life to exist in the universe. If these constants were even slightly different, the universe would be unable to support life.
For example, if the gravitational constant were smaller than it is now, the force of gravity would be weaker. This reduced gravitational pull would make it impossible for matter to coalesce into stars, galaxies, and planets, including the Earth we live on today. If Planck's constant were larger than it is now, several fundamental changes would occur in the physical universe. First, the intensity of solar radiation would decrease, so less energy would reach the Earth from the Sun, affecting many natural processes, including climate and weather patterns. In addition, a larger Planck constant would increase the size of atoms, as the quantization of atomic energy levels would change. This would weaken the bonding strength of atoms and molecules, making chemical reactions less stable. Photosynthesis in plants, which relies on the precise absorption of light energy to convert carbon dioxide and water into glucose, would become less efficient. The biochemical and physical processes that depend on the current balance of quantum mechanics would be altered, resulting in a dramatically different and less stable environment for life.
Among the fundamental constants, the fine-structure constant has attracted special attention from physicists. The fine-structure constant, denoted by the Greek letter α (alpha), quantifies the strength of the electromagnetic interaction between elementary charged particles.
It is a dimensionless quantity with an approximate value of 1/137, a figure that has intrigued physicists since its discovery. Its precise value is crucial to the stability of the universe and the existence of life. If it were even slightly different from its current value, life as we know it would not exist.
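For reference, the standard definition in SI units, with $e$ the elementary charge, $\varepsilon_0$ the electric constant, $\hbar$ the reduced Planck constant, and $c$ the speed of light:

$$\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137.036}.$$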
If α were greater than 1/137, the electromagnetic interaction between particles would become stronger. Electrons would be more tightly bound to the nucleus, reducing the size of atoms and making the formation of heavy elements easier, while light elements such as hydrogen would be less likely to form. Since hydrogen is a crucial raw material for nuclear fusion, this change would directly affect the survival of life by limiting the availability of hydrogen needed for energy production in the Sun and stars. Conversely, if α were smaller than 1/137, the electromagnetic interaction between particles would become weaker. Electrons would be less tightly bound to the nucleus, leading to unstable atoms and molecules. Such instability would cause atoms and molecules to decay more easily, preventing the formation of complex molecules like DNA and proteins, which are essential for life. Thus, any significant change in the fine-structure constant would have profound implications for the formation of matter and the potential for life in the universe.
We do not know the origin of its numerical value α ≈ 1/137. Dirac considered the origin of α to be "the most fundamental unsolved problem of physics". Feynman described α as "God's number," a "magic number" that shapes the universe and comes to us without understanding: you might say the "hand of God" wrote that number, and "we don't know how He pushed His pencil."
If we rewrite the equation for α, it can represent several ratios: the velocity of the electron in the first Bohr orbit to the speed of light (light travels 137 times faster than that electron), the electrostatic repulsion between two electrons to the energy of a single photon, and the classical electron radius to the reduced Compton wavelength of the electron. Additionally, the ratio of the strength of the electromagnetic force to the gravitational force is about 10³⁶, and the ratio of the electromagnetic force to the strong force is 1/137. Thus, the numerical value of the dimensionless constant α could serve as a reference point for the four fundamental forces.
The fine-tuned universe reflects the intricate balance and precision underlying the universe’s existence. From the precise calibration of fundamental constants to the seamless interplay of physical laws that make life possible, the cosmos reveals an extraordinary order that inspires awe and curiosity. This remarkable precision raises profound questions about the universe’s origins and purpose, inviting both scientific inquiry and philosophical reflection. The concept of divine design provides a compelling perspective on the extraordinary harmony sustaining all things, encouraging us to marvel at the universe and contemplate our unique place within it.
If individuals who simply discovered the fundamental principles of the universe—such as gravity, relativity, the uncertainty principle, Pauli's exclusion principle, and the Higgs mechanism—are celebrated as geniuses and awarded Nobel Prizes, how much greater must God be, the Creator who designed these laws and principles and brought the entire universe into existence?
2. God's Masterpiece: the Earth (How special is Earth in the universe?)
The Earth we live on provides several fine-tuned conditions essential for the survival of living organisms. These conditions are so precise that they often serve as an extension of the fine-tuned universe.
In this context, we will explore ten special conditions of Earth that are particularly unique and crucial for supporting life as we know it. These conditions highlight the extraordinary balance and precision required to sustain living organisms, making our planet an exceptional oasis in the vast expanse of the universe. By examining these unique attributes, we can gain a deeper appreciation for the intricate interplay of factors that enable life to thrive on Earth.
a. Right Distance from the Sun
The presence of liquid water is crucial for life. To have liquid water, a planet must orbit within a specific region around its central star. If the planet is too close to the star, all the water will boil away, and if it is too far, all the water will freeze. The range of orbits where water neither boils nor freezes is called the ‘habitable zone’. The estimated habitable zone in our solar system is between 0.95 AU and 1.15 AU (1 AU is the distance from Earth to the Sun). Thus, if Earth were 5% closer or 15% further away from the Sun, we would not be here.
The percentage of the ecliptic plane out to Neptune (30 AU) occupied by the habitable zone is only 0.05%. The eccentricity of Earth's orbit is another important factor affecting the habitable zone. For example, if the eccentricity were larger than 0.5, the oceans would boil each year near perihelion and freeze each year near aphelion. Fortunately, Earth's eccentricity is only 0.017, resulting in an almost circular orbit.
Fig. 2.1. Habitable zone (green) in the solar system
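The 0.05% figure can be reproduced by comparing the area of the habitable-zone annulus with that of a disk extending to Neptune's orbit, using the boundaries quoted above:

```python
# Sketch: the habitable zone's share of the ecliptic disk out to Neptune,
# treated as an annulus (0.95-1.15 AU) inside a 30 AU disk.

import math

INNER, OUTER, DISK = 0.95, 1.15, 30.0   # AU, values from the text

hz_area = math.pi * (OUTER**2 - INNER**2)
disk_area = math.pi * DISK**2
print(f"Habitable-zone fraction: {hz_area / disk_area:.2%}")   # ~0.05%
```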
b. The Right Axial Tilt
The rotation axis of the Earth is tilted at about 23.5 degrees. Because of this, we have four seasons and mild weather. What would happen if the rotation axis were not tilted at all (0 degrees; cf. Mercury's axial tilt of 0.0 degrees) or were tilted completely (90 degrees; cf. Uranus's axial tilt of 82.2 degrees)?
If Earth's rotation axis were not tilted, several significant changes would occur in terms of climate, seasons, and habitability. The equator would receive constant, direct sunlight year-round, leading to perpetually hot temperatures. Conversely, the poles would always receive minimal sunlight, resulting in perpetual cold. This drastic temperature contrast would significantly affect global climates and weather patterns.
The absence of seasons would have profound impacts on ecosystems and agriculture. Regions near the equator might become too hot for many crops and organisms to thrive, while the polar regions would remain inhospitably cold. The middle latitudes would become the primary habitable zones, but even these areas would lack the seasonal variations that many plants and animals rely on for life cycles and reproduction.
Human societies would face serious challenges, including reduced agricultural productivity and increased pressure on habitable land. The lack of seasonal cues could also disrupt cultural and economic activities that depend on the changing seasons. Overall, a non-tilted Earth would lead to a less dynamic and less hospitable environment for life.
Fig. 2.2. Earth’s axial tilt. No tilt (left) and 90 degrees tilt (right)
If Earth's rotation axis were completely tilted to 90 degrees, it would have profound and dramatic effects on the planet's climate and environment. In this scenario, one hemisphere would experience continuous daylight for half the year while the other would be in constant darkness, and then the situation would reverse for the other half of the year.
Each hemisphere would undergo extreme seasonal variations. During its summer, one hemisphere would receive constant sunlight, leading to prolonged periods of intense heat and potentially desert-like conditions. Conversely, during its winter, the same hemisphere would experience continuous darkness and freezing temperatures.
The drastic changes in light and temperature would severely disrupt ecosystems. Many plants and animals are adapted to the current seasonal cycle, and such extreme changes would threaten their survival.
Agriculture, which relies on predictable seasons, would be significantly affected. Regions currently suitable for farming might become uninhabitable, leading to food shortages and the need for major adaptations in agricultural practices.
Overall, a completely tilted axis would make Earth much less hospitable for life, creating extreme and unstable environmental conditions.
c. The Right Rotation and Orbital Periods
The rotation period of the Earth is 24 hours, with about 12 hours of day and 12 hours of night. Our biorhythms were shaped by this rotation period. The 24-hour day provides an optimal division into 8 hours of work, 8 hours of sleep, and 8 hours of leisure. However, not all planets in the solar system have such an optimal rotation period. For example, the rotation period of Jupiter is about 10 hours, whereas that of Venus is 243 days.
If Earth's rotation period were shortened to 10 hours, it would significantly impact the planet's environment and life. A faster rotation would result in shorter days and nights, causing a rapid alternation between daylight and darkness. This could disrupt the circadian rhythms of many organisms, affecting sleep patterns, feeding behaviors, and reproduction cycles.
The increased rotational speed would also lead to stronger Coriolis effects, intensifying weather patterns and potentially causing more severe storms and hurricanes. The faster rotation could affect the Earth's tectonic activity as well: the increased centrifugal force might lead to more frequent and intense earthquakes and volcanic eruptions.
On the other hand, if the Earth's rotation period were 243 days as in Venus, the consequences for the planet and its inhabitants would be drastic. Such a slow rotation would mean extremely long days and nights, each lasting about 120 days.
The side facing the Sun would experience prolonged heating, leading to scorching temperatures, while the side facing away would endure extended darkness and severe cooling, potentially freezing over. These temperature extremes would make it challenging for most forms of life to survive. The prolonged heating and cooling periods would disrupt atmospheric circulation, likely causing extreme weather patterns. Hurricanes, massive storms, and prolonged droughts or floods could become common.
The long periods of daylight and darkness would severely disrupt plant and animal life cycles, affecting photosynthesis, reproduction, and feeding patterns.
Human activities, agriculture, and infrastructure would need significant adaptation to cope with the harsh and varying conditions, posing a tremendous challenge to survival and daily living.
The orbital period of the Earth is also important for human survival. The orbital period of the Earth is 365 days with 3 months each for spring, summer, autumn, and winter. The length of each season is well-balanced, ensuring that no season is too short or too long. This balance is crucial for agricultural cycles, plant growth, the timing of animal migrations, and other ecological processes.
What happens if the Earth has a short orbital period like 88 days, similar to Mercury? In this scenario, each season would last only about 3 weeks. Most crops on Earth require 6 to 9 months from sowing in spring to harvesting in fall. However, with seasons changing every 3 weeks, crops would not have enough time to mature, leading to serious food shortages and directly impacting human survival.
Conversely, what happens if the Earth has a long orbital period like 164 years, similar to Neptune? Each season would last about 40 years. Prolonged summers would lead to extended heat waves and potential desertification, while extended winters would cause long periods of cold and ice, impacting agriculture and ecosystems. While humans might adapt to avoid food shortages, wild animals would struggle to find food during a 40-year-long winter. The prolonged harsh conditions would make it nearly impossible for most wildlife to survive, leading to widespread extinction.
d. The Right Size
You may not have thought about it, but the size of the Earth is crucial for the survival of human beings. The planet's size affects its gravitational pull, which in turn influences everything from the retention of a life-sustaining atmosphere to the ability to support stable bodies of water and maintain a protective magnetic field.
If Earth were half its current size (at the same average density), its surface gravity would be reduced to about half. This reduced gravity would have significant and potentially devastating impacts on the planet's ability to support life. It might not be strong enough to retain a dense atmosphere; a thinner atmosphere would offer less protection from harmful solar radiation and meteoroids and might not support the stable weather patterns necessary for life.
The reduced gravity would also affect the retention of liquid water, leading to increased evaporation rates and potentially a loss of surface water over time. This would make it difficult to sustain oceans, rivers, and lakes, which are crucial for supporting diverse ecosystems and human civilization.
Additionally, a smaller Earth would have a diminished magnetic field, offering less protection from the solar wind. This could strip away the atmosphere and further expose the surface to harmful cosmic and solar radiation, making the planet much less hospitable for human beings and other forms of life.
If the Earth were twice its current size, the effects on gravity and escape velocity would be profound. At the same average density, both the surface gravity and the escape velocity would double, making everything on Earth feel heavier. This heightened gravity would make movement more strenuous for humans and other organisms, potentially leading to greater physical stress and adaptations over time.
The combination of increased gravity and escape velocity would also affect the atmosphere. A stronger gravitational pull would retain more gases, including toxic ones like methane and ammonia, similar to the atmospheres of Saturn and Jupiter. These gases could accumulate to harmful levels, creating a toxic environment unsuitable for most life forms.
Additionally, increased gravity could affect geological processes, leading to more intense volcanic activity and higher mountains. Overall, a larger Earth with increased gravity and escape velocity would present significant challenges for the survival of life, potentially resulting in a more hostile and unstable environment.
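The scaling behind both scenarios can be made explicit. At a fixed average density, a planet's mass grows with the cube of its radius, so surface gravity (g = GM/r²) and escape velocity (v = √(2GM/r)) both scale linearly with radius. A minimal Python sketch using standard values for Earth; the half- and double-size cases are the hypothetical planets discussed above:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6   # Earth's mean radius, m
M_EARTH = 5.972e24  # Earth's mass, kg

def surface_gravity(r, m):
    """g = G*M / r^2"""
    return G * m / r**2

def escape_velocity(r, m):
    """v = sqrt(2*G*M / r)"""
    return math.sqrt(2 * G * m / r)

for scale in (0.5, 1.0, 2.0):
    r = R_EARTH * scale
    m = M_EARTH * scale**3  # same average density: mass scales with volume
    print(f"{scale:>3}x Earth: g = {surface_gravity(r, m):5.1f} m/s^2, "
          f"v_esc = {escape_velocity(r, m) / 1000:4.1f} km/s")
# Half-size: ~4.9 m/s^2 and ~5.6 km/s; double-size: ~19.6 m/s^2 and ~22.4 km/s.
# Gravity and escape velocity halve or double right along with the radius.
```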
Fig. 2.3. Comparison of the sizes of the planets in the solar system
e. The Existence of Magnetosphere
Earth is surrounded by a system of magnetic fields known as the magnetosphere, which shields the planet from harmful solar and cosmic radiation. This protective shield is crucial for maintaining life on Earth. To have a magnetosphere, two factors are essential: the proper rotation speed and the existence of a metallic liquid outer core. Fortunately, Earth possesses both. The planet’s rotation induces fluid motions (convection) within the liquid outer core, generating strong magnetic fields that form the magnetosphere.
What would happen if we didn’t have a magnetosphere? If Earth lacked a magnetosphere, the consequences for living organisms and the atmosphere would be severe. Without this protective shield, harmful solar and cosmic radiation would bombard the planet, significantly increasing the risk of cancer and genetic mutations in living organisms. Additionally, the magnetosphere helps prevent atmospheric loss by deflecting charged particles from the solar wind. Without it, these particles would strip away the atmosphere over time through a process called sputtering, depleting essential gases like oxygen and nitrogen. This atmospheric erosion would lead to a thinner atmosphere, reduced surface pressure, and extreme temperature variations, making Earth less hospitable for life.
The strength of the magnetic field on Mars is only about 0.01% of Earth's. Because its field is so weak, Mars could not form a global magnetosphere, and as a result most of its atmosphere was stripped away by sputtering.
Fig. 2.4. Earth’s magnetosphere deflects harmful cosmic rays
The field lines of the magnetosphere converge at the poles near the Arctic and Antarctic, causing a natural weakening of the magnetic field strength there. This can result in increased exposure to solar radiation in these areas. The high-energy charged particles ionize and excite atoms in the upper atmosphere, producing the colorful aurora borealis (northern lights) and aurora australis (southern lights).
f. The Existence of an Exceptionally Large Moon
Earth has an exceptionally large Moon compared to other planets. Among the terrestrial planets, only Earth and Mars possess moons. Mars has two small moons, Phobos and Deimos, named after twin characters from Greek mythology, with diameters of 22.2 km and 12.6 km, respectively. In stark contrast, Earth's Moon has a diameter of 3,475 km, making it vastly larger than the moons of Mars.
The existence of a large Moon plays two important roles in supporting human survival: i) stabilizing the rotation axis of the Earth and ii) maintaining marine ecosystems.
Without the Moon, the largest gravitational forces acting on Earth would be from the Sun and Jupiter. As the Earth orbits the Sun, varying degrees of gravitational force from the Sun and Jupiter would destabilize Earth's rotation axis. If the rotation axis of the Earth wobbled significantly, we would experience serious climate changes, as discussed in the previous section.
In fact, over the past 6 million years, Mars has experienced substantial changes in its rotation axis and eccentricity approximately every 150,000 years due to the absence of a stabilizing large moon. During this period, the rotation axis has varied between 15 and 45 degrees, while the eccentricity has changed between 0 and 0.11.
Fig. 2.5. Rotation axis and eccentricity changes in Mars
Ocean tides are mainly caused by the gravitational force of the Moon. Tides provide oxygen to floating plankton and distribute them over wide areas, where they are consumed by small fish. Tides also mix nutrient-rich freshwater with saltwater, delivering these nutrients to plankton and small fish. Without tides, nutrient-rich freshwater would not mix with saltwater, leading to uncontrollable algal blooms. If the algae contain toxins, these blooms produce red tides or harmful algal blooms (HABs), which can kill fish, sea birds, mammals, and even humans. Even if the algae are non-toxic, they consume the oxygen in the water as they decay and can clog the gills of fish and other marine life. If there were no Moon, the marine ecosystem would have been destroyed long ago. We would also have no seafood, including lobster, shrimp, and sushi.
Moreover, if Earth's Moon were smaller or larger than it is, or orbited closer or farther than it does, we would likely face similar problems.
Fig. 2.6. Red tide
g. The Existence of Jupiter, the Guardian of the Earth
Jupiter is the largest planet in the solar system, 11.2 times wider and 318 times more massive than Earth. Its presence is important for our survival. Earth is constantly bombarded by meteorites (mostly shattered asteroids and fragments of comets). Objects about one meter across arrive roughly every hour, a few meters across about once a day, several meters to ten meters across about once a year, tens of meters across about once a decade, and tens to a hundred meters across about once a century.
When meteorites smaller than 10 meters enter the atmosphere, most of them burn up due to atmospheric friction and compression. Objects larger than 10 meters, however, can cause disastrous events. In 1908, a meteorite roughly 55 meters across exploded at an altitude of 5 to 10 km over the Tunguska region, flattening about 80 million trees over an area of 2,150 km². The Tunguska event is the largest impact event on Earth in recorded history.
Fig. 2.7. Size and frequency of meteorites falling on Earth
Fig. 2.8. Trees toppled by a meteorite that fell on Tunguska
Jupiter is vital because it acts as a cosmic vacuum cleaner, capturing meteorites and comets that might otherwise impact Earth and cause catastrophic events like the Tunguska event. Simulations indicate that Jupiter is about 5,000 times more effective at capturing comets than Earth. A notable demonstration of this occurred in 1994, when Jupiter captured the fragmented comet Shoemaker-Levy 9, estimated at about 1.8 km in size. If this comet had hit Earth instead, it could have sent dust and debris into the atmosphere, blocking sunlight. This blockage could last long enough to kill all plant life, leading to the extinction of the people and animals that depend on plants for survival.
Fig. 2.9. Fragmented Shoemaker-Levy 9 and its impact on Jupiter
h. The Existence of Plate Tectonics
Plate tectonics is the theory describing the large-scale motion of Earth's lithosphere, which is broken into several large tectonic plates driven by the mantle's convective motions. This theory explains many geological phenomena, including the movement of continents, the formation of mountains, earthquakes, and volcanic activity.
Fig. 2.10. The plates that make up the Earth's crust
Plate tectonics plays a crucial role in various aspects of Earth's systems that directly and indirectly impact human survival. One of its most important aspects is the automatic regulation of Earth's climate via the carbon cycle.
The Earth's climate is mainly determined by the incoming solar radiation, the albedo of Earth's surface, and the composition of the atmosphere. Of these, incoming solar radiation remains almost constant over long periods. The albedo is the fraction of incoming radiation that is reflected back to space. The radiation that is absorbed warms Earth's surface, which re-emits the energy as infrared radiation. A significant fraction of this infrared radiation is absorbed by carbon dioxide (CO2) molecules in the atmosphere, which re-radiate it in all directions, with about half returning to Earth as heat. This trapped heat energy raises the average global surface temperature, a phenomenon known as the greenhouse effect.
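The interplay of solar radiation, albedo, and greenhouse warming can be seen in the standard zero-dimensional energy-balance estimate, in which absorbed sunlight, S(1 − A)/4, balances blackbody emission, σT⁴. A minimal Python sketch with textbook values:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # solar constant at Earth's distance, W/m^2
ALBEDO = 0.30     # fraction of incoming sunlight reflected back to space

def effective_temperature(solar_constant, albedo):
    """Temperature at which emitted blackbody radiation balances absorbed sunlight."""
    absorbed = solar_constant * (1 - albedo) / 4  # averaged over the whole sphere
    return (absorbed / SIGMA) ** 0.25

t = effective_temperature(S, ALBEDO)
print(f"Effective temperature without greenhouse warming: {t:.0f} K ({t - 273.15:.0f} C)")
# ~255 K (-18 C), versus an observed mean surface temperature of ~288 K (15 C);
# the ~33 K difference is the greenhouse warming described above.
```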
The carbon cycle is the process by which carbon is exchanged among the atmosphere, oceans, soil, minerals, rocks, plants, and animals; it is crucial for regulating Earth's climate. Carbon enters the atmosphere as CO2 from respiration, combustion, and volcanic eruptions. Plants absorb CO2 during photosynthesis, converting it into organic matter, which is consumed by animals and released back into the atmosphere through respiration and decomposition. In the oceans, CO2 is dissolved and utilized by marine organisms to form calcium carbonate (CaCO3) shells. When these organisms die, their shells accumulate on the ocean floor, forming sedimentary rock.
Weathering of rocks on land also absorbs CO2, forming carbonates that are washed into the oceans, and this weathering process depends on temperature. If there is too much CO2 in the atmosphere and the greenhouse effect raises the temperature, weathering speeds up and absorbs more CO2. As CO2 is removed from the atmosphere, the temperature falls; as the temperature falls, weathering slows and less CO2 is removed, so CO2 accumulates again, strengthening the greenhouse effect and raising the temperature. This negative feedback is called the ‘carbon dioxide rock weathering cycle’. Over geological timescales, tectonic activity pushes these carbon-rich rocks into the Earth's mantle through subduction, and the carbon is then released back into the atmosphere via volcanic eruptions, completing the cycle. The temperature-dependent carbon dioxide rock weathering cycle thus automatically regulates Earth's temperature over geological timescales. The figure below shows how this cycle has worked over the past 800,000 years: when the amount of carbon dioxide increases, the Earth's temperature increases, and when carbon dioxide decreases, the temperature decreases.
Fig. 2.11. Correlation between CO2 and temperature
However, the carbon dioxide rock weathering cycle cannot operate without plate tectonics. In that case, the CO2 locked in sedimentary rocks would never be recycled into the atmosphere, and the greenhouse effect would weaken. Without a greenhouse effect, the Earth's temperature would fall rapidly and all surface water would freeze. Once the water froze, incoming solar energy would be reflected by the high albedo of the ice, and the Earth would enter an irreversible ice age.
Fig. 2.12. Carbon dioxide is recycled by plate tectonics
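The self-regulation described above can be illustrated with a toy numerical sketch. The rate constants below are invented for illustration, not geophysical values: CO2 enters the atmosphere at a steady volcanic rate and is removed by weathering, whose rate increases with temperature, while temperature rises with CO2. The feedback settles the system at a stable equilibrium; weakening the temperature dependence weakens the self-regulation, loosely mirroring a planet without plate tectonics to complete the cycle.

```python
# Toy carbon-cycle feedback (illustrative only; all constants are made up).
VOLCANIC_INPUT = 1.0    # CO2 added per step (arbitrary units)
WEATHERING_RATE = 0.02  # base removal coefficient (assumed)
T_PER_CO2 = 0.1         # temperature rise per unit of CO2 (assumed)
FEEDBACK = 0.05         # extra weathering per degree of warming (assumed)

co2 = 10.0
for step in range(300):
    temperature = 15.0 + T_PER_CO2 * co2                # warmer when CO2 is high
    removal = WEATHERING_RATE * co2 * (1 + FEEDBACK * (temperature - 15.0))
    co2 += VOLCANIC_INPUT - removal                      # sources minus sinks

print(f"CO2 settles near {co2:.1f} units")
# Doubling VOLCANIC_INPUT raises the equilibrium CO2 less than twofold when the
# feedback is on, but exactly twofold when FEEDBACK = 0: the temperature-dependent
# weathering term is what damps such excursions.
```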
Recent research on plate tectonics suggests that if the Earth were 20% larger or smaller than it is today, if the Earth's crust contained slightly more metals such as iron and nickel, or if the crust were thicker, plate tectonics would not function as it does now.
Overall, plate tectonics is a fundamental process that supports life by maintaining the Earth’s geological and environmental stability.
i. The Right Size of the Sun
The size and location of the habitable zone (HZ) vary depending on the size and type of the central star.
For small stars, such as red dwarfs, the HZ lies close to the star because the star emits less light and heat, and it is narrower than the HZ around the Sun. Due to this proximity, a planet in the habitable zone of a red dwarf could become tidally locked, just as our Moon is to Earth. If that happened, the planet's slow rotation would prevent it from generating a magnetic field and forming a magnetosphere. Without a magnetosphere, harmful radiation from the star could freely reach the planet's surface, damaging cells and DNA. Additionally, the day side would experience constant daylight and extreme heat, while the night side would remain in perpetual darkness and extreme cold.
For large stars, such as blue or red giants, the HZ is much farther from the star, but planets in these zones face significant challenges. Giant stars evolve rapidly due to their high mass, quickly burning through their hydrogen, expanding into red supergiants, and undergoing successive stages of fusion until an iron core forms. This core eventually collapses, producing a supernova explosion and leaving behind either a neutron star or a black hole. The typical lifespan of a giant star is only a few million years, meaning that before the star explodes as a supernova, any inhabitants of a planet in its HZ would need to find another suitable planet to migrate to for their survival. Additionally, giant stars emit high levels of ultraviolet and X-ray radiation, which can damage DNA and cells, making the surface environments of planets within the HZ less hospitable for life. Furthermore, giant stars can exhibit significant variability in their energy output, leading to unstable climates on orbiting planets. This instability can cause extreme temperature fluctuations, making it difficult for life to survive.
Fig. 2.13. Changes in habitable zones with star size
The habitable zones (HZ) around Sun-like stars offer many advantages. These stars have relatively stable energy output over long periods, providing consistent light and heat to planets in their habitable zones. This stability supports the development of stable climates and ecosystems. The habitable zone around Sun-like stars is at a moderate distance, neither too close nor too far from the star. The light spectrum from Sun-like stars is ideal for photosynthesis, allowing plants and other photosynthetic organisms to efficiently convert sunlight into energy, forming the base of a sustainable food chain. Additionally, Sun-like stars generally have lower levels of harmful stellar activity compared to smaller stars like red dwarfs. Fewer flares and less intense magnetic activity mean that planets in the habitable zone are less exposed to potentially damaging radiation and atmospheric stripping.
The fraction of Sun-like stars is only a few percent, as most stars are smaller and lighter than the Sun. The Sun is a single star, but about 50% to 60% of stars are binary or multiple star systems. The habitable zone in multiple star systems is much more restricted due to complex orbits, variable illumination, gravitational perturbations, and potential radiation levels.
Fig. 2.14. Mass distribution of stars
Fig. 2.15. Circumbinary orbit (top) and circumprimary or circumsecondary orbit (bottom) in binary systems
j. The Right Distance from the Center of the Galaxy
Just as our solar system has an HZ, there exists a Galactic Habitable Zone (GHZ) within a galaxy where conditions are most favorable for life. The conditions defining the GHZ include metallicity, stellar density, radiation levels, and orbital environments.
The GHZ must have an optimal concentration of heavy elements (elements heavier than helium), which are necessary for the formation of terrestrial planets and organic molecules. While heavy elements are more abundant in the Galactic center, that region cannot be part of the GHZ because its high stellar density produces frequent supernova explosions, gamma-ray bursts (GRBs), and other high-energy events.
A gamma-ray burst occurring within 10,000 light-years of Earth would likely have devastating effects on the planet's atmosphere, climate, and biosphere. Immediate effects would include the destruction of roughly 40% of the ozone layer, allowing the DNA-damaging UV radiation reaching the surface to increase roughly 16-fold; long-term effects could involve significant climate changes and mass extinctions. Such an event would pose a severe threat to human civilization and the natural world. Phytoplankton, the foundation of the marine food web, is particularly sensitive to UV radiation. Increased UV exposure can inhibit its growth and reproduction, leading to a decline in phytoplankton populations. Phytoplankton also plays a crucial role in the carbon cycle by absorbing CO2 during photosynthesis, so a decline would reduce this carbon sequestration, potentially exacerbating the accumulation of CO2 in the atmosphere and enhancing the greenhouse effect.
There is some evidence that past mass extinction events on Earth could have been triggered by nearby GRBs. For instance, the Ordovician-Silurian extinction event around 450 million years ago was hypothesized by some scientists to have been influenced by a GRB that occurred 6,000 light-years away from Earth.
Fig. 2.16. Phytoplankton
Another problem encountered in the Galactic center is frequent close encounters with other stars. These close encounters cause significant gravitational perturbations that can destabilize the orbits and rotation axes of planets within planetary systems. Such perturbations can lead to orbital crossings, collisions, or ejections from the system. The gravitational influence of nearby stars could also disturb the orbits of objects in the Oort Cloud and Kuiper Belt, sending a higher number of comets and asteroids into the inner solar system. This would increase the likelihood of impacts on planets, including Earth.
The outskirts of the Galaxy have a low stellar density and avoid these problems, but they have one crucial issue: a low supernova rate. This results in an interstellar medium that lacks sufficient heavy elements for the formation of terrestrial planets, making the outskirts of the Galaxy unfavorable for the GHZ.
The favorable region for the GHZ is where there are sufficient heavy elements for planet formation, few enough supernovae and other hazardous events to provide a safe environment for life, and sparse enough surroundings for stable planetary orbits. Additionally, there exists a distance at which the stars' orbital velocity matches the pattern speed of the Galaxy's spiral arms, known as the corotation radius. Near the corotation radius, stars and their planetary systems experience fewer disruptive gravitational interactions with the spiral arms, enhancing the likelihood of sustained habitable conditions.
Considering all these conditions, the GHZ lies between 23,000 and 29,000 light-years from the center of the Galaxy. Remarkably, our solar system lies 26,000 light-years from the center, right in the middle of the GHZ.
Fig. 2.17. The Galactic Habitable Zone in the Galaxy
In this chapter, we explored ten unique and extraordinary conditions that make Earth an exceptional planet. These conditions are so intricately balanced and precisely calibrated that the likelihood of them occurring by random chance is astronomically low. The exactness required for Earth's distance from the Sun, its axial tilt, rotational period, magnetic field, atmosphere, and other critical factors creates an environment that is uniquely capable of supporting life. Such a combination of favorable conditions occurring simultaneously elsewhere in the universe would be highly improbable, further highlighting Earth's distinctiveness. Additionally, the protection and stability Earth enjoys—shielding from harmful cosmic events and maintaining a delicate ecological balance—underscore its singularity among other planets. Together, these factors strongly support the notion that Earth was intentionally designed to serve as a habitat for life by the divine Creator. This fine-tuned balance of conditions is not merely a coincidence but instead suggests a purposeful and intelligent design, making Earth an extraordinary and uniquely suited environment for sustaining life.
3. Creation or Evolution?
Are we created or evolved? The debate over the origin of life is still ongoing, but the current education system teaches evolution as the established theory regarding the origin of life, while considering creationism as an unscientific claim.
The theory of evolution starts with the hypothesis of abiogenesis to explain the origin of life. We will first delve into this issue in detail and then explore whether Darwin's theory should be referred to as the “theory of evolution” or the “theory of genetic adaptation”. We will also address the question of whether humans evolved from apes. Additionally, we will introduce intelligent design and examine creationism through the lens of particle physics, the existence of extraterrestrial life, animal instincts, and the mathematics found in nature.
a. The Origin of Life
The scientific hypothesis for the origin of life on Earth begins with the spontaneous formation of amino acids from simple carbon-bearing molecules (abiogenesis) in the primordial soup of the early Earth. These amino acids link together through peptide bonds to form proteins, which perform a variety of essential functions within cells, such as catalyzing biochemical reactions and providing structural support. Over time, nucleic acids like RNA and DNA emerged, allowing for the storage and transmission of genetic information. The interaction between proteins and nucleic acids facilitated the development of simple prokaryotic cells, which eventually gave rise to more complex eukaryotic cells. These eukaryotic cells then evolved into multicellular organisms, with cell differentiation leading to the development of specialized tissues and organs. This journey culminates in the diverse and complex life forms we see today.
Let’s examine whether these processes could have occurred spontaneously. We will explore the following topics: i) formation of amino acids, ii) formation of RNA, iii) formation of proteins, iv) formation of DNA, v) formation of cells, vi) formation of eukaryotic cells, vii) organelle localization, viii) cell differentiation, ix) formation of tissues and organs, and x) formation of multicellular organisms.
i. The Formation of Amino Acids
The formation of amino acids under the conditions of the prebiotic early Earth is a crucial topic in understanding the origin of life. The Miller-Urey experiment, conducted in 1952, was a landmark study that simulated the presumed conditions of the early Earth's atmosphere to investigate the formation of amino acids. Using a mixture of gases thought to resemble the primitive atmosphere (methane, ammonia, hydrogen, and water vapor) and applying electrical sparks to mimic lightning, Stanley Miller and Harold Urey synthesized several amino acids, including glycine and alanine.
This experiment demonstrated that organic molecules essential for life could be formed from simple inorganic compounds under prebiotic conditions, providing significant support for the hypothesis that life on Earth could have originated through natural chemical processes. The Miller-Urey experiment did synthesize some amino acids, but it faces several issues that are important to consider.
Fig. 3.1. Diagram of Miller-Urey experiment
The Miller-Urey experiment used an electric discharge device to mimic natural lightning, but the device and natural lightning differ significantly. The device operated at about 50,000 volts and produced temperatures around 250 degrees, whereas lightning involves about 100 million volts and temperatures around 50,000 degrees. The electrical discharges in the experiment were relatively continuous and could be sustained for extended periods, ensuring a consistent energy input for the chemical reactions. In contrast, lightning occurs only sporadically, and each strike is extremely brief, lasting a few microseconds to milliseconds.
Comets are remnants of the early solar system and contain primordial building material that has remained relatively unchanged, so their composition can provide valuable insights into the composition of the early Earth's atmosphere. The main components of comets are water (86%), carbon dioxide (10%), and carbon monoxide (2.6%); ammonia and methane account for less than 1% each. This suggests that the gas mixture used in the Miller-Urey experiment does not accurately represent the early Earth's atmosphere, since it lacked carbon dioxide and carbon monoxide, the most abundant components after water. Furthermore, carbon dioxide is an oxidizing agent, which inhibits the formation of amino acids.
Composition | Abundance (water = 100) | Share of total (%) | Reference
---|---|---|---
Water (H2O) | 100 | 86 | Pinto et al. (2022)
Carbon dioxide (CO2) | 12 | 10 | Pinto et al. (2022)
Carbon monoxide (CO) | 3 | 2.6 | Pinto et al. (2022)
Ammonia (NH3) | 0.8 | 0.7 | Russo et al. (2016)
Methane (CH4) | 0.7 | 0.6 | Mumma et al. (1996)
Table 3.1. Composition of comets (abundances given relative to water = 100)
The Miller-Urey experiment assumed that the early Earth's prebiotic atmosphere was a reducing atmosphere. However, if it were an oxidizing atmosphere, it would hinder the formation of amino acids by breaking down or oxidizing organic molecules. The conditions of the early Earth's atmosphere are a subject of ongoing scientific inquiry and debate. Urey (1952), Miller (1953), and Chyba & Sagan (1997) argue for a reducing atmosphere, whereas Abelson (1966), Pinto et al. (1980), Zahnle (1986), and Trail et al. (2011) argue for an oxidizing atmosphere.
The Trail et al. (2011) paper, published in Nature, is particularly noteworthy. The authors analyzed the oxidation state of zircon crystals from the Hadean era using the ratio of cerium (Ce) oxidation states. The analysis indicated that Hadean magmas were more oxidized than previously thought, with conditions similar to those of modern volcanic gases. A more oxidized Hadean magma implies that volcanic outgassing would have released less hydrogen (H2) and more water vapor (H2O), carbon dioxide (CO2), and sulfur dioxide (SO2). They concluded that the early Earth's atmosphere was likely less reducing and more oxidizing than traditionally thought. These findings raise questions about the validity of the Miller-Urey experiment, suggesting that amino acids might not have formed via abiogenesis on the prebiotic early Earth.
The amino acids produced in the experiment were collected and preserved under laboratory conditions. In the harsh and variable conditions of early Earth, these compounds might have been less stable and more prone to degradation. The concentration of organic molecules in the experiment was controlled and maintained at relatively high levels. On the early Earth, these molecules might have been highly diluted in vast oceans or subjected to rapid dispersion, potentially reducing the chances of further chemical evolution.
Another key problem is chirality. The amino acids produced were racemic, meaning they contained equal amounts of left- and right-handed isomers. Life on Earth uses primarily left-handed amino acids (99.3%), and the origin of this homochirality remains unexplained by the Miller-Urey experiment.
ii. The Formation of RNA
All living organisms are composed of 20 different amino acids. To continue our discussion, let’s assume that these 20 amino acids were formed spontaneously. The next step toward life would be the formation of RNA, proteins, and DNA. So far, there are no confirmed theories regarding the spontaneous formation of these molecules. Scientists suggest RNA appeared first, as it is thought to be one of the earliest molecules capable of storing genetic information and catalyzing chemical reactions. This dual functionality is central to the ‘RNA world hypothesis,’ which proposes that life began with RNA molecules before the formation of DNA and proteins. While the RNA world hypothesis provides a compelling framework, it faces several significant challenges: (i) RNA is too complex a molecule to have arisen prebiotically, (ii) RNA is inherently unstable, (iii) catalysis is a property exhibited by only a relatively small subset of long RNA sequences, and (iv) the catalytic repertoire of RNA is too limited. Let us begin by examining the first challenge.
RNA nucleotides are composed of three components: nitrogenous bases (adenine, guanine, cytosine, and uracil), ribose sugar, and phosphate groups. For RNA to form, these components must have spontaneously arisen under prebiotic conditions. Let us examine the feasibility of this process.
- Formation of Nitrogenous Bases
Nitrogenous bases are complex molecules with intricate ring structures. The spontaneous assembly of these molecules from simpler prebiotic compounds is highly improbable, since it requires specific chemical reactions, specific reaction conditions, and catalysts to form the ring structures. These include amination reactions, in which an amine group (NH2) is added to a carbon backbone; such reactions require nitrogen compounds like ammonia together with aldehydes or ketones, and are often facilitated by catalysts or high temperatures. Deoxygenation reactions, which remove oxygen atoms, need reducing agents such as hydrogen or methane gases. Ring formation, crucial for creating the nitrogenous base structure, typically occurs in multi-step processes under high-temperature and high-pressure conditions, often catalyzed by metal ions. Finally, the addition of nitrogenous bases may require high-energy environments and specific precursor compounds to complete the process.
The early Earth's environment is thought to have varied greatly in terms of temperature, pH, and available chemical compounds. Creating the precise conditions necessary for the synthesis of nitrogenous bases would have been extremely challenging. For example, the high-energy conditions needed to form these bases might not have been consistently present or sustained. Even under optimized laboratory conditions, the yields of nitrogenous bases are often low. This raises questions about whether sufficient quantities of these bases could have been produced naturally to support the formation of RNA or other nucleic acids. The pathways leading to the synthesis of nitrogenous bases involve multiple steps and intermediate compounds. The likelihood of all necessary conditions and compounds being present simultaneously and in the correct proportions is questionable.
The formation of nitrogenous bases typically requires catalysts to drive the chemical reactions. In a prebiotic world, the presence of such catalysts in the right concentrations and conditions is uncertain. Without these catalysts, the reaction rates would be too slow to be significant. Even if nitrogenous bases could form spontaneously, their stability in a prebiotic environment is questionable. These molecules are prone to degradation under UV radiation, hydrolysis, and other environmental factors. This instability would hinder their accumulation and subsequent use in forming RNA.
- Formation of Ribose Sugar
The formose reaction, which involves the polymerization of formaldehyde in the presence of a catalyst, can produce ribose. This reaction lacks specificity, leading to a low yield of ribose relative to other sugars. It also requires specific conditions, such as the presence of calcium hydroxide as a catalyst, which may not have been universally available or stable in prebiotic environments. For ribose to be useful in the prebiotic synthesis of RNA, it would need to be selectively synthesized and stabilized. However, the formose reaction does not favor the selective formation of ribose, and the resultant mixture of sugars complicates the utilization of ribose for RNA synthesis. Mechanisms to stabilize ribose or select it from a complex mixture would have needed to be present. Potential stabilizing agents, such as borate minerals, have been proposed, but their availability and efficacy in prebiotic conditions are uncertain.
The formose reaction requires formaldehyde in sufficient concentration, but the production and stability of formaldehyde under prebiotic conditions are questionable, since formaldehyde readily polymerizes or reacts with other compounds. The specific environmental conditions necessary for the formose reaction to proceed efficiently and produce ribose (e.g., optimal pH, temperature, presence of catalysts) may not have been prevalent or stable on the early Earth. Even under controlled laboratory conditions, the yield of ribose is low, and the reaction produces a complex mixture of sugars, highlighting the challenge of isolating ribose in a prebiotic setting.
Ribose is a pentose sugar that is chemically unstable and prone to rapid degradation, particularly under the conditions thought to be prevalent on early Earth. The instability arises from the fact that ribose is easily hydrolyzed in aqueous solutions and can degrade through processes like the Maillard reaction and caramelization. In addition, studies have shown that ribose has a short half-life, especially in alkaline conditions, making it unlikely to accumulate in significant amounts over geological timescales.
- Formation of Phosphate Group
The formation of phosphate groups in prebiotic conditions faces challenges because readily available sources of phosphate were relatively scarce on early Earth. Phosphate is usually found in minerals like apatite, which are not highly soluble in water, making it difficult for phosphate to be freely available in aqueous environments where prebiotic chemistry is thought to have occurred. Phosphate minerals tend to be chemically inert under neutral pH conditions. This low reactivity poses a significant barrier to the incorporation of phosphate into organic molecules necessary for life.
The formation of phosphate esters, which are critical for nucleotide synthesis, requires significant energy input. In prebiotic conditions, the necessary energy sources and catalytic processes to overcome these barriers would have been limited. Some studies have shown that high-energy conditions, such as those created by lightning strikes or volcanic activity, can facilitate the formation of phosphate-containing molecules. However, these scenarios require specific and transient conditions that may not have been widespread.
The formation of polyphosphates, which are chains of phosphate groups, typically requires specific conditions, such as high temperatures or the presence of catalysts that might not have been readily available in prebiotic environments. Polyphosphates are prone to hydrolysis, breaking down into simpler phosphate compounds. The stability of these compounds in the fluctuating conditions of early Earth is questionable.
While some experiments have demonstrated the formation of phosphate-containing molecules under simulated prebiotic conditions, these often require highly specific and controlled conditions that may not realistically reflect the environments of early Earth. In addition, the yields of phosphate-containing molecules in prebiotic synthesis experiments are generally low, raising doubts about the efficiency and plausibility of these processes occurring on a prebiotic Earth at scales sufficient to drive the origin of life.
- Formation of Functional RNA Nucleotides
Even if all challenges were overcome and nitrogenous bases, ribose sugar, and phosphate groups were successfully created, another significant hurdle remains: the formation of functional RNA nucleotides.
There are many types of RNA: RNAs involved in protein synthesis (mRNA, rRNA, tRNA, etc.), RNAs involved in post-transcriptional modification (snRNA, snoRNA, etc.), regulatory RNAs (aRNA, miRNA, etc.), and parasitic RNAs. The number of nucleotides in an RNA molecule depends on its type. Some examples are:
- mRNA & rRNA - hundreds to thousands
- tRNA – 70 to 90
- snRNA – 100 to 300
- miRNA – 20 to 25.
Let’s assume that the typical RNA molecule, for which we want to estimate the probability of formation, is 100 nucleotides long. Each position in the RNA sequence can be occupied by one of four bases: adenine, uracil, cytosine, or guanine. The total number of possible sequences of length 100 is therefore 4¹⁰⁰ ≈ 1.6×10⁶⁰, and the probability of forming the functional RNA is 1/(1.6×10⁶⁰) ≈ 6.2×10⁻⁶¹. This extremely small probability suggests that functional RNA cannot form spontaneously, even in the presence of pre-existing nitrogenous bases, ribose sugar, and phosphate groups.
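The arithmetic behind this figure, and behind the analogous protein probabilities quoted in the sections that follow, is easy to verify. The sketch below simply computes (1/k)ⁿ for a k-letter alphabet and a chain of length n, under the book's working assumption that one specific target sequence must be hit; the example labels are illustrative:

```python
from fractions import Fraction

def specific_sequence_probability(alphabet_size, length):
    """Probability of assembling one particular sequence when each position
    is filled uniformly at random from the alphabet: (1/alphabet_size)**length."""
    return Fraction(1, alphabet_size) ** length

examples = [
    ("RNA of 100 nucleotides (4 bases)",         4, 100),   # ~6.2e-61
    ("Insulin chain, 51 amino acids (20 types)", 20, 51),   # ~4.4e-67
    ("RNase H, 155 amino acids (20 types)",      20, 155),  # ~2.2e-202
]
for label, k, n in examples:
    p = specific_sequence_probability(k, n)
    print(f"{label}: 1 in {k}^{n} = {float(p):.1e}")
```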
iii. The Formation of Proteins
The formation of proteins involves the synthesis of amino acids, their polymerization into peptides, and the folding of these peptides into functional proteins. Let’s examine problems and challenges in these processes under prebiotic conditions.
Proteins are composed of long chains of amino acids, called polypeptide chains, arranged in highly specific sequences. The number of amino acids in a single protein can range from several dozen to several thousand. For instance, the small protein insulin contains about 51 amino acids, the medium-sized protein myoglobin about 153, the large protein hemoglobin about 574, and the giant protein titin about 34,350. It is almost impossible to form such long peptide chains through a random process from a combination of 20 types of amino acids. For example, the probability of forming the polypeptide chain of the small protein insulin through a random process is 1/20⁵¹ ≈ 4.4×10⁻⁶⁷ ≈ 0.
Even if the polypeptide chains were somehow formed, they would still have to fold into specific three-dimensional structures to become functional proteins. The folding of a polypeptide chain into a functional protein involves several key steps, each driven by various chemical interactions and assisted by molecular machinery within the cell.
Sections of the polypeptide chain (primary structure) fold into secondary structures known as alpha helices and beta sheets. These structures are stabilized by hydrogen bonds between the backbone atoms of the polypeptide chain. Additional secondary structures, such as turns and loops, connect the helices and sheets, contributing to the overall fold of the protein. The secondary structures fold further into a specific three-dimensional shape, known as the tertiary structure. This process is driven by hydrophobic interactions, where nonpolar side chains cluster away from the aqueous environment, driving the polypeptide to fold into a compact, globular form; hydrogen bonds, which form between polar side chains and the backbone, stabilizing the folded structure; ionic bonds, with electrostatic interactions between oppositely charged side chains contributing to the protein’s stability; and disulfide bonds, where covalent bonds between cysteine residues provide additional stability to the structure.
For some proteins with multiple polypeptide chains (subunits), these folded units come together to form the quaternary structure. To prevent errors, chaperone proteins assist in the folding process by preventing misfolding and aggregation. They help the polypeptide chain achieve its correct conformation. The protein may undergo minor conformational changes and corrections to achieve its most stable and functional conformation. Chemical modifications, such as phosphorylation, glycosylation, or cleavage, can occur, further stabilizing the protein or preparing it for its specific function.
The formation of peptide bonds between amino acids requires significant energy. In prebiotic conditions, the availability of consistent and sufficient energy sources to drive these reactions is questionable. While various energy sources like lightning, UV radiation, and volcanic heat have been proposed, the efficiency and reliability of these sources in consistently facilitating peptide bond formation are debatable. Early Earth conditions were likely harsh and variable, with extreme temperatures, pH levels, and environmental changes. These conditions could have disrupted the delicate process of peptide bond formation and the stability of formed peptides.
Peptides and amino acids are subject to hydrolysis and degradation in aqueous environments. The stability of formed peptides over long periods is a concern, as they could degrade faster than they form. The lack of protective mechanisms in prebiotic conditions means that newly formed peptides could be rapidly broken down by environmental factors such as UV radiation and thermal fluctuations. While mineral surfaces like clays can catalyze peptide bond formation, the efficiency, specificity, and yield of these reactions under natural conditions are not well-demonstrated. It’s uncertain how effective these surfaces would be at producing a diverse range of peptides needed for life. The precise conditions under which these mineral-catalyzed reactions occur (e.g., temperature, pH) must be tightly controlled, and such conditions might not have been consistently present on early Earth. Some experiments demonstrating peptide formation were performed under highly controlled conditions, but these conditions may not accurately reflect the chaotic and variable conditions of early Earth.
Fig. 3.2. Protein synthesis
The RNA world hypothesis posits that RNA molecules catalyzed the formation of peptides. However, the simultaneous emergence of functional RNA and peptides poses a "chicken and egg" problem, with both being interdependent. Without RNA, proteins cannot be formed.
Proteins require amino acids with the same chirality (L-amino acids). Prebiotic synthesis typically produces racemic mixtures containing equal amounts of left- and right-handed isomers. The spontaneous formation of homochiral proteins from such mixtures is statistically improbable.
iv. The Formation of DNA
The formation of DNA in prebiotic conditions is a complex and speculative process that involves several key steps, including nucleotide synthesis, formation of polynucleotide chains, base pairing, double-helix formation, DNA condensation, and replication with enzymatic assistance.
Like RNA, the DNA nucleotides are composed of three parts: nitrogenous bases (adenine, guanine, cytosine, thymine), deoxyribose sugar, and phosphate groups. The difficulty level for the spontaneous formation of DNA will be comparable to that of RNA. One additional difficulty for DNA is the formation of DNA’s double-helix structure. The double-helix structure of DNA relies on precise base pairing between adenine and thymine, and between cytosine and guanine. Achieving this specificity spontaneously, without a guiding template or mechanism, is extremely improbable. For a stable double helix, nucleotides must be arranged in a specific order, with complementary sequences on opposite strands. The likelihood of spontaneously forming two complementary sequences that align perfectly is exceedingly low.
DNA replication requires complex enzymes and protein machinery to ensure accuracy and fidelity. The list of key enzymes involved in DNA replication includes helicase, single-strand binding (SSB) proteins, primase, DNA polymerase, ribonuclease H (RNase H), DNA ligase, and topoisomerase. The spontaneous formation of a double helix would not include these essential components, making replication and error correction highly improbable. Without mechanisms for error correction, any spontaneously formed DNA would likely accumulate errors rapidly, compromising its stability and functionality.
The total number of amino acids in the typical enzymes participating in DNA replication ranges from hundreds to a few thousand. The probability of producing any of these enzymes by chance is virtually zero. For example, the probability of producing RNase H by random chance is only 20⁻¹⁵⁵, or about 2.2×10⁻²⁰² ≈ 0. This incredibly small probability is essentially beyond the realm of practical occurrence and will never happen in nature.
Even if DNA were somehow formed, it would need to go through a very complex DNA condensation process. The DNA condensation process transforms a long, linear DNA molecule into a highly compact and organized structure capable of fitting within the cell nucleus. The condensation process is essential for efficient DNA storage, protection, and regulation, as well as for proper chromosome segregation during cell division. This process involves formation of nucleosomes, 30 nm fiber, looped domains, higher-order folding, and metaphase chromosomes.
A nucleosome forms when DNA winds around histone proteins. Each nucleosome consists of about 147 base pairs of DNA wrapped around an octamer of histones (two copies each of H2A, H2B, H3, and H4). The resulting structure looks like beads on a string, with nucleosomes (the beads) connected by linker DNA (the string).
The nucleosome chain further coils into a more compact 30 nm fiber, facilitated by the linker histone H1, which binds to the nucleosome and the linker DNA. The 30 nm fiber can adopt either a solenoid or zigzag configuration, depending on the nucleosome interactions.
The 30 nm fiber forms looped domains by attaching to a protein scaffold within the nucleus. Scaffold or matrix attachment regions (SARs/MARs) anchor these loops. The loops, typically 40-90 kilobase pairs (kb) in length, provide further compaction and play a role in gene regulation by bringing distant regulatory elements into proximity with genes.
The looped domains further fold into thicker fibers, known as chromonema fibers. These fibers undergo additional coiling and folding, resulting in a more condensed structure.
During cell division, particularly in metaphase, chromatin reaches its highest level of condensation to form visible chromosomes. This involves the action of condensin proteins that help supercoil and compact the chromatin. Each chromosome consists of two identical sister chromatids held together at the centromere, ensuring accurate segregation during cell division.
The degree of condensation influences gene expression, with tightly packed heterochromatin being transcriptionally inactive and loosely packed euchromatin being active. Proper condensation is crucial for the accurate segregation of chromosomes during mitosis and meiosis.
As seen above, the formation and replication of DNA are highly complex, requiring precise biochemical coordination and the involvement of various enzymes. However, evolutionary theory provides no clear explanation for how these mechanisms originated, simply stating that DNA evolved from RNA without addressing critical challenges. For this claim to be valid, it must explain how RNA was formed, how DNA’s double-helix structure emerged, and how essential replication enzymes originated. Without these answers, the idea remains speculative. Considering these factors, the formation of DNA is the result of intentional design rather than random chance.
Fig. 3.3. DNA replication process
v. The Formation of Cells
To continue our discussion, let’s assume that RNA, proteins, and DNA were spontaneously produced. Then the next step toward life is the formation of cells. There are two primary types of cells: prokaryotic and eukaryotic. Prokaryotic cells, found in organisms such as bacteria and archaea, are simpler and lack a defined nucleus. Their genetic material is contained in a single circular DNA molecule that floats freely in the cytoplasm, and they lack membrane-bound organelles. Eukaryotic cells, present in plants, animals, fungi, and protists, have a more complex structure. They contain a defined nucleus enclosed by a nuclear membrane and possess various membrane-bound organelles, such as mitochondria, the endoplasmic reticulum, and the Golgi apparatus, which perform specific functions essential for the cell's survival and proper functioning.
Scientists claim that protocells evolved into prokaryotic cells via a gradual process driven by natural selection, mutation, and environmental adaptation. However, the existence of protocells, the hypothetical precursors of modern cells, faces several significant criticisms. One major issue is the spontaneous formation of lipid bilayers, which are essential for creating a stable, enclosed environment; the conditions needed to form and maintain these bilayers consistently on the early Earth are highly speculative. Additionally, the integration of functional components, such as RNA or simple proteins, within these lipid structures requires highly specific interactions that are statistically improbable without some guiding mechanism. Furthermore, the ability of protocells to replicate and evolve, a key characteristic of living organisms, lacks sufficient experimental support, raising questions about their role in the origin of life. For these reasons, the first cells to appear on Earth would have been prokaryotic cells.
Fossil records suggest that prokaryotic cells appeared on Earth 3.5 to 3.8 billion years ago. All cells are enclosed by a cell membrane, and the first step in the formation of cells would be the formation of this membrane. Therefore, let’s investigate whether a cell membrane could form spontaneously under prebiotic conditions.
- Formation of Cell Membrane
The cell membrane is not a simple structure; it is a complex and dynamic assembly of lipids (phospholipids, cholesterol, and glycolipids), proteins, and carbohydrates. Phospholipids form the fundamental bilayer structure, cholesterol modulates fluidity, and glycolipids contribute to cell recognition. Proteins, both integral and peripheral, facilitate transport, signaling, and structural support, while carbohydrates play crucial roles in cell recognition and communication. This composition allows the cell membrane to perform its essential functions, maintaining homeostasis and facilitating interactions with the environment.
The formation of a cell membrane by random chance in prebiotic conditions faces several problems due to the complexity and specificity required for functional membrane structures.
The specific amphiphilic lipid molecules, such as phospholipids, require a precise combination of fatty acids, glycerol, and phosphate groups, which are unlikely to form and assemble spontaneously in the correct proportions under prebiotic conditions. The spontaneous formation of the phosphate group, as demonstrated in the previous section, is unlikely. While amphiphilic molecules can form bilayers spontaneously, achieving a stable, semi-permeable bilayer capable of encapsulating and protecting a cellular environment requires specific conditions. The random occurrence of these conditions, including the right concentration and types of lipids, is highly unlikely.
The typical size of a prokaryotic cell, such as a bacterium, is about 1 micrometer. Its surface area is therefore about 3×10⁻¹² m², while a single phospholipid molecule occupies about 5×10⁻¹⁹ m², so the bilayer contains roughly 1.2×10⁷ phospholipids. To form the bilayer, approximately ten million phospholipids must align side by side and close into an enclosed chamber. This is highly unlikely to occur by random chance, because the bilayers would not naturally align and enclose a chamber without some form of guidance or direction.
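The phospholipid count quoted above follows from simple geometry. A minimal Python sketch, treating the cell as a 1-micrometer sphere and taking roughly 5×10⁻¹⁹ m² as the footprint of one lipid headgroup (order-of-magnitude values):

```python
import math

CELL_DIAMETER = 1e-6    # m, typical bacterial cell
AREA_PER_LIPID = 5e-19  # m^2, approximate footprint of one phospholipid
LEAFLETS = 2            # a bilayer is two back-to-back layers of lipids

surface_area = math.pi * CELL_DIAMETER**2          # sphere area = pi * d^2
lipid_count = LEAFLETS * surface_area / AREA_PER_LIPID

print(f"surface area  ~ {surface_area:.1e} m^2")   # ~3.1e-12 m^2
print(f"lipids needed ~ {lipid_count:.1e}")        # ~1.3e7, about ten million
```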
Early Earth conditions were harsh and variable, with extreme temperatures, pH levels, and radiation. Maintaining the integrity and stability of a primitive membrane in such an environment would have been challenging, as membranes can easily be disrupted by these factors. A functional membrane must selectively allow essential nutrients and molecules to pass through while keeping harmful substances out. This selective permeability requires the presence of complex proteins and channels, which are unlikely to form and integrate into the membrane by random processes.
Even if primitive membranes did form, the random encapsulation of the necessary biomolecules, such as nucleotides, amino acids, and catalytic molecules, would be improbable. The specific concentrations and combinations required for initiating primitive metabolic processes are unlikely to occur by chance.
The formation of a functional membrane must be accompanied by the simultaneous development of other cellular machinery, such as transport proteins and metabolic enzymes, further complicating the scenario of membrane formation from random processes. Thus, the formation of prokaryotic cells under prebiotic Earth is not feasible.
vi. The Formation of Eukaryotic Cells
The widely accepted theory for the origin of eukaryotic cells is the endosymbiotic theory, which suggests that eukaryotic cells originated through a symbiotic relationship between primitive prokaryotic cells. This process involved the engulfment of certain prokaryotic cells (the ancestors of mitochondria in animal cells and of chloroplasts in plant cells) by an ancestral host cell, leading to a mutually beneficial relationship and eventually to complex eukaryotic cells. The ancestral host cell is claimed to be an archaeon, but this hypothesis faces two problems: endocytosis, the process of engulfing prokaryotic cells, has never been observed in archaea, and the cell membranes of archaea are built from ether-linked lipids, whereas the membranes of eukaryotic cells are built from ester-linked lipids.
This theory requires pre-existing prokaryotic cells and mitochondria or chloroplasts. However, the origin of mitochondria and chloroplasts is not well documented. Mitochondria are complex organelles with a unique structure that reflects their role as the powerhouses of the cell, generating ATP through oxidative phosphorylation. Mitochondria are composed of several distinct components: the outer membrane, intermembrane space, inner membrane, and matrix, which includes enzymes, DNA, ribosomes, and metabolites. The outer membrane, like a cell membrane, contains a phospholipid bilayer with a mix of phospholipids and proteins. It is improbable that such a complex structure could arise spontaneously through random processes, as cell membranes, DNA, and proteins cannot form spontaneously. Mitochondria have their own DNA, distinct from nuclear DNA, yet they must coordinate with the nuclear genome for proper functioning. The integration of mitochondrial DNA into a host cell's regulatory and metabolic networks presents significant challenges.
The nucleus in eukaryotic cells is composed of a double-layered nuclear membrane, nucleoli, and chromosomes, which contain the cell's genetic material, including DNA, RNA, and associated proteins. The origin of the nucleus in eukaryotic cells is even more challenging to explain. Let's start by discussing the simplest aspect: the nuclear membrane. The origin of the nuclear membrane in eukaryotic cells is a subject of significant scientific debate. Several hypotheses, including the membrane invagination (inward folding) hypothesis, viral origin hypothesis, and gene transfer hypothesis, have been proposed to explain how this complex structure may have arisen.
The membrane invagination hypothesis suggests that the nuclear membrane originated from the invagination of the cell membrane of an ancestral prokaryotic cell. However, this hypothesis fails to explain the difference between the cell membrane and nuclear membrane. The cell membrane is composed of a single phospholipid bilayer, whereas the nuclear membrane consists of two phospholipid bilayers—an inner membrane and an outer membrane. In addition, the nuclear membrane contains nuclear pore complexes that cannot be found in the cell membrane. Furthermore, the protein compositions in the cell membrane and nuclear membrane are different.
The viral origin hypothesis posits that viruses that infected primitive cells could have contributed to genetic material or structural components that eventually led to the development of a nuclear envelope. The interaction between viral and host cell membranes might have created a protective structure around the DNA. Although viruses are known to influence host cell structures, concrete evidence linking viruses to the origin of the nuclear membrane is limited.
The gene transfer hypothesis suggests that the mixing and transfer of genes between different prokaryotes could have created a large and complex genome that required a protective compartment. The nuclear membrane would have evolved to protect and regulate this complex genetic material. This hypothesis faces many problems due to the lack of direct evidence, its inability to explain how such a complex and organized structure of a double membrane and nuclear pore complexes could arise solely from the transfer and integration of genes, and its failure to provide a clear pathway for how transferred genes would be integrated and expressed in a way that results in the nuclear membrane's development.
The structure of nucleoli and chromosomes is far more complex than that of the nuclear membrane, making it difficult to imagine that they could originate from random events. Furthermore, it is challenging to understand how these components became enclosed within the membrane. Nucleoli and chromosomes contain the genetic information of living organisms, including the blueprints for forming RNA, proteins, DNA, cellular organelles, and the tissues and organs of living beings. The fact that these blueprints for constructing life are prepared and already present within the nucleus at the eukaryotic cell stage, long before tissues, organs, and complete organisms exist, cannot be adequately explained by evolutionary theory. Instead, it serves as clear evidence of the intelligent design of life.
In summary, intelligent design can naturally explain the origin of eukaryotic cells, whereas evolutionary theory lacks a clear explanation for their origin.
vii. Organelle Localization
Cells are composed of various organelles, including the nucleus, mitochondria, endoplasmic reticulum, Golgi apparatus, and lysosomes, all working together to maintain cellular function and homeostasis. Organelle localization is a highly regulated and dynamic process that ensures organelles are positioned optimally within the cell to maintain efficient cellular function. Proper localization is essential for cellular health and plays a critical role in adapting to changing cellular and environmental conditions. One might wonder how these organelles find their optimal locations, given that they cannot think for themselves.
Fig. 3.4. Structure of Animal Cell and Plant Cell
A detailed examination of the organelle localization process reveals a highly precise and intricate mechanism that cannot be attributed to random chance. This process involves a complex interplay of the cytoskeleton, motor proteins, membrane trafficking, anchor proteins, scaffolds, dynamic adjustments, and inter-organelle communication.
The cytoskeleton plays a crucial role in organelle localization. It provides structural support, facilitates movement, and ensures the proper positioning of organelles. The cytoskeleton is composed of three main types of filaments: microtubules, actin filaments, and intermediate filaments, each contributing uniquely to organelle localization.
Fig. 3.5. Schematic diagram of the microtubule and motor proteins
Microtubules are long, hollow tubes made of tubulin proteins. They form a network extending from the microtubule-organizing center (centrosome) to the cell periphery. Microtubules serve as tracks for motor proteins such as kinesin and dynein. Kinesin moves organelles toward the plus end of microtubules, typically toward the cell periphery, while dynein moves them toward the minus end, usually toward the cell center. Microtubules help position organelles such as the Golgi apparatus, which is typically located near the centrosome, and mitochondria, which are distributed throughout the cell but can be transported along microtubules to areas with high energy demand.
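To picture how these opposing motors sort cargo, here is a minimal sketch in Python that treats a microtubule as a one-dimensional track; the track length, step counts, and cargo assignments are illustrative values chosen for this example, not measured ones.

# Illustrative 1D model of motor-protein transport along a microtubule.
# Position 0 is the minus end (near the centrosome); TRACK_LENGTH is the
# plus end (cell periphery). All numbers are illustrative only.

TRACK_LENGTH = 100

def transport(motor, position, steps):
    # Kinesin walks toward the plus end; dynein walks toward the minus end.
    direction = +1 if motor == 'kinesin' else -1
    for _ in range(steps):
        position = max(0, min(TRACK_LENGTH, position + direction))
    return position

# A mitochondrion carried by kinesin toward the energy-demanding periphery,
# and a vesicle carried by dynein back toward the cell center.
print(transport('kinesin', position=50, steps=30))   # 80 (toward periphery)
print(transport('dynein', position=50, steps=30))    # 20 (toward center)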
Actin filaments, also known as microfilaments, are thin, flexible fibers made of actin protein. They are concentrated just beneath the plasma membrane and form a dense network throughout the cytoplasm. Actin filaments facilitate cytoplasmic streaming, a process that helps distribute organelles and nutrients throughout the cell. Myosin motor proteins interact with actin filaments to transport vesicles, endosomes, and other small organelles along the actin network. Actin filaments help maintain cell shape and are involved in cell movement, which indirectly affects the positioning of organelles.
Intermediate filaments are rope-like fibers made of various proteins (such as keratins, vimentin, and lamins) depending on the cell type. They provide mechanical strength and structural support. Intermediate filaments help stabilize the position of organelles such as the nucleus by anchoring them in place within the cytoplasm. They maintain the overall integrity of the cytoskeleton, ensuring that other components like microtubules and actin filaments can function effectively in organelle localization.
The different types of cytoskeletal filaments often work together to position organelles accurately. For example, microtubules and actin filaments coordinate to ensure proper distribution and movement of vesicles and organelles. The cytoskeleton is highly dynamic, continuously remodeling to adapt to the cell’s needs. This flexibility allows for the rapid repositioning of organelles in response to cellular signals or changes in the environment.
Membrane trafficking is the process by which proteins, lipids, and other molecules are transported within cells, ensuring that cellular components reach their correct destinations. This involves the budding of vesicles from donor membranes, their transport through the cytoplasm, and their fusion with target membranes. Key organelles involved in membrane trafficking include the endoplasmic reticulum, Golgi apparatus, and various types of vesicles like endosomes and lysosomes. The process is essential for maintaining cellular organization, facilitating communication between organelles, and enabling the cell to respond to internal and external signals efficiently.
Signaling pathways guide the movement and positioning of organelles within the cell. These pathways involve the transmission of chemical signals that provide spatial cues, ensuring that organelles are directed to their appropriate locations. Receptors on organelle surfaces and within the cytoplasm interact with signaling molecules to facilitate this process. For example, small GTPases such as the Rab proteins are key regulators that control vesicle trafficking and organelle positioning by interacting with specific effector proteins. These signaling pathways ensure that cellular processes are coordinated and that organelles are dynamically positioned in response to changing cellular needs and environmental conditions.
Anchor proteins and scaffolds play a vital role in organelle localization by ensuring that organelles are precisely positioned within the cell. Anchor proteins connect organelles to specific sites within the cytoplasm, stabilizing them and preventing their displacement. For instance, mitochondria can be tethered to the endoplasmic reticulum through specific anchoring mechanisms, facilitating efficient energy transfer and metabolic coordination. Scaffold proteins provide structural support by forming complexes that hold organelles in place, maintaining the overall organization of the cell. These proteins create a dynamic framework that allows for the proper arrangement of organelles, ensuring that cellular functions are carried out effectively and efficiently.
Dynamic adjustments in organelle localization refer to the continuous and responsive changes in the positioning of organelles within a cell. These adjustments are crucial for maintaining cellular function and adaptability. During different phases of the cell cycle, such as mitosis, organelles like the nucleus and mitochondria reposition to ensure proper cell division. Additionally, in response to environmental stimuli, such as nutrient availability or stress conditions, organelles can relocate to areas where their functions are most needed. This dynamic relocation is facilitated by the cytoskeleton and motor proteins, allowing the cell to maintain homeostasis and efficiently respond to changing internal and external conditions.
Inter-organelle communication ensures coordination and efficiency of cellular functions. This communication occurs through direct contact sites and vesicular transport. Contact sites, such as mitochondria-associated membranes (MAMs) between mitochondria and the endoplasmic reticulum, facilitate the transfer of lipids, calcium, and other molecules, ensuring synchronized activities between organelles. Vesicular transport involves the budding off and fusion of vesicles, which carry proteins and lipids between organelles, maintaining their functional integration. Effective inter-organelle communication is essential for processes such as metabolism, signaling, and stress responses, contributing to the overall homeostasis of the cell.
As described above, the mechanisms involved in organelle localization are highly organized and complex. The step-by-step evolution of such intricately coordinated systems through random mutations and natural selection is extremely unlikely for the following reasons.
There is no direct evidence of intermediate stages in the evolution of organelle localization mechanisms. Fossil records and molecular studies do not capture the transitional forms that would illustrate the gradual evolution of these sophisticated systems. The complexity of organelle localization and its coordination within cells poses a challenge to evolutionary explanations since cellular organization exhibits "irreducible complexity," where the removal of any part would render the system non-functional. Evolutionary theory explains complexity through gradual modifications, but cellular structures and their precise localization do not have viable intermediate stages.
The localization of organelles depends on intricate interactions with the cytoskeleton, motor proteins, signaling pathways, and other cellular components. This interdependence raises questions about how such systems could have co-evolved in a stepwise manner. It is challenging to explain how both the organelles and the systems responsible for their localization could have evolved concurrently without one being fully functional first.
The origin and evolution of motor proteins like kinesin, dynein, and myosin, as well as cytoskeletal elements like microtubules and actin filaments, are not fully understood. These proteins and structures must have evolved highly specific functions and interactions, which are difficult to explain through incremental changes alone. The evolution of the complex regulatory networks that control organelle localization poses significant challenges. These networks must precisely coordinate the expression and activity of numerous genes, and their incremental evolution through random mutations is difficult to explain.
Many components involved in organelle localization are interdependent, meaning that they must function together effectively to provide any selective advantage. The simultaneous evolution of multiple interacting parts is problematic because partial systems will not confer a sufficient benefit to be favored by natural selection.
The processes of organelle localization and maintenance are energy intensive. It is not clear how early cells could afford the metabolic costs associated with these complex systems without already having efficient energy production and resource management mechanisms in place.
viii. Cell Differentiation
Cell differentiation is the process by which unspecialized cells develop into specialized cells with distinct structures and functions. This process is crucial for the development, growth, and functioning of tissues, organs, and ultimately, multicellular organisms. Differentiation typically begins with stem cells, which are undifferentiated cells capable of giving rise to various cell types. Stem cells can be pluripotent, able to differentiate into almost any cell type. During development, these cells receive signals that guide them to become specific cell types. As stem cells differentiate, they become multipotent progenitor cells, which are committed to giving rise to a limited range of cell types. Progenitor cells further differentiate into fully specialized cells. Cell differentiation is a highly regulated and dynamic process driven by gene expression regulation, signal transduction pathways, epigenetic modifications, morphogen gradients, and interactions with other cells and the extracellular matrix.
All cells in an organism contain the same DNA, but different cell types express different subsets of genes. This selective gene expression drives differentiation. Proteins known as transcription factors bind to specific DNA sequences to regulate the transcription of target genes. These factors can activate or repress gene expression, leading to the production of proteins necessary for a specific cell type.
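As a hedged sketch of this idea in code: the shared genome below is fixed, while hypothetical transcription factors switch different subsets of genes on in different cell types. All gene and factor names are placeholders invented for this illustration, not real regulatory data.

# Every cell carries the same genome; transcription factors select which
# genes are expressed. Names below are hypothetical placeholders.

GENOME = {'hemoglobin_gene', 'insulin_gene', 'myosin_gene', 'keratin_gene'}

TF_TARGETS = {                       # transcription factor -> activated genes
    'erythroid_factor': {'hemoglobin_gene'},
    'pancreatic_factor': {'insulin_gene'},
    'muscle_factor': {'myosin_gene'},
}

def expressed_genes(active_factors):
    # The expressed subset is the union of each active factor's targets.
    on = set()
    for tf in active_factors:
        on |= TF_TARGETS.get(tf, set())
    return on & GENOME

print(expressed_genes({'muscle_factor'}))      # {'myosin_gene'}
print(expressed_genes({'erythroid_factor'}))   # {'hemoglobin_gene'}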
Cells receive signals from their environment, such as growth factors, hormones, and cytokines. These signals bind to cell surface receptors, initiating signal transduction pathways. Signal transduction pathways involve a cascade of intracellular events, often including phosphorylation of proteins, which ultimately result in changes in gene expression.
Epigenetic modifications involve DNA methylation and histone modification. DNA methylation silences gene expression by adding methyl groups to DNA, usually at CpG islands. Methylation patterns are heritable and can lock in a cell's identity by repressing genes that are not needed for a particular cell type. Histones, the proteins around which DNA is wound, can be chemically modified (e.g., acetylation, methylation). These modifications alter chromatin structure, making DNA more or less accessible for transcription.
Morphogens are signaling molecules that diffuse through tissues and form concentration gradients. Cells respond to different morphogen concentrations by activating different developmental pathways, leading to diverse cell fates. Morphogen gradients are crucial in embryonic development for pattern formation, determining the spatial arrangement of differentiated cells.
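The classic way to picture this is a threshold model: cells read the local concentration and choose a fate accordingly. The sketch below assumes an exponential gradient with an arbitrary decay length and arbitrary thresholds; it is a caricature of the mechanism, not a model of any particular morphogen.

import math

# Threshold reading of a morphogen gradient (illustrative values only).

def morphogen_concentration(x, decay_length=20.0):
    # Concentration decays exponentially with distance x from the source.
    return math.exp(-x / decay_length)

def cell_fate(concentration):
    # Different concentration bands trigger different developmental pathways.
    if concentration > 0.5:
        return 'fate A'   # nearest the source
    elif concentration > 0.2:
        return 'fate B'
    return 'fate C'       # farthest from the source

for x in (0, 10, 20, 40, 60):
    c = morphogen_concentration(x)
    print(f'position {x:2d}: concentration {c:.2f} -> {cell_fate(c)}')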
Direct contact between cells can induce differentiation. Membrane-bound proteins on one cell interact with receptor proteins on an adjacent cell to transmit signals. Cells secrete signaling molecules that affect nearby cells, influencing their differentiation.
The extracellular matrix (ECM), composed of proteins and polysaccharides, provides structural support and biochemical signals to cells. Integrins and other adhesion molecules mediate the attachment of cells to the ECM, influencing cell shape, migration, and differentiation.
Positive and negative feedback mechanisms control the progress of differentiation. Through positive feedback, differentiated cells produce signals that reinforce their own identity, ensuring stable cell types. Negative feedback mechanisms limit differentiation signals, preventing over-differentiation and maintaining a pool of undifferentiated cells.
As described, cell differentiation involves a highly complex and coordinated series of events, including precise gene regulation, signal transduction, and epigenetic modifications. Such complexity is difficult to explain through gradual, random mutations and natural selection alone. The process requires the integration of numerous cellular systems, such as transcription factors, signaling pathways, and the cytoskeleton. The simultaneous evolution of these interdependent systems poses a significant challenge to evolutionary theory. In addition, the origin of pluripotent stem cells cannot be explained by evolutionary mechanisms.
The role of epigenetic modifications, such as DNA methylation and histone modification, is crucial in differentiation. The origin of these sophisticated mechanisms is not well-explained by evolutionary theory, as they require a high level of precision and coordination. The heritability of epigenetic marks adds another layer of complexity. The mechanisms by which these marks are established, maintained, and inherited are intricate and require detailed explanation.
The establishment and interpretation of morphogen gradients are critical for pattern formation during development. The precise concentration gradients and the cell's ability to accurately interpret these signals suggest intelligent design rather than random mutations. The concept of positional information, where cells determine their location and differentiate accordingly, requires a sophisticated communication system. The evolutionary origin of such a system is not clearly understood.
The regulatory networks of transcription factors controlling gene expression during differentiation are highly complex. The incremental evolution of these networks lacks empirical support, given the need for coordinated changes in multiple genes. Mutations in key transcription factors can have widespread and deleterious effects, making it difficult to envision how beneficial mutations could accumulate gradually to form functional regulatory networks.
ix. The Formation of Tissues and Organs
The formation of tissues (histogenesis) is the process by which differentiated cells organize into specific tissues during embryonic development.
This process involves the specialization of stem cells into various cell types, such as muscle cells, nerve cells, and epithelial cells, each with distinct functions. Once cells differentiate, they begin to arrange themselves into complex structures that form the basic tissues of the body. These tissues include epithelial, connective, muscle, and nervous tissues, each contributing to the overall structure and function of organs.
Cellular communication and signaling pathways play a crucial role in guiding cells to their correct locations and ensuring they interact appropriately. Histogenesis is tightly regulated, as errors in cell organization can lead to developmental abnormalities or diseases. Throughout this process, cells adhere to one another, migrate to specific regions, and undergo morphological changes to form functional tissue structures. The completion of histogenesis results in the formation of fully developed tissues that are capable of performing specialized functions. This process is fundamental to the proper development of organs and the overall organization of the body.
The formation of organs (organogenesis) follows histogenesis, where tissues are organized into functional units. During organogenesis, the three germ layers—ectoderm, mesoderm, and endoderm—interact and differentiate further to form specific organs. The ectoderm primarily forms organs like the brain and spinal cord, while the mesoderm gives rise to the heart, kidneys, and skeletal muscles. The endoderm forms internal structures like the lungs and liver.
Organogenesis involves complex signaling pathways and genetic regulation to ensure organs develop in the correct location and with proper function. During organogenesis, cells migrate, proliferate, and undergo apoptosis as necessary to shape the developing organs. The Notch signaling pathway is particularly important in determining cell fate and maintaining the balance between cell proliferation and differentiation. Wnt signaling contributes to the patterning and morphogenesis of organs, ensuring that tissues develop in the correct locations and proportions. Disruptions in these signaling pathways can lead to congenital defects or abnormal organ development. This process is crucial for establishing the body's overall anatomy and physiology.
As organs develop, multiple tissue types integrate and function together. For instance, an organ like the heart consists of muscle tissue, connective tissue, and nerve tissue, all of which are essential for its function. The development of these organs is guided by complex signaling pathways that ensure cells migrate to the correct locations, differentiate appropriately, and form the correct structures.
Evolutionary theories explaining the formation of tissues and organs face significant challenges. The complexity of tissues and organs is too great to be explained by gradual, step-by-step evolutionary processes. Many tissues and organs exhibit "irreducible complexity," meaning they consist of multiple interdependent parts that could not function if any part were missing. Such complex structures could not have evolved incrementally, as they would be non-functional at intermediate stages.
Evolutionary theory posits that new structures, such as tissues and organs, arise through gradual modification of existing structures. However, this does not adequately explain the origin of entirely new structures that have no apparent precursors. For instance, the development of complex organs like the brain or the immune system is seen as difficult to explain through small, incremental changes.
The genetic information required to build and organize tissues and organs is vast and highly specific, and it is unlikely for such detailed information to arise through random mutations.
Epigenetic factors, which influence gene expression without changing the DNA sequence, play a significant role in the development of tissues and organs. Evolutionary theory, which primarily emphasizes genetic mutations, does not fully account for the added complexity introduced by epigenetic regulation. It also falls short in explaining how complex biological systems (comprising multiple interacting tissues and organs) could evolve independently and later integrate to function cohesively as a unified organism.
x. The Formation of Multicellular Organisms
Once individual organs are formed, they must be integrated into a cohesive, functioning organism. This integration is achieved through the spatial organization of organs within the body, where each organ occupies a specific location that allows it to interact with other organs and systems. For example, the circulatory system, which includes the heart and blood vessels, must be properly connected to other systems such as the respiratory and digestive systems to support life.
Throughout this process, the cells within tissues and organs continue to specialize and adapt to their roles, a process known as functional differentiation. This ensures that each part of the organism performs its designated functions effectively. The coordination and interaction between different organs and systems are essential for maintaining the overall health and function of the multicellular organism, allowing it to survive, grow, and reproduce. The evolutionary explanation of the formation of multicellular organisms from organs involves addressing several key challenges and complexities:
The formation of multicellular organisms from organs requires an incredibly high level of integration and coordination among various systems. The evolutionary processes that could lead to the simultaneous development and seamless functioning of multiple organ systems are difficult to explain.
Organs and systems within multicellular organisms are highly interdependent, meaning the functionality of one system often depends on the proper functioning of others. Evolutionary explanations must account for the simultaneous development of different organs and systems, each with specific functions and interdependencies, and explain how these complex systems evolved in a coordinated, step-by-step manner. Intermediate forms with partially developed systems would not provide sufficient advantages to be favored by natural selection.
There is a scarcity of clear transitional forms in the fossil record that illustrate the gradual evolution of simple multicellular organisms into complex organisms with fully formed organs. This gap makes it difficult to trace the evolutionary pathways that led to the development of such complex structures.
The precise coordination of gene expression and developmental pathways necessary for organ formation and integration presents significant challenges. Small errors in these processes can lead to developmental disorders, raising questions about how such delicate systems could evolve incrementally.
The development of complex multicellular organisms requires robust mechanisms to handle errors and variations. The evolutionary explanation must account for how these error-handling systems evolved and how they ensure stability and fidelity of organ formation and function.
b. Can Evolution Explain the Origin of Life?
In the previous section, we discussed the origin of life, tracing its progression from the formation of amino acids, RNA, proteins, DNA, prokaryotic cells, eukaryotic cells, tissues, and organs, ultimately leading to multicellular organisms. These processes have undeniably progressed in a manner that is directed and guided toward a singular purpose—the formation of living organisms.
This raises an important question: Can evolution, which operates through undirected and random processes, adequately explain these complex developments and the origin of life? Evolutionary scientists have proposed various theories to address this question. The primary theories of evolution include natural selection, mutation, genetic drift, and horizontal gene transfer. Let’s take a brief look at each of these theories.
Natural selection is the process by which individuals with advantageous traits survive and reproduce more successfully, leading to those traits becoming more common in a population over generations. Natural selection operates on existing variations in living organisms. Thus, the origin of life and the formation of its fundamental building blocks (amino acids, RNA, proteins, DNA) and structures (cells, tissues, organs, and multicellular organisms) require explanations beyond natural selection, as these processes lack the necessary preconditions (replication and functionality) for selection to act.
Mutations are random changes in an organism's DNA that can introduce genetic variation, sometimes leading to new traits or adaptations. Mutation-based explanations face challenges because most mutations are harmful or neutral rather than beneficial, making it unlikely for advantageous mutations to occur frequently enough to drive significant evolutionary change. For example, a study on the distribution of fitness effects (DFE) of random mutations in vesicular stomatitis virus illustrates this issue: 39.6% of mutations were lethal, 31.2% were non-lethal deleterious, and 27.1% were neutral.
Fig. 3.6. Distribution of fitness effect
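Taking these reported rates at face value, a few lines of arithmetic show how quickly the odds shrink that a run of random mutations does no harm. Treating the unclassified remainder (about 2.1%) as potentially beneficial is an assumption made here purely for illustration.

# Reported DFE rates for vesicular stomatitis virus (see text above).
p_lethal, p_deleterious, p_neutral = 0.396, 0.312, 0.271
p_remainder = 1.0 - (p_lethal + p_deleterious + p_neutral)  # ~0.021 (assumed beneficial)

p_not_harmful = p_neutral + p_remainder   # chance a single mutation does no harm

for k in (1, 10, 50, 100):
    print(f'{k:3d} mutations, all non-harmful: {p_not_harmful ** k:.2e}')
# Under these rates, 100 independent mutations are all non-harmful with
# probability of roughly 3e-54.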
If nucleotides are inserted or deleted (causing frameshift mutations), or if stop codons are created or removed by mutations, non-functional proteins are produced. This is a primary reason why, considering the large number of amino acids in the proteins of living organisms (for example, from 20 to 33,000 in human proteins), macroevolution through such random mutations is virtually impossible (cf. Section 'd' of this chapter for more details). Additionally, random mutations cannot account for the initial emergence of life from non-living matter.
Genetic drift relies on random changes in allele frequencies, which may not sufficiently explain the adaptive complexity observed in organisms. Genetic drift is more pronounced in small populations, making its impact less relevant in larger populations where most evolution occurs. Additionally, it lacks the directional force needed to account for the development of highly organized structures and systems. Furthermore, genetic drift cannot produce new information or functions, thus failing to explain the emergence of novel traits or the origin of complex biological features.
Horizontal gene transfer (HGT) is the transfer of genetic material between unrelated organisms, not through inheritance, contributing to genetic variation. HGT faces issues when explaining complex traits in multicellular organisms because HGT's role is limited primarily to prokaryotes, with less impact on higher organisms. The integration of foreign genes into a host's genome often requires precise regulatory mechanisms, which are unlikely to evolve simultaneously. Additionally, HGT can introduce genetic instability, potentially leading to harmful mutations. The random nature of gene acquisition through HGT also raises questions about its ability to produce coordinated and functional adaptations. HGT does not explain the origin of new genes but rather the transfer of existing ones, failing to address the emergence of novel traits.
The following table summarizes the applicability of evolutionary theories to biogenesis and genetic processes.
Theories of evolution    | Can explain biogenesis? | Can explain formation of RNA, proteins, DNA? | Genetic adaptation, not evolution?*
Natural selection        | No                      | No                                           | Yes
Mutation                 | No                      | No                                           | Yes
Genetic drift            | No                      | No                                           | Yes
Horizontal gene transfer | No                      | No                                           | N/A
Table 3.2. Theories of evolution: applicability to biogenesis and genetics (*: see next section for genetic adaptation)
As shown in the table, major evolutionary theories fail to explain the origin of life on Earth and the mechanisms behind the formation of fundamental biological components such as RNA, proteins, and DNA. This suggests that the evolutionary models applied to cells, tissues, organs, and existing life forms do not constitute true explanations for the origin or evolution of life itself. Rather than addressing the emergence of life from non-living matter, these theories merely describe how life develops once the essential building blocks—RNA, proteins, and DNA—are already in place, much like detailing the assembly process of a car or the construction of a building without explaining how the raw materials and parts came to exist.
Evolutionary theories applied to living organisms primarily describe the genetic and biochemical processes that enable them to adapt to changing environments. However, these adaptations and behaviors are not newly created by evolution but are already encoded within their genetic information. Given this limitation, evolutionary theories would more accurately be termed a "theory of genetic adaptation" (see next section), as they primarily address the ways in which organisms adjust to environmental pressures through pre-existing genetic mechanisms.
Despite these critical limitations, the theory of evolution has been excessively promoted, creating widespread misconceptions. Many people now mistakenly believe that it can explain the transition from non-living matter to living organisms and the development of complex life forms.
To build a building, we need blueprints, construction materials, and a solid foundation to start with. Evolutionary theories are akin to trying to construct a building without blueprints (directionality), construction materials (RNA, proteins, DNA), or a foundation (the initial origin of life). Without these, a building cannot be constructed.
Just as we recognize that the blueprints of a building were designed by an architect, we should also acknowledge that all living organisms were designed and created by God, the divine Creator.
c. Darwin’s Theory: Theory of Evolution or Theory of Genetic Adaptation?
Evolution is broadly categorized into two types: microevolution and macroevolution. Microevolution refers to small-scale changes within a species over time. These changes are observable within short time spans and often involve adaptation to the environment. Macroevolution, on the other hand, involves large-scale changes that occur over long geological periods, leading to the formation of new species and broader taxonomic groups.
Evolutionary biologists propose that the primary mechanism for macroevolution is the accumulation of numerous microevolutionary changes over time. It is widely agreed that there is evidence of microevolution, but convincing evidence of macroevolution is lacking. If Darwinism is to be called the theory of evolution, it must be supported by evidence of macroevolution. The most convincing evidence of macroevolution would be the existence of transitional species. In Chapter 6 ("Difficulties on Theory") of On the Origin of Species, Darwin himself asks: "why, if species have descended from other species by insensibly fine gradations, do we not everywhere see innumerable transitional forms?" This lack of evidence for transitional species is often referred to as "Darwin's dilemma."
Fossils often labeled as "transitional" could simply be variations within a species or unrelated forms altogether. This ambiguity makes it difficult to conclusively identify true transitional forms. For example, Tiktaalik is widely considered a transitional fossil and regarded as one of the most significant discoveries in the study of vertebrate evolution. However, a paper published in Nature by Niedzwiedzki et al. reveals well-preserved tetrapod trackways predating Tiktaalik by about 18 million years. The trackways suggest that fully developed tetrapods were already walking on land significantly earlier than previously believed. Since Tiktaalik dates to around 375 million years ago, the presence of older tetrapod trackways challenges its role as a direct transitional form between fish and tetrapods.
If there is no convincing evidence for transitional species, Darwin's theory was misnamed and should be called the theory of genetic adaptation rather than the theory of evolution. The reasoning is connected to the Milankovitch cycles, which influence climate patterns and have driven genetic adaptation over time.
- Milankovitch Cycles
Earth's orbital eccentricity fluctuates from nearly circular to more elliptical over a roughly 100,000-year cycle. These changes in eccentricity influence climatic patterns, contributing to the timing of glacial and interglacial periods.
Earth's axial tilt (obliquity) varies between 22.1 and 24.5 degrees over a 41,000-year cycle. This tilt affects the distribution of solar radiation between the equator and the poles, influencing the intensity of the seasons and playing a crucial role in long-term climate patterns and ice age dynamics.
The precession of Earth's rotation axis involves the gradual change in the orientation of the axis over a 26,000-year cycle. This wobble causes the timing of the seasons to shift relative to Earth's position in its orbit. This mechanism alters the intensity and timing of seasons, impacting the Earth's overall climate system.
The combined effects of changes in eccentricity, axial tilt, and the precession of the rotation axis are collectively known as the Milankovitch cycles. These cycles cause long-term global climate changes. The Sahara Desert is a good example of climate change. During periods of increased solar radiation, the Sahara experiences more rainfall, transforming it into a lush, green landscape with lakes and rivers. Conversely, decreased solar radiation results in arid conditions, turning the region into the vast desert seen today.
Fig. 3.7. Components of Milankovitch cycles
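As a rough illustration of how the three components superpose, the sketch below sums three sinusoids with the periods quoted above. The amplitudes and phases are arbitrary, so the output shows only a qualitative beat pattern, not actual insolation.

import math

# Toy superposition of the three Milankovitch components.
# Periods in thousands of years; unit amplitudes are arbitrary.
PERIODS_KYR = {'eccentricity': 100.0, 'obliquity': 41.0, 'precession': 26.0}

def combined_forcing(t_kyr):
    # Sum of unit-amplitude sinusoids: qualitative shape only.
    return sum(math.sin(2 * math.pi * t_kyr / p) for p in PERIODS_KYR.values())

for t in range(0, 501, 100):   # sample every 100,000 years
    print(f'{t:3d} kyr: {combined_forcing(t):+.2f}')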
When such changes occur, all living organisms on Earth adjust their bodies to the changing environment through genetic adaptation. Genetic adaptation is an incredible mechanism encoded in DNA that enables living organisms to survive on Earth for extended periods without becoming extinct. Evolutionists have named this adaptability microevolution, but it should be termed genetic adaptation. Let me illustrate some examples that support the concept of a 'theory of genetic adaptation.'
- Genetic Adaptation to UV radiation
If human skin is exposed to strong UV radiation due to climate change, a complex mechanism involving several proteins and hormones triggers increased melanin production through the activation of specific genes.
Fig. 3.8. Melanin production mechanism
UV radiation causes DNA damage in skin cells. This damage activates the p53 protein, a crucial regulator of the cell's response to stress and damage. The activated p53 protein acts as a transcription factor, promoting the expression of various genes involved in the protective response to UV damage. In particular, p53 stimulates the expression of the pro-opiomelanocortin (POMC) gene. POMC is a precursor polypeptide that is processed into several smaller peptides with different functions, including adrenocorticotropic hormone (ACTH) and melanocyte-stimulating hormone (MSH).
MSH binds to the melanocortin 1 receptor (MC1R) on the surface of melanocytes, the cells responsible for producing melanin. The binding of MSH to MC1R activates the receptor, which triggers a signaling cascade inside the melanocytes. Activation of MC1R leads to the upregulation of genes involved in the synthesis of melanin. Melanocytes increase the production of melanin, a pigment that absorbs and dissipates UV radiation, thereby protecting skin cells' DNA from further UV-induced damage.
Melanin is packaged into melanosomes, which are then transported to keratinocytes, the predominant cell type in the outer layer of the skin. The melanin forms a protective cap over the nuclei of keratinocytes, effectively shielding the DNA from UV radiation.
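In the same spirit as the visual-cycle program presented later in this chapter, this cascade can be sketched as a chain of function calls. Each function below is a simplified stand-in for the biochemical step just described, not a biochemical model.

# The UV-protection cascade as a chain of function calls (simplified).

def detect_uv_damage():            return 'p53 activated'
def express_pomc(p53):             return 'POMC expressed'          # p53 -> POMC
def cleave_pomc(pomc):             return 'MSH released'            # POMC -> ACTH, MSH
def bind_msh_to_mc1r(msh):         return 'MC1R signaling active'   # MSH -> MC1R
def synthesize_melanin(signal):    return 'melanin produced'
def shield_keratinocytes(melanin): return 'keratinocyte nuclei shielded'

def respond_to_uv():
    # Each step depends on the product of the previous one.
    p53 = detect_uv_damage()
    pomc = express_pomc(p53)
    msh = cleave_pomc(pomc)
    signal = bind_msh_to_mc1r(msh)
    melanin = synthesize_melanin(signal)
    return shield_keratinocytes(melanin)

print(respond_to_uv())   # keratinocyte nuclei shielded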
This is one example of genetic adaptation in response to a changing environment over a relatively short period of time.
- Genetic Adaptation to the Arctic Environment
The Inuit have developed genetic adaptations that enable them to thrive in the harsh Arctic environment. Key adaptations include variants in the fatty acid desaturase (FADS) gene cluster, which enhance their ability to metabolize omega-3 and omega-6 fatty acids from their traditional high-fat diet of marine mammals. Additionally, genetic changes in the carnitine palmitoyltransferase 1A (CPT1A) gene improve energy production from fats, crucial for maintaining body heat. These adaptations reduce the risk of cardiovascular diseases despite a high-fat diet. Moreover, the adaptation in genes regulating brown fat activity enhances thermogenesis, helping the Inuit generate heat and maintain body temperature in extreme cold. These genetic adaptations collectively support their survival in cold weather conditions. These changes seem to date from at least 20,000 years ago, when Inuit ancestors lived around the Bering Strait between Russia and Alaska. This is another example of genetic adaptation to a changing environment.
Fig. 3.9. Inuit whose genes were adapted to cold environment
- Brown Bear to Polar Bear via Genetic Adaptation
The transition from brown bears to polar bears is a good example of genetic adaptation driven by environmental pressures. Approximately 400,000 years ago, a population of brown bears became isolated in the Arctic, where they faced different survival challenges. Genetic changes that conferred advantages in the harsh, icy environment were naturally selected over time.
Fig. 3.10. Brown bear and polar bear
Key adaptations include changes in genes related to fat metabolism, such as the apolipoprotein B (APOB) gene, which improved the ability to process a high-fat diet from seals, their primary food source. Adaptations in genes like endothelin receptor type B (EDNRB) and absent in melanoma 1 (AIM1) also led to the development of white fur, providing camouflage against the snow and ice. Additionally, genetic changes affecting the bear's skeletal structure and limb morphology enhanced their swimming abilities, crucial for hunting in Arctic waters.
These genetic adaptations allowed polar bears to efficiently exploit Arctic resources, survive in extreme cold, and become distinct from their brown bear ancestors. It is important to note that despite 400,000 years of genetic changes, they remain bears and have not transformed into a fundamentally different kind of animal.
- Change of Beaks in Finches via Genetic Adaptation
The change in beak size and shape in Darwin's finches is a classic example of genetic adaptation in response to environmental pressures. On the Galápagos Islands, finches have developed various beak forms to exploit different food sources. During drought periods, when hard seeds are the primary food source, finches with larger, stronger beaks are more likely to survive and reproduce. Conversely, when the environment shifts to favor softer foods, finches with smaller, more agile beaks have a selective advantage. These adaptations are the result of changes in specific genes, such as the aristaless-like homeobox 1 (ALX1) gene, which influences beak shape, and the high mobility group AT-hook 2 (HMGA2) gene, which affects beak size.
Changes in the environment act on these genetic variations, leading to a diversity of beak forms suited to different ecological niches. Over generations, these genetic adaptations enable finches to exploit available resources efficiently, demonstrating how genetic changes can drive diverse beak shapes and sizes in response to environmental challenges. Finches have lived on the Galápagos Islands for around 2 million years. Despite this long period, they have remained finches and have not transformed into a different kind of bird (i.e., no macroevolution).
Fig. 3.11. Beaks of the Galapagos finches
In conclusion, Darwin's 'theory of evolution' should be called the 'theory of genetic adaptation,' as there is no convincing evidence of macroevolution. Microevolution refers to small-scale changes in allele frequencies within a population over time, while genetic adaptation specifically describes changes that enhance an organism's ability to survive and reproduce in its environment. Therefore, when discussing changes that confer a survival advantage, the term "genetic adaptation" is more appropriate and accurate.
d. Did We Evolve from the Apes?
Anthropologists suggest that human evolution started from Hominoidea around 20.4 million years ago. The Hominoidea diverged into Hominidae and Hylobatidae (gibbons). The Hominidae then split into Homininae and Ponginae (orangutans). The Homininae further diverged into Hominini and Gorillini (gorillas). The Hominini split into Hominina (Australopithecina) and Panina (chimpanzees). The Hominina eventually diverged into Australopithecus and Ardipithecus. Humans evolved from Australopithecus about 2.5 million years ago through Homo habilis, Homo erectus, and Homo sapiens.
Fig. 3.12. Did we evolve from apes?
Let us discuss whether humans could have evolved from Australopithecus (apes) through genetic changes over the last 2.5 million years. Human genetic maps exist, but no genetic maps are available for Australopithecus. Lucy, the most famous Australopithecus, had a brain size comparable to that of modern chimpanzees. Therefore, let’s assume that the genes of Australopithecus are similar to those of chimpanzees. The DNA sequences of humans and chimpanzees differ by about 1.23% due to single nucleotide polymorphisms (SNPs), which are single base pair changes in the DNA sequence. When considering insertions and deletions (indels) of base pairs in the genome, the total difference increases. Indels are segments of DNA that are present in one species but absent in the other. These can account for an additional 3% difference in the genome. Overall, while humans and chimpanzees share about 98-99% of their DNA sequences, the remaining 1-2% difference, along with variations in gene regulation, account for the significant physical, cognitive, and behavioral differences between the two species.
It is known that the mutation rate in chimpanzees is approximately 1 mutation per 100 million base pairs per generation, comparable to the mutation rate in humans. If we assume that one generation of Australopithecus is 25 years, then 100,000 generations will have passed in 2.5 million years. During this period, the accumulated change would be 0.1% of base pairs (100,000 / 100 million). This is less than 10% of the genetic difference between humans and chimpanzees. Thus, it seems unlikely that Australopithecus could evolve into humans within 2.5 million years. Moreover, this estimate generously assumes that all mutations are beneficial, even though most mutations are harmful.
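The arithmetic behind this estimate can be checked in a few lines; the inputs are the ones stated above, and the comparison value of 1.23% covers SNPs only, not indels.

# Checking the estimate above with the stated inputs.
mutation_rate = 1 / 100_000_000   # mutations per base pair per generation
generation_years = 25
elapsed_years = 2_500_000

generations = elapsed_years // generation_years   # 100,000
accumulated = mutation_rate * generations         # fraction of base pairs mutated
snp_difference = 0.0123                           # human-chimp SNP difference

print(f'generations: {generations:,}')                                       # 100,000
print(f'accumulated change: {accumulated:.2%}')                              # 0.10%
print(f'share of required difference: {accumulated / snp_difference:.0%}')   # ~8%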
This argument can also be examined by considering the alteration of codons through random genetic mutations. Both humans and chimpanzees have approximately 20,000 to 25,000 protein-coding genes. Due to alternative splicing and post-translational modifications, each gene can produce multiple protein variants, resulting in an estimated 80,000 to 100,000 unique functional proteins. The number of amino acids in human proteins ranges from 20 to 33,000. Assuming that 1% of genes differ between humans and chimpanzees, and both species have 20,000 protein-coding genes with an average of 100 amino acids per protein, we would expect each protein in chimpanzees to require one amino acid mutation to match its human counterpart.
For these mutations to occur in the chimpanzee DNA, each altered codon would need to avoid the three stop codons (UAA, UAG, UGA) among the 64 possible codons, because such changes would result in non-functional proteins, and it would also need to differ from the chimpanzee's original codon. This leaves 60 acceptable codons out of 64, so the probability of achieving the required change across all 20,000 proteins is (60/64)^20,000 ≈ 10^-561. Even without considering frameshift mutations (insertions or deletions of nucleotides), this probability is extraordinarily low and practically impossible to occur by random chance. This argument suggests that macroevolutionary changes, such as the transition from Australopithecus to humans, are virtually impossible through random mutations.
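Because (60/64)^20,000 underflows ordinary floating-point arithmetic, the sketch below evaluates it in log space; the inputs simply mirror the assumptions stated above.

import math

# (60/64) ** 20000 underflows double precision, so work in log10 space.
acceptable = 60    # 64 codons minus 3 stop codons and the original codon
total = 64
proteins = 20_000  # one required amino-acid change per protein (assumed above)

log10_p = proteins * math.log10(acceptable / total)
print(f'probability ~ 10^{log10_p:.0f}')   # probability ~ 10^-561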
e. Intelligent Design
Intelligent design, often considered synonymous with creationism, is the scientific theory that the universe and living organisms are best explained by an intelligent cause rather than by undirected processes such as natural selection or random chance. A notable case related to intelligent design is the 2005 federal court trial held in Dover, Pennsylvania, USA. This trial began when parents filed a lawsuit claiming that teaching intelligent design in public schools violated the Constitution. The parents argued that intelligent design is inherently religious in nature and that teaching it in public schools contravened the Establishment Clause of the U.S. Constitution, which mandates the separation of church and state.
During the trial, supporters of intelligent design and evolution presented their respective arguments. A prominent figure representing intelligent design was biochemist Michael Behe, who asserted that the complex structures of living organisms could not be explained by natural selection alone and suggested the possibility that certain features were shaped by an intelligent cause.
However, the court rejected the arguments of Behe and other proponents of intelligent design, instead accepting the positions of evolution advocates. The judge ruled that teaching intelligent design was unconstitutional, thereby deeming the instruction of intelligent design in Dover public schools illegal.
The major issue with this ruling lies in the court's uncritical acceptance of the arguments made by proponents of evolution and the related scientific papers. These papers implicitly assumed that life arose by random chance, and misinterpreted genetic adaptation to the environment as evidence of evolution. However, as summarized in Table 3.2, evolutionary theories apply only to existing living organisms and cannot account for the origin of life. Additionally, evolutionary theories merely describe the behavior of genes that are already embedded within the genetic code. Yet, the court failed to consider these scientific facts in its decision, raising significant concerns about the fairness of the ruling.
William Paley, an 18th-century philosopher, is a foundational figure in this argument, famously illustrating it with his watchmaker analogy. Paley argued that just as a watch’s complexity implies a designer, so too does the complexity of life and the universe imply the divine Creator. His ideas laid the groundwork for modern intelligent design theory. The key concepts of intelligent design include specified complexity, irreducible complexity, and fine-tuning. Several examples of fine-tuning were shown in Chapters 1 and 2. Now, let us examine specified complexity and irreducible complexity in detail.
i. Specified Complexity
Specified complexity, a key concept in intelligent design, posits that certain patterns in nature are both highly complex and specifically arranged to fulfill a particular function, indicating purposeful design. Unlike random complexity, specified complexity is not only intricate but also ordered in a way that achieves a specific outcome. This dual characteristic suggests that such patterns are unlikely to have arisen by chance alone.
One example of specified complexity is the structure of DNA. The sequence of nucleotides in DNA is highly complex, with an astronomical number of possible combinations in even a single strand. This complexity ensures that the arrangement is not the result of simple, random processes. DNA replication and repair mechanisms further highlight this complexity. These processes involve multiple proteins and enzymes working in coordination to accurately copy and maintain genetic information. The nucleotide sequence is not just complex but also highly specific, as it encodes precise instructions for synthesizing proteins. Each gene in the DNA sequence corresponds to a particular protein, and even small changes in the sequence can significantly affect the resulting protein's function. DNA also contains regulatory elements that control when and where genes are expressed, adding another layer of specificity to its function.
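The scale of this sequence space can be made concrete: a strand of n nucleotides has 4^n possible sequences, computed below in log space for a few illustrative lengths.

import math

# Number of possible nucleotide sequences of length n (4 bases per position).
for n in (10, 100, 1_000):
    print(f'n = {n:5d}: 4^n ~ 10^{n * math.log10(4):.0f} possible sequences')
# Even n = 1,000 yields roughly 10^602 possibilities.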
The specified complexity observed in DNA is unlikely to have arisen through undirected processes such as random mutations and natural selection. Instead, it suggests that an intelligent cause is a more plausible explanation for the origin of such intricate and functionally specific information.
Another example of specified complexity is the bacterial flagellum, a whip-like motorized structure used by certain bacteria for locomotion. Here’s a detailed look at why the bacterial flagellum is considered an example of specified complexity.
The bacterial flagellum is composed of about 40 different proteins that form various components such as the filament, hook, and basal body. The basal body itself functions like a rotary engine, complete with a rotor, stator, drive shaft, and propeller. For the flagellum to work, all these parts must be present and correctly assembled. The absence of any one of these components renders the flagellum non-functional, highlighting its complexity.
Fig. 3.13. Bacterial flagellum
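One way to picture this all-or-nothing requirement in code is a simple checklist model: assembly counts as functional only if every named component is present. The parts list below is abbreviated from the roughly 40 proteins mentioned above, so it is illustrative rather than exhaustive.

# All-or-nothing assembly check for the flagellum (abbreviated parts list).
REQUIRED_PARTS = {'filament', 'hook', 'basal body', 'rotor', 'stator',
                  'drive shaft', 'propeller'}

def flagellum_functional(parts_present):
    # The motor works only if every required part is present.
    return REQUIRED_PARTS <= parts_present

print(flagellum_functional(REQUIRED_PARTS))               # True
print(flagellum_functional(REQUIRED_PARTS - {'rotor'}))   # False: one missing part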
The flagellum’s components must be arranged in a very specific manner for it to function. The proteins must be assembled in a precise sequence, and their shapes must fit together exactly, much like the parts of a well-engineered machine. The flagellum is not only complex but also serves a highly specific function: propelling the bacterium. It operates at remarkable speeds, can change direction, and is energy-efficient, all of which point to a purposeful design.
The specified complexity of the bacterial flagellum cannot be adequately explained by random mutations and natural selection. The likelihood of such a highly integrated and functional system arising by chance is exceedingly low. Moreover, because intermediate forms of the flagellum would likely be non-functional, the traditional evolutionary pathway of gradual, step-by-step improvements seems implausible.
The flagellum also exemplifies irreducible complexity, a subset of specified complexity, as will be detailed in the following section. The argument is that all parts of the flagellum are necessary for its function, and therefore, it could not have evolved through successive, slight modifications, as Darwinian evolution suggests.
ii. Irreducible Complexity
Irreducible complexity, a concept introduced by biochemist Michael Behe, posits that certain biological systems are too complex to have evolved through gradual, step-by-step modifications. These systems, such as the bacterial flagellum or the blood clotting cascade, consist of multiple, interdependent parts that must all be present and functioning for the system to work. The removal of any one part renders the system nonfunctional. Such intricate and interdependent structures indicate the presence of an intelligent designer, as they cannot be explained by natural selection and random mutation alone. This concept challenges conventional evolutionary theory and supports the idea of purposeful design in nature.
One example of irreducible complexity is the visual cycle, a biochemical process in the eye that converts light into electrical signals, enabling vision. This system consists of multiple interdependent parts that must all be present and functioning for the process to work effectively. If any component is missing or non-functional, the entire visual cycle would fail, illustrating the concept of irreducible complexity. The key components of the visual cycle are photoreceptors (rods and cones), rhodopsin, opsins, retinal, signal transduction pathway, and neural processing.
Fig. 3.14. Molecular steps in visual cycle
Photoreceptors are cells in the retina that detect light. Rods are responsible for low-light vision, while cones detect color. Each photoreceptor contains light-sensitive molecules called photopigments, primarily rhodopsin in rods. This photopigment in rods consists of a protein called opsin and a light-sensitive molecule called retinal. Cones contain different opsins that respond to various wavelengths of light, enabling color vision. Retinal, a derivative of vitamin A, changes shape when it absorbs light. This shape change activates opsin, starting the visual transduction cascade. The activated opsin in turn activates a G-protein called transducin. Transducin activates phosphodiesterase (PDE), which lowers the level of cyclic GMP (cGMP) in the cell. The decrease in cGMP closes ion channels in the photoreceptor cell membrane, leading to hyperpolarization of the cell and generating an electrical signal. The electrical signal is transmitted through bipolar cells to ganglion cells, which send the signal via the optic nerve to the brain. The brain processes these signals to form visual images.
Each component of the visual cycle is interdependent. Photoreceptors, rhodopsin, retinal, transducin, PDE, and ion channels must all be present and function correctly for vision to occur. Removing any single component would cause the system to fail. We can argue that such a complex system could not have evolved through a series of small, incremental changes because intermediate stages without all components would be non-functional and thus not favored by natural selection. The intricate biochemical pathways and precise molecular interactions involved in the visual cycle highlight the complexity and specificity required for vision. The interdependent nature of its components and the complexity of the biochemical processes involved suggest that this system could not have arisen through undirected evolutionary processes, but rather points to an intelligent designer, the divine Creator.
Describing the visual cycle in terms of a computer program can help illustrate its complexity and interdependent processes. Here is a conceptual analogy using Python:
The visual cycle written as a computer program

# Conceptual analogy only: each helper method at the bottom is a simplified
# stand-in for the biochemistry described above, so the program runs end to end.

VISIBLE_SPECTRUM = range(380, 751)   # visible wavelengths in nanometers

class VisualCycle:
    # initialization: sets up the environment for the visual cycle,
    # including photoreceptors (rods and cones)
    def __init__(self):
        self.photoreceptors = {'rods': ['rhodopsin'], 'cones': ['opsins']}
        self.signal_pathway_active = False

    # user input: detects incoming light and starts the photopigment
    # activation process
    def detect_light(self, light_wavelength):
        if light_wavelength in VISIBLE_SPECTRUM:
            self.activate_photopigment(light_wavelength)

    # trigger event: changes the shape of retinal and activates opsin,
    # which then triggers the signal transduction pathway
    def activate_photopigment(self, wavelength):
        retinal = self.change_retinal_shape(wavelength)
        opsin = self.bind_retinal_to_opsin(retinal)
        self.start_signal_transduction(opsin)

    # event handling: activates transducin and PDE, leading to a reduction in
    # cGMP levels, closing ion channels, and generating an electrical signal
    def start_signal_transduction(self, opsin):
        self.signal_pathway_active = True
        transducin = self.activate_transducin(opsin)
        pde = self.activate_pde(transducin)
        self.regulate_cGMP_levels(pde)
        self.generate_electrical_signal()

    # signal handling: adjusts ion channels based on cGMP levels to
    # facilitate the electrical signal generation
    def regulate_cGMP_levels(self, pde):
        cGMP_level = self.reduce_cGMP(pde)
        self.adjust_ion_channels(cGMP_level)

    # signal output: creates and transmits the electrical signal to the brain
    def generate_electrical_signal(self):
        if self.signal_pathway_active:
            electrical_signal = self.create_signal()
            self.transmit_signal_to_brain(electrical_signal)

    # network communication: processes and forwards the signal through bipolar
    # and ganglion cells, ultimately sending it via the optic nerve
    def transmit_signal_to_brain(self, signal):
        bipolar_cells = self.process_signal_with_bipolar_cells(signal)
        ganglion_cells = self.forward_signal_to_ganglion(bipolar_cells)
        optic_nerve = self.send_signal_via_optic_nerve(ganglion_cells)
        self.visual_perception(optic_nerve)

    # final output: the brain decodes and processes the signal to create
    # a visual image
    def visual_perception(self, optic_nerve):
        visual_cortex = self.decode_signal(optic_nerve)
        self.render_image(visual_cortex)

    # simplified stand-ins for the underlying biochemistry:
    def change_retinal_shape(self, wl): return f'retinal isomerized at {wl} nm'
    def bind_retinal_to_opsin(self, r): return f'opsin activated by {r}'
    def activate_transducin(self, o): return f'transducin activated by {o}'
    def activate_pde(self, t): return f'PDE activated by {t}'
    def reduce_cGMP(self, pde): return 'low'
    def adjust_ion_channels(self, level): self.channels_open = (level != 'low')
    def create_signal(self): return 'hyperpolarization signal'
    def process_signal_with_bipolar_cells(self, s): return f'bipolar cells relay {s}'
    def forward_signal_to_ganglion(self, b): return f'ganglion cells fire on {b}'
    def send_signal_via_optic_nerve(self, g): return f'optic nerve carries {g}'
    def decode_signal(self, n): return f'visual cortex decodes {n}'
    def render_image(self, v): print(f'image rendered: {v}')

VisualCycle().detect_light(500)   # a 500 nm (green) photon triggers the cycle
This analogy illustrates the interdependent steps and complexity of the visual cycle, much like a computer program with several functions and event handlers working together to achieve a specific output. If we miss any of the steps or use them in the wrong order, the intended result will not be achieved.
The fact that the visual cycle can be represented as a computer program suggests that the eye was intelligently designed. The blueprint for the eye's design is linked to the PAX6 gene, located on chromosome 11, which plays a crucial role in eye development.
iii. Notable Books about Intelligent Design
Evolution: A Theory in Crisis (Michael Denton, 1985): Denton critiques Darwinian evolution, arguing that the complexity of biological systems cannot be adequately explained by natural selection alone. He presents evidence from various fields, such as molecular biology and paleontology, to highlight gaps and inconsistencies in evolutionary theory. He contends that the intricate structures and functions observed in living organisms point to intelligent design rather than random mutations and selection. The book challenges the prevailing scientific consensus and suggests that an alternative explanation is needed to account for the origin and diversity of life.
Darwin's Black Box: The Biochemical Challenge to Evolution (Michael J. Behe, 2006): In this seminal book, Michael Behe introduces the concept of irreducible complexity, arguing that certain biological systems, such as the bacterial flagellum, are too complex to have evolved through natural selection alone. Behe contends that these systems are best explained by intelligent design. The book challenges the adequacy of Darwinian evolution in explaining the intricate machinery of life at the molecular level and has sparked significant debate in both scientific and philosophical circles.
Darwin on Trial (Phillip Johnson, 2010): This book critiques the scientific foundations of Darwinian evolution. Johnson, a law professor, examines the evidence for evolution with the scrutiny of a legal analyst. He argues that natural selection and random mutation do not adequately explain the complexity of life. Johnson suggests that much of the support for Darwinism is based on philosophical naturalism rather than empirical science. He challenges the scientific community's reluctance to consider alternative explanations, such as intelligent design, and calls for a more open discussion on the origins of life. The book is influential in promoting intelligent design and questioning the dominance of Darwinian theory in biology.
Signature in the Cell: DNA and the Evidence for Intelligent Design (Stephen C. Meyer, 2010): This book explores the origins of life and the information encoded in DNA. Meyer argues that the complex and specified information within DNA is best explained by an intelligent cause, as naturalistic processes fail to account for the origin of such information. He presents a detailed case for intelligent design based on the intricacies of genetic information, suggesting that life’s origin points to purposeful creation rather than random processes.
Darwin Devolves: The New Science About DNA That Challenges Evolution (Michael J. Behe, 2020): In this later book, Behe argues that recent genetic discoveries undermine traditional Darwinian evolution. He asserts that while natural selection and random mutations can explain minor adaptations, they fail to account for the complexity of molecular machinery within cells. He introduces the concept of "devolution," where mutations lead to the loss of genetic information rather than the creation of new, beneficial traits. Behe contends that these genetic limitations point to the necessity of an intelligent designer, challenging the traditional evolutionary framework and proposing that intelligent design offers a more plausible explanation for the complexity of life.
The Mystery of Life's Origin: Reassessing Current Theories (Charles B. Thaxton et al., 2020): This groundbreaking work critiques the various naturalistic theories of life's origin and proposes intelligent design as a more plausible explanation. The authors argue that prebiotic chemistry and the formation of life from non-life are better explained by an intelligent cause. The book discusses the shortcomings of contemporary origin-of-life theories and introduces intelligent design as a scientifically viable alternative, laying the foundation for the modern intelligent design movement.
The Design Inference: Eliminating Chance through Small Probabilities (William A. Dembski & Winston Ewert, 2023): This book lays the theoretical groundwork for detecting design in nature, developing a mathematical framework for distinguishing intelligent causation from chance. The authors introduce the concept of "specified complexity," which combines complexity with an independently given pattern, and argue that complex systems exhibiting specified complexity are best explained by an intelligent cause rather than random processes. The book uses probability theory to show that certain patterns in nature are too improbable to have arisen by chance. Through rigorous analysis, Dembski and Ewert argue that recognizing design is a legitimate scientific practice and provide tools for distinguishing design from chance in biological systems.
f. Particle Physics and Creation
In the previous section, we explored the origin of life by discussing its fundamental building blocks, including amino acids, RNA, proteins, DNA, and cells. These components are made up of atoms, which we implicitly assume to exist naturally. Atoms are composed of elementary particles. In this section, we will take a closer look at the origin of these particles, exploring whether they emerged spontaneously or were formed through a purposeful process.
According to the Standard Model of particle physics, all matter in the universe is composed of 17 elementary particles. These include 6 quarks, 6 leptons, 4 gauge bosons (gluons, photons, Z bosons, and W bosons), and the Higgs boson. Each of these particles has specific properties, such as mass, charge, and spin, and each plays a unique role in particle interactions, similar to how organelles in a cell perform distinct functions.
Fig. 3.15. The elementary particles of the Standard Model
Quarks are fundamental components of matter, essential in forming protons and neutrons. Protons consist of two up quarks and one down quark, while neutrons are made of one up quark and two down quarks. Quarks are held together by the strong force, mediated by gluons. Unlike gravitational or electromagnetic forces, which diminish with distance, the strong force between quarks increases as they move apart and decreases as they get closer, maintaining a specific separation. Quarks can change types during particle interactions, such as beta decay, where a neutron transforms into a proton by converting a down quark to an up quark.
Gauge bosons are fundamental particles that mediate the basic forces of nature. These include the photon for the electromagnetic force, the W and Z bosons for the weak force, and the gluon for the strong force. Each gauge boson is associated with a specific field and carries the force between particles. They are essential for explaining interactions at the quantum level, governing how particles interact and bind together to form matter.
The Higgs mechanism is a process that explains how elementary particles acquire mass. It involves the Higgs field, an energy field that permeates the universe. When particles interact with the Higgs field, they acquire mass, similar to how objects moving through a medium experience resistance. The Higgs boson, a particle associated with the Higgs field, was discovered in 2012, confirming this theory. Without the Higgs mechanism, particles would remain massless, and the universe would lack the structure necessary for the formation of atoms, living organisms, planets, and stars.
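To make this roster concrete, here is a minimal sketch in Python, in the spirit of the visual-cycle analogy earlier in this chapter. The charges and spins shown are standard textbook values (charges in units of the elementary charge), but the selection of particles and the data structure itself are purely illustrative:
from fractions import Fraction

# a few of the 17 elementary particles and their defining properties
particles = {
    'up quark':   {'kind': 'quark',        'charge': Fraction(2, 3),  'spin': Fraction(1, 2)},
    'down quark': {'kind': 'quark',        'charge': Fraction(-1, 3), 'spin': Fraction(1, 2)},
    'electron':   {'kind': 'lepton',       'charge': Fraction(-1),    'spin': Fraction(1, 2)},
    'photon':     {'kind': 'gauge boson',  'charge': Fraction(0),     'spin': Fraction(1)},
    'Higgs':      {'kind': 'scalar boson', 'charge': Fraction(0),     'spin': Fraction(0)},
}

def total_charge(quarks):
    # charge bookkeeping for the quark content described in the text
    return sum(particles[q]['charge'] for q in quarks)

print(total_charge(['up quark', 'up quark', 'down quark']))    # proton:  1
print(total_charge(['up quark', 'down quark', 'down quark']))  # neutron: 0
Using exact fractions makes the bookkeeping transparent: the quark charges sum to exactly +1 for the proton and 0 for the neutron, just as described above.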
Particle physics operates at an incredibly advanced and intricate level, offering profound insights into the nature and origins of the universe. This prompts us to ask the following fundamental questions, among many others:
- How were the 17 fundamental particles created with such precise properties?
- How did the gauge bosons acquire the property of force mediation?
- How did the Higgs mechanism originate?
- How did the beta decay mechanism originate?
- How can the properties of elementary particles be mathematically described?
If the answers to the above questions were purely the result of random processes, the world as we know it might not exist. For instance, if even one fundamental particle were missing, if the Higgs mechanism had not been established, or if the mass and spin values of elementary particles were slightly different, neutrons, protons, and electrons would not be able to hold together. This would result in the collapse of all matter, making the formation of anything, including human beings, impossible. Such fine-tuned precision in the fundamental structure of the universe exemplifies the concept of "irreducible complexity" within the realm of particle physics, a principle often associated with intelligent design.
The creation of elementary particles to form matter can be compared to the formation of cells and organelles in multicellular organisms. Just as specific cells and organelles each have distinct roles and properties that contribute to the complex functionality of living beings, elementary particles possess precise characteristics that enable the formation of atoms, molecules, and ultimately, all matter. This parallel underscores the sophistication and intentionality inherent in the natural world, whether at the microscopic level of living cells, the subatomic realm of fundamental particles, or the macroscopic scale of living organisms, stars, and galaxies.
The fact that the formation of elementary particles and their interactions can be precisely described using the mathematical equations of quantum mechanics suggests that they are the result of an intentional mathematical design rather than mere chance. Otherwise, we would have to assume that elementary particles possess intelligence and the ability to determine, on their own, the exact values of mass, charge, and spin required to form matter and interact with other particles. However, we know this is not the case, as elementary particles do not have consciousness or an intrinsic understanding of quantum mechanics.
The intricate design and coordination observed in both biological systems and particle physics strongly suggest the presence of underlying intelligence and purposeful creation—a hallmark of intelligent design—rather than a series of random occurrences.
g. Aliens and Creation
The possibility of aliens, or extraterrestrial life, has fascinated scientists and the public alike for decades. Given the vastness of the universe, with billions of galaxies each containing billions of stars and potentially even more planets, it seems statistically plausible that life could exist elsewhere if life arises spontaneously. The number of extraterrestrial civilizations in a galaxy can be estimated by the Drake Equation: N = R* × fp × ne × fl × fi × fc × L, where N is the number of advanced civilizations, R* is the star formation rate, fp is the fraction of stars with planets, ne is the number of planets per star capable of supporting life, fl is the fraction of those planets where life develops, fi is the fraction of planets where intelligent life evolves, fc is the fraction of civilizations that can send signals, and L is the length of time civilizations can communicate. With an appropriate value for each parameter, the estimated number of civilizations in a galaxy is about 2.
Fig. 3.16. Do aliens exist?
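To see how an estimate of about 2 arises, here is a minimal sketch in Python with one purely illustrative set of parameter values; the result is only as good as these assumed inputs:
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    # N = R* x fp x ne x fl x fi x fc x L
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# assumed inputs: 1 star formed per year, half of stars with planets, 2 life-
# supporting planets each, 10% develop life, 10% of those develop intelligence,
# 20% can signal, and each civilization communicates for 1,000 years
N = drake(R_star=1, f_p=0.5, n_e=2, f_l=0.1, f_i=0.1, f_c=0.2, L=1000)
print(N)  # 2.0 civilizations per galaxy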
The search for extraterrestrial intelligence (SETI) began in 1960. SETI projects utilize various methods and technologies to scan the cosmos for evidence of alien civilizations. Here are some key SETI projects:
- Project Ozma was the first modern SETI experiment. It used a radio telescope to scan the stars Tau Ceti and Epsilon Eridani for potential extraterrestrial signals.
- SETI@home was a distributed computing project that utilized the idle processing power of home computers. Volunteers installed software on their personal computers to analyze radio signals for signs of extraterrestrial intelligence.
- The Allen Telescope Array is a dedicated network of radio telescopes designed for a continuous and systematic search for extraterrestrial signals. It consists of multiple small dishes working together to survey large areas of the sky.
- Breakthrough Listen is the most comprehensive SETI project to date, aimed at surveying one million of the closest stars and 100 nearby galaxies for potential signals.
- The Fast Radio Burst project investigates mysterious fast radio bursts detected from space, which could provide insights into unknown cosmic phenomena.
- Laser SETI is a project focused on detecting optical signals from extraterrestrial civilizations, exploring the possibility of interstellar communication through laser transmissions.
Despite continuing searches using advanced radio and optical telescopes, the SETI projects have not found definitive evidence of intelligent extraterrestrial life.
Fig. 3.17. Radio telescopes used for SETI
If numerous extraterrestrial civilizations exist, they could have visited or might be visiting us now. In such a case, what kind of space travel methods would they use? Traveling to space using flying objects (rockets or UFOs) faces insurmountable challenges due to the enormous size of the universe. Even the nearest star, Proxima Centauri, is 4.24 light-years away, requiring tens of thousands of years to reach with current technology. The vast distances involved render it impossible to explore even our galaxy, let alone the universe, within human lifespans.
Possible advanced propulsion methods include warp drives and travel through wormholes. The warp drive is a theoretical concept for faster-than-light space travel, inspired by Einstein's general relativity. Proposed by physicist Miguel Alcubierre in 1994, the warp drive involves creating a "warp bubble" that contracts space in front of a spacecraft and expands space behind it. This would allow the spacecraft to move faster than light relative to external observers without violating the laws of physics. The key challenge is that it requires exotic matter with negative energy density, which has not been discovered or created. While promising in theory, significant scientific and technological advancements are needed to make a warp drive feasible for practical use in space exploration.
Fig. 3.18. Wormhole
Space travel through wormholes is a theoretical concept involving shortcuts through space-time that connect distant points in the universe. Predicted by Einstein's general relativity, wormholes, or Einstein-Rosen bridges, could potentially allow instantaneous travel across vast cosmic distances. For practical use, a traversable wormhole would need to be stabilized, theoretically requiring exotic matter with negative energy density to prevent collapse. Despite being a popular science fiction trope, wormholes remain speculative with no experimental evidence. If feasible, they could revolutionize space travel, enabling exploration of distant galaxies and reducing travel time from years to mere moments. However, significant scientific and technological breakthroughs are required to make this concept a reality.
Fig. 3.19. Teleportation
Teleportation through hyperspace or the bulk could be another method to achieve instantaneous travel across vast distances by bypassing the conventional three-dimensional space. Hyperspace refers to an additional dimension or series of dimensions beyond the familiar three spatial dimensions and one temporal dimension, providing a shortcut through the fabric of the universe. Similarly, the bulk is a term used in theories such as brane cosmology within string theory, where our universe is envisioned as a "brane" within a higher-dimensional space called the bulk. In these theories, teleportation involves moving through these higher dimensions to reappear instantaneously in another location within our universe. Theoretical frameworks like the Randall-Sundrum model propose the existence of such higher dimensions that could allow for shortcuts through space-time. If such dimensions exist and could be accessed, it might be possible to exploit them for teleportation, avoiding the constraints of relativistic travel and potentially making faster-than-light travel feasible.
If life arises spontaneously, as the Drake Equation assumes, the total number of extraterrestrial civilizations in the universe would be about 400 billion (2 civilizations in each of 200 billion galaxies). Life on Earth began approximately 4 billion years ago. Now, imagine that 1% of extraterrestrial civilizations started 1 million years earlier than ours and followed a similar evolutionary path. In that case, their civilizations would be 1 million years more advanced than ours. With such a significant head start, they might have developed advanced technologies for teleportation, enabling them to travel anywhere in the universe as easily as we visit our neighbors. If the population of each such civilization is 1 billion, the total number of aliens would be about four quintillion (4 × 10¹⁸). If only 1% of them could visit Earth for just one day every 10 years, Earth would be crowded with about 10 trillion aliens each day (roughly 1,000 times the current human population). However, we have not observed any evidence of their presence. How can we explain this apparent contradiction?
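The back-of-the-envelope arithmetic behind these figures can be checked directly; every input below is one of the assumptions stated above, not a measurement:
galaxies = 200e9                         # assumed number of galaxies
civilizations = 2 * galaxies             # about 2 per galaxy, as estimated above
with_head_start = 0.01 * civilizations   # 1% with a million-year head start
aliens = with_head_start * 1e9           # 1 billion inhabitants each
visitors_per_day = 0.01 * aliens / (10 * 365)  # 1% visiting one day per decade
print(f'{aliens:.0e} aliens')                  # 4e+18
print(f'{visitors_per_day:.1e} visitors/day')  # ~1.1e+13, about 10 trillion
print(visitors_per_day / 8e9)                  # on the order of 1,000 times the human population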
This problem is known as the Fermi Paradox, named after Enrico Fermi, who famously asked, "Where is everybody?" The answers could be: (i) the assumption (evolution) in the Drake Equation is wrong, or (ii) advanced civilizations might use technology that is undetectable with our current methods or deliberately avoid detection. Unless extraterrestrials are mere microbes or invisible beings, their existence would likely have been revealed to us in some way by now. The fact that we have not yet detected any evidence of their existence suggests that the evolutionary assumption in the Drake Equation is most likely incorrect.
h. Instincts in Living Organisms and Creation
Computers are composed of three main components: hardware, software, and firmware. Firmware is specialized software stored in non-volatile memory such as ROM or flash (for example, a PC's UEFI firmware), providing critical control for specific hardware and acting as the intermediary between hardware and software. It is crucial for system boot-up, managing hardware operations, and ensuring device functionality.
Firmware in computers and instinct in living organisms share a key similarity: both are intrinsic, pre-programmed systems that govern essential functions. Firmware initializes and manages operations, ensuring proper function from power-on. Similarly, instinct is a natural, innate behavior pattern that directs survival activities, such as feeding, mating, and fleeing from danger. Both systems operate automatically without conscious input, providing foundational guidance for effective functioning and environmental response. In essence, firmware is to computers what instinct is to living organisms—an embedded, pre-configured system essential for basic operation and survival. Just as firmware is embedded in ROM by computer designers, instinct is embedded in the brains and nervous systems of living organisms by the divine Creator. Let me show some examples of instincts that illustrate this concept.
i. Nest Building of Mason Bees
In Jean-Henri Fabre's book "The Mason Bees" (part of "Book of Insects"), he describes the intricate nest-building process of mason bees. These bees select a suitable flat surface, often a stone, to start their construction. They gather mud and small pebbles, meticulously creating cells for their offspring. The female bee carries mud pellets to the site, shaping and compacting them into a secure cell wall. She then collects nectar and pollen to provision each cell, laying a single egg before sealing it with more mud. This process is repeated, resulting in a series of neatly arranged, pebble-reinforced mud cells that protect the developing larvae. Fabre’s observations highlight the remarkable precision and diligence of these solitary bees.
He describes an experiment where he swapped an unfinished nest with a completed one. The mason bee, upon returning to find her unfinished nest replaced with a completed one, exhibited an interesting behavior. Instead of recognizing that the work was already done, the bee continued her construction as if no change had occurred. She did not recognize the finished nest as her own work and persisted in her habitual actions, bringing mud and continuing to build.
Fig. 3.20. Mason bee builds nest on top of the completed one
This experiment illustrates the instinctual and programmed nature of the bee's behavior, driven by an internal sequence of actions rather than visual cues of the nest’s state.
Fabre then performed the opposite experiment, swapping a completed mason bee nest with an unfinished one. He observed that when the mason bee returned to the site and found her completed nest replaced with an unfinished one, she did not continue working on the new, incomplete nest. Instead, the bee seemed confused and spent time inspecting the altered nest, but ultimately did not resume construction; she simply moved on to the next action in her sequence, filling the nest with honey even as it overflowed. This behavior demonstrates the mason bee's strong attachment to her specific nest and her difficulty in adapting to unexpected changes in her environment. This experiment also highlights the instinctual nature of the mason bee's nest-building process.
Fig. 3.21. Mason bee fills honey to unfinished nest
Fabre conducted another interesting experiment. The mason bee fills her nest with nectar first, then turns 180 degrees and dusts the pollen off her legs and body. If she is disturbed just as she is about to dust off the pollen, she flies away and waits for the threat to pass. After returning to the nest, she starts the sequence over from the beginning, filling her nest with nectar even if there is nothing left in her nectar sac. This experiment shows that bees instinctively follow a built-in nectar-gathering program, and their sequence of actions cannot be altered.
Fig. 3.22. Behavior of the mason bee when disrupted
When the mason bee finishes her nest building, she fills it with nectar and pollen, lays her egg on it, and then seals the top of the nest. The sealed top is as hard as cement. Fabre conducted another experiment: for one nest, he pasted paper on the top, and for another, he placed a paper cone on top. He then observed the behavior of the hatched mason bees. For the nest with pasted paper, the bee used her strong jaws to cut through the top without any problem. For the nest with a paper cone, she cut through the top but did not know what to do next. Expecting to see the open sky, she became disoriented by the paper cone, did not attempt to pierce it, and eventually died.
Fig. 3.23. Bee nest pasted with paper and covered with paper cone
All of the above experiments demonstrate the instinctive and programmed nature of the mason bee's behavior, driven by an internal sequence of actions embedded in her genetic code.
ii. Nest Building of Weaverbirds
The weaverbird, known for its intricate and elaborate nests, skillfully weaves blades of grass and other plant materials into complex structures, showcasing remarkable craftsmanship and instinctual engineering.
Fig. 3.24. Nest of weaverbird
Eugène Marais, a South African naturalist and poet, conducted fascinating experiments on weaverbirds to study their nest-building behavior and the role of instinct. Marais aimed to understand whether the intricate nest-building skills of weaverbirds were purely instinctual or if they involved learned behavior.
Marais raised weaverbirds in isolation from their natural environment to ensure they had no exposure to other birds or nest-building activities. He observed these isolated birds from hatching to maturity, ensuring they had no opportunity to learn from other weaverbirds for four generations. For the fifth generation, Marais provided the same materials that wild weaverbirds use for nest building, such as grass and twigs. Despite never having seen a nest or other birds building one, the isolated weaverbirds began to build nests that were almost identical to those constructed by their wild counterparts. They demonstrated the same intricate weaving techniques, knotting methods, and overall structure. The nests built by these isolated birds showed consistent design features typical of their species, indicating that their nest-building skills were innate rather than learned through observation or mimicry.
Marais concluded that the complex nest-building behavior of weaverbirds is driven by instinct. This innate behavior is encoded in their brain and nervous system, allowing them to construct elaborate nests without prior experience or learning. These innate behaviors are purposefully designed and passed down through generations via DNA.
iii. Formation of the Nautilus Shell
The nautilus is a marine mollusk known for its beautiful and distinctive shell. The shape of its shell follows a precise logarithmic spiral. The formation of the nautilus shell is yet another remarkable example of instinct, involving a complex interplay of biological and chemical processes that are intricately coordinated to produce its unique structure.
The process begins when the nautilus is still an embryo inside an egg. The initial shell, called the protoconch, forms during this stage. This first chamber is small and provides the foundation for subsequent shell growth. The mantle, a specialized tissue that lines the shell, secretes layers of calcium carbonate (CaCO3) in the form of aragonite, a crystalline structure. The mantle cells extract calcium ions from seawater and combine them with carbonate ions to form calcium carbonate. The mantle also secretes an organic matrix composed of proteins and polysaccharides, which serves as a scaffold for calcium carbonate deposition. This matrix helps control the shape and orientation of the aragonite crystals, ensuring the shell's strength and durability.
Fig. 3.25. Nautilus shell showing logarithmic spiral pattern
As the nautilus grows, it periodically adds new chambers to its shell. Each new chamber is larger than the previous one, accommodating the increasing size of the nautilus. The nautilus moves forward in the shell and seals off the older chambers with a wall called a septum, creating a series of progressively larger, interconnected chambers. A specialized organ called the siphuncle runs through all the chambers of the shell. This tube-like structure adjusts the gas and liquid content within the chambers. By regulating the gas (mostly nitrogen) and liquid levels, the siphuncle helps the nautilus control its buoyancy, allowing it to move up and down in the water column. The outermost layer of the shell, known as the periostracum, is an organic layer that protects the underlying calcium carbonate layers from dissolution and physical damage. Beneath the periostracum are layers of aragonite, arranged in a nacreous or prismatic structure, contributing to the shell's iridescence and strength.
The intricate coordination required for the secretion of calcium carbonate, the formation of chambers, and the regulation of buoyancy through the siphuncle indicates an all-or-nothing system that is too complex to have arisen through gradual evolution. The absence of clear transitional fossils in the record, coupled with the nautilus being labeled a "living fossil," implies a sudden appearance and suggests that its sophisticated shell formation points toward purposeful creation rather than undirected evolution. The nautilus does not possess mathematical or biochemical knowledge; therefore, the precise formation of its logarithmic shell shape, the complex biochemical regulation of shell secretion, and the seamless integration of its buoyancy system are not the results of random processes. Instead, these features suggest a pre-programmed genetic blueprint that enables the nautilus to construct its intricate shell with remarkable precision, reinforcing the idea of purposeful design rather than unguided evolution.
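In mathematical terms, a logarithmic spiral is r = a·e^(bθ). The short Python sketch below uses an illustrative growth constant b (chosen so that each whorl is about three times wider than the last, roughly the ratio reported for nautilus shells) to show the self-similarity the text describes:
import math

a, b = 1.0, 0.175   # illustrative constants; exp(2*pi*b) ~ 3.0 per whorl
for turn in range(4):
    theta = 2 * math.pi * turn
    print(f'whorl {turn}: r = {a * math.exp(b * theta):7.2f}')
# successive whorls keep the constant ratio exp(2*pi*b) ~ 3.0, so the
# shell's shape never changes as it grows; that is self-similarity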
i. Mathematics in Nature and Creation
"Mathematics is the language in which God has written the universe.” - Galileo Galilei
Mathematical patterns and principles are abundantly found in nature, including the golden ratio, golden angle, Fibonacci sequence, logarithmic spiral, and fractals.
- Golden ratio, often denoted by the Greek letter φ, is an irrational number approximately equal to 1.618. It occurs when two quantities a and b (with a > b) satisfy (a+b)/a = a/b = φ, that is, when the ratio of their sum to the larger quantity equals the ratio of the larger quantity to the smaller.
- Golden angle is the angle subtended by two radii that divide a circle into two arc lengths in the golden ratio. It is the smaller of the two angles (~137.5 degrees) created when dividing the circumference of a circle according to the golden ratio.
- Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 or 1 (e.g., 0, 1, 1, 2, 3, 5, 8, ...).
- Logarithmic spiral is a self-similar spiral curve that appears frequently in nature. It is characterized by the property that the angle between the tangent and radial line at any point is constant.
- Fractals are complex patterns that are self-similar across different scales. They are often created by repeating a simple process over and over in an ongoing feedback loop.
Fig. 3.26. Golden ratio, golden angle, logarithmic spiral, and fractal
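A few lines of Python make the relationships among these quantities concrete: ratios of consecutive Fibonacci numbers converge to the golden ratio, and the golden angle follows directly from it.
fib = [1, 1]
while len(fib) < 25:
    fib.append(fib[-1] + fib[-2])

phi = fib[-1] / fib[-2]             # consecutive-term ratios converge to phi
golden_angle = 360 * (1 - 1 / phi)  # the smaller arc of a golden-ratio cut
print(f'phi ~ {phi:.6f}')                    # ~1.618034
print(f'golden angle ~ {golden_angle:.1f}')  # ~137.5 degrees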
Let's explore where these mathematical principles are found in nature.
Phyllotaxis is the arrangement of leaves, flowers, or other botanical structures on a plant stem. It is a key concept in botany and reflects the way plants maximize their exposure to sunlight and other environmental resources. The arrangement of leaves follows the Fibonacci sequence, where the number of leaves in successive spirals is a Fibonacci number. The possible phyllotaxis patterns are 1/2, 1/3, 2/5, 3/8, 5/13, 8/21, etc., where the numerators and denominators separately form Fibonacci sequences.
The 3/8 phyllotaxis refers to a pattern of leaf arrangement where each leaf is separated from the next by three-eighths of a full 360-degree rotation around the stem. This means that each successive leaf is positioned at an angle of 3/8 × 360 = 135 degrees (called the divergence angle) from the previous one. The divergence angle converges to the golden angle of 137.5 degrees in plants with a large number of leaves. This fractional divergence helps distribute the leaves in a way that maximizes exposure to sunlight and minimizes overlap and shade, ensuring that each leaf receives adequate light and air. Proper spacing also allows for optimal distribution of water and nutrients throughout the plant.
Fig. 3.27. 2/5 phyllotaxis (a) and 3/8 phyllotaxis (b)
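A quick sketch shows how the divergence angles of the phyllotaxis fractions listed above close in on the golden angle:
fib = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
for k in range(len(fib) - 2):
    angle = 360 * fib[k] / fib[k + 2]
    print(f'{fib[k]}/{fib[k + 2]} phyllotaxis -> {angle:.1f} degrees')
# 180.0, 120.0, 144.0, 135.0, 138.5, 137.1, 137.6, 137.5: approaching
# the golden angle of ~137.5 degrees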
Similar patterns can also be found in many flowers. For example, the numbers of leaves, branches, and petals in sneezewort form consecutive Fibonacci numbers: 1, 1, 2, 3, 5, 8 for leaves; 1, 2, 3, 5, 8, 13 for branches; and 5, 8 or 8, 13 for petals.
Fig. 3.28. Leaves and branches of sneezewort
Not only the leaves, but also the shoots, fruits, and seeds of a plant are governed by the Fibonacci sequence and golden angle.
The sprouting pattern of the Norway spruce follows the principles of the Fibonacci sequence and the golden angle. Each new shoot emerges at an angle of approximately 137.5 degrees (golden angle) from the previous one.
Fig. 3.29. Sprouting pattern of Norway spruce
As a result, the branches form in a spiral pattern around the trunk, aligning with Fibonacci numbers in their distribution. This natural pattern enhances the tree's ability to efficiently gather sunlight, water, and nutrients, supporting its growth and health.
The daisy exhibits the Fibonacci pattern and golden angle in its floral arrangement. The flower's petals and seeds align in spirals that follow the Fibonacci sequence, where the number of spirals in each direction typically corresponds to successive Fibonacci numbers, such as 21 and 34. Additionally, the divergence angle between successive petals or seeds is approximately the golden angle. If the spiral is wound at the golden angle, it forms a logarithmic spiral, and if the florets of a daisy form a logarithmic spiral, they maintain their shape as they grow. A logarithmic spiral is self-similar, meaning that the shape of the spiral remains consistent even as it expands. The inherent properties of the logarithmic spiral allow the daisy to maintain its overall geometric structure throughout its growth.
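A classic mathematical model of such seed heads (often attributed to H. Vogel, and used here purely as an illustrative sketch) places floret n at angle n × 137.5° and radius proportional to √n. Those two rules alone reproduce the interlocking spirals seen in a daisy or sunflower head:
import math

golden_angle = math.radians(137.5)
for n in range(1, 6):    # first few florets of a model flower head
    theta = n * golden_angle
    r = math.sqrt(n)     # sqrt growth keeps the packing even
    x, y = r * math.cos(theta), r * math.sin(theta)
    print(f'floret {n}: ({x:+.2f}, {y:+.2f})')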
Similar patterns are found in pinecones, cauliflower, and Romanesco broccoli. The scales of a pinecone are intricately arranged in spirals that follow Fibonacci numbers, generally displaying 8 spirals in one direction and 13 in the opposite direction, with each scale carefully positioned at approximately the golden angle. Similarly, the florets of cauliflower are wound in 5 spirals in one direction and 8 in the other, reflecting the same numerical sequence. In Romanesco broccoli, the florets are arranged in 13 spirals in one direction and 21 in the other direction.
The Fibonacci numbers in pineapples can be found in the arrangement of their eyes. These eyes are organized into spirals that follow Fibonacci numbers, typically forming three distinct sets of spirals. Commonly, you can find 8 spirals ascending in one direction, 13 in the opposite direction, and sometimes 21 in another, each set aligning with consecutive Fibonacci numbers. This pattern ensures efficient packing and maximizes the fruit's structural integrity. The arrangement allows the pineapple to grow uniformly and distribute nutrients evenly, showcasing the natural application of Fibonacci sequences in plant growth and development.
Fig. 3.30. Fibonacci sequence and logarithmic spiral found in plants
The growth curve that follows a logarithmic spiral can be found not only in plants but also in humans and other animals. Examples include the human pinna, the cochlea in the ear, human fingers, the tail of a seahorse, the horns of a mountain goat, and the shells of various snails, including the nautilus. If these growth patterns did not follow a logarithmic spiral, they would be unable to sustain their characteristic shape as they continue to grow, ultimately losing their distinct functionality and unique structural integrity.
For example, if the cochlea's growth pattern did not follow a logarithmic spiral, it would significantly affect its ability to process sound efficiently. The logarithmic spiral allows for a gradient of frequencies to be detected along its length, with high frequencies at the base and low frequencies at the apex. Deviations from this pattern could result in uneven spacing of frequency detection areas, leading to impaired hearing or difficulty distinguishing between different sound frequencies. This precise arrangement is essential for the cochlea's role in converting sound waves into neural signals, enabling accurate auditory perception.
Fig. 3.31. Cochlea, ear, seahorse, and hand-knuckle bone
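The cochlea's frequency-position map is often described by Greenwood's empirical function. The sketch below uses parameter values commonly quoted for the human cochlea; treat them as illustrative rather than exact:
# f(x) = 165.4 * (10**(2.1 * x) - 0.88) Hz, where x is the fractional
# distance along the cochlea measured from the apex
def greenwood(x):
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f'x = {x:.2f} -> {greenwood(x):8.0f} Hz')
# equal steps along the spiral cover equal ratios of frequency,
# from ~20 Hz at the apex to ~20 kHz at the base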
Many fractal patterns can be found in nature, including the branching patterns of trees, the structure of fern leaves, the arrangement of florets in cauliflower, broccoli, and Romanesco broccoli, the root systems of many plants, and pinecones. Fractal patterns are also present in biological systems.
The branching of blood vessels, from major arteries down to the smallest capillaries, follows fractal patterns. The fractal structure maximizes the surface area for nutrient and gas exchange while minimizing the energy required to pump blood throughout the body. The fractal branching ensures that every cell is sufficiently supplied with oxygen and nutrients. Furthermore, the fractal nature of blood vessels contributes to their robustness and adaptability. The repeating patterns can easily adapt to growth and repair, maintaining efficient circulation despite changes or damage.
The human respiratory system exhibits fractal patterns too. The structure of the lung comprises the trachea branching into bronchi, which further divide into smaller bronchioles, culminating in alveoli where gas exchange occurs. Each division maintains the fractal pattern. This fractal architecture maximizes the surface area for gas exchange, roughly as large as a tennis court, while minimizing the volume occupied by the lungs. By following a fractal pattern, the lungs can efficiently deliver oxygen to the bloodstream and expel carbon dioxide, optimizing respiratory function.
Fig. 3.32. Fractals found in fern and Romanesco broccoli
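How much such branching buys can be demonstrated with a small sketch. The doubling rule, the scale factor, and the cylinder geometry below are illustrative assumptions, although roughly 23 generations of branching is a commonly quoted figure for the human airway tree:
import math

def airway_tree(generations, radius=1.0, length=10.0, scale=0.8):
    # self-similar tree: every branch splits into two scaled-down copies
    branches, area = 1, 0.0
    for g in range(generations + 1):
        branches = 2 ** g                        # branches in generation g
        r = radius * scale ** g                  # each generation shrinks by 'scale'
        l = length * scale ** g
        area += branches * 2 * math.pi * r * l   # lateral area of the cylinders
    return branches, area

terminal, total_area = airway_tree(23)
print(terminal)           # 8388608 terminal branches from one repeated rule
print(round(total_area))  # total surface area grows geometrically with depth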
The presence of mathematical patterns like the golden angle, Fibonacci sequence, and fractals in nature and biological systems challenges the idea of random mutations and natural selection. The golden angle's optimal spacing for leaves and the Fibonacci sequence's efficiency in seed arrangement, for instance, suggest a purposeful design to maximize resource utilization. Fractals' self-similar complexity in structures like blood vessels and plant roots indicates a sophisticated level of organization that cannot be achieved by random processes. The complexity, precision, and universal presence of these structures point to a predetermined intelligent design rather than an undirected evolutionary process.
4. Invitation to the Gospel
“When I consider your heavens, the work of your fingers, the moon and the stars, which you have set in place,
what is mankind that you are mindful of them, human beings that you care for them?
You have made them a little lower than the angels and crowned them with glory and honor.
You made them rulers over the works of your hands; you put everything under their feet:
all flocks and herds, and the animals of the wild,
the birds in the sky, and the fish in the sea, all that swim the paths of the seas.
LORD, our Lord, how majestic is your name in all the earth!” (Psalm 8:3-9)
The above Bible verses beautifully reflect the awe and wonder of creation, acknowledging the majesty of the heavens and the intricate design of the universe as evidence of the Creator. In these verses, the psalmist marvels at the moon, stars, and the vast expanse of the sky, which God has set in place, recognizing the deliberate and purposeful act of creation. Creationism draws upon this sense of wonder, asserting that the complexity and order seen in nature are not products of random chance but of intentional design by the divine Creator. The psalmist’s reflection on the smallness of humanity in comparison to the grandeur of the cosmos highlights the belief that, despite the vastness of the universe, God has chosen to crown us with glory and honor, giving us dominion over the works of His hands. This profound relationship between God and humanity points to His deep love for us and His desire for us to live in fellowship with Him.
In this chapter, I’d like to introduce the gospel, which reveals how God’s love and desire for fellowship with us are fulfilled through Jesus Christ, offering us the opportunity to be reconciled with Him and to live in the fullness of His grace. For those who still struggle to believe in the existence of God as revealed through the universe and all creation, I would also like to present Pascal's Wager.
Blaise Pascal was a 17th-century French philosopher, mathematician, physicist, and writer renowned for his philosophical reflections on human nature and faith, particularly in his work "Pensées." He presented a philosophical argument about the existence of God called Pascal’s Wager. Pascal argues that it is a rational decision to live as though God exists because if God does exist, the believer gains eternal happiness, while if God does not exist, the loss is negligible. Conversely, if one lives as if God does not exist and is wrong, the potential loss is immense, including eternal suffering, while the gain if correct is minimal. Hence, Pascal concludes that believing in God is the safer and more beneficial "wager."
|                       | God exists               | God does not exist |
| Believe in God        | Eternal joy (heaven)     | Nothing happens    |
| Do not believe in God | Eternal suffering (hell) | Nothing happens    |
Table 4.1. Pascal’s Wager
So far, we have had an extensive discussion about creation and evolution, acknowledging the existence of God. If you recognize this truth, then Pascal’s Wager presents two clear choices: eternal joy (heaven) or eternal suffering (hell). Everyone desires to choose the first option, and no one wants to choose the second. At this stage, you may doubt the existence of heaven, but heaven truly exists. In 2 Corinthians, the Apostle Paul shares a profound and mysterious experience that provides a glimpse into the existence of heaven. He writes:
"I know a man in Christ who fourteen years ago was caught up to the third heaven. Whether it was in the body or out of the body I do not know—God knows. And I know that this man—whether in the body or apart from the body I do not know, but God knows—was caught up to paradise and heard inexpressible things, things that no one is permitted to tell." (2 Corinthians 12:2-4)
Paul's account suggests that heaven, or the "third heaven," is a realm of indescribable beauty and divine presence, distinct from our Earthly experience. This "third heaven" is considered the highest part of heaven, a place of ultimate spiritual reality and communion with God. The "inexpressible things" Paul heard there indicate that the experiences and truths of heaven are beyond human comprehension and language.
This passage reassures believers of heaven's reality and its profound, transcendent nature, offering hope and a promise of the divine mysteries that await beyond our Earthly existence. Paul's vision serves as a powerful testament to the existence of a heavenly paradise, a place prepared by God for those who love Him.
Heaven is open to anyone who believes in Jesus Christ. Jesus Christ came to Earth to save humanity from sin. Jesus is a historical figure; our calendar is divided into B.C. (Before Christ) and A.D. (Anno Domini, Latin for "in the year of our LORD"). As written in the four Gospel books, Jesus performed numerous miracles during His ministry, demonstrating His divine power and compassion. He healed the sick, such as curing a leper (Matthew 8:1-4) and restoring sight to the blind (John 9:1-7). He also performed nature miracles, including calming a storm (Mark 4:35-41) and walking on water (Matthew 14:22-33). Additionally, Jesus raised the dead, most notably Lazarus (John 11:1-44), and multiplied loaves and fishes to feed thousands (Matthew 14:13-21). These miracles affirmed His identity as the Son of God and brought hope and faith to many.
If you want to believe in Jesus and seek assurance of going to heaven, you can follow these steps based on the core principles of the Christian faith:
Recognize that you are a sinner in need of God's forgiveness. Sin includes blasphemy, pride, greed, lust, wrath, idolatry, adultery, theft, lying, deceit, hatred, gambling, drunkenness, drug abuse, and more; no one is exempt from it. This sin has broken our fellowship with God, creating a divide between us and Him. The Bible says,
"For all have sinned and fall short of the glory of God," (Romans 3:23).
Have faith that Jesus Christ is the Son of God who died for your sins and rose again.
"For God so loved the world that he gave his one and only Son, that whoever believes in him shall not perish but have eternal life." (John 3:16)
Confess your sins to God and turn away from them.
"If we confess our sins, he is faithful and just and will forgive us our sins and purify us from all unrighteousness" (1 John 1:9).
Invite Jesus into your life to be your Savior and LORD. This means trusting Him for your salvation and committing yourself to follow Him.
"Yet to all who did receive him, to those who believed in his name, he gave the right to become children of God" (John 1:12).
Here is a simple prayer you can say to express your faith and commitment to Jesus:
“I come before You, acknowledging my sins and need for Your grace. I believe Jesus died for my sins and rose again to give me new life. I accept Him as my LORD and Savior, surrendering my heart and life to You. Please forgive me, cleanse me, and guide me by Your Spirit. Help me to live faithfully, walking in Your love and purpose. Thank You for Your mercy and salvation. In Jesus’ name, Amen.”
After accepting Jesus, it's important to grow in your new faith. Read the Bible regularly, pray, and find a local church where you can be part of a community of believers who will support and encourage you.
Show your faith through your actions by loving others, sharing your faith, and living according to the teachings of Jesus.
"By this everyone will know that you are my disciples, if you love one another" (John 13:35).
Believing in Jesus and committing your life to Him is the foundation of Christian faith and the path to eternal life in heaven.
“Believe in the Lord Jesus, and you will be saved—you and your household!" (Acts 16:31)
Acknowledgements
I would like to express my sincere gratitude to Rev. Hwan-Chull Park of the Bridge Church, who carefully read through the entire draft and made meticulous revisions and necessary additions.
I am also deeply thankful to Rev. Yong-Cheol Kim, Rev. Jong-Kug Kim, Missionary Kyoung Kim, and Mrs. Hyun-Ah Kim for inspiring the publication of this book through many conversations about the Bible and astronomy.
Additionally, I extend my heartfelt thanks to Dr. and Rev. Jun-Sub Im of BLOO-gene Korean Church in Charlottesville, Dr. Kyoung-Joo Choi of Arcturus Therapeutics, and Dr. Chi-Hoon Park of Korea Research Institute of Chemical Technology for reading the manuscript and providing valuable feedback.
Special thanks go to my sons, Samuel and Daniel, for their assistance with image work.
In the late 19th and early 20th centuries, approximately 150 to 200 American missionaries arrived in Korea, laying the foundation for Christian evangelism, education, and medical missions. Their efforts played a pivotal role in spreading the gospel throughout the country and ultimately impacted my life as well. By the grace of Jesus, I received salvation and became a member of the family of faith. I would like to take this opportunity to express my heartfelt gratitude for their dedication and service.
All glory to God!
Image Credit
1. The Creation of the Universe
Fig. 1.1: NASA/JPL, Fig. 1.2: Hubble Heritage Team, Fig. 1.3: R. Hurt/JPL-Caltech/NASA, Fig. 1.4: Hubble/NASA/ESA, Fig. 1.5: Wikipedia/R. Powell, Fig. 1.6: Wikimedia/D. Leinweber, Fig. 1.7: NASA/CXC/M. Weiss(left), NASA/D. Berry (right), Fig. 1.8: Stellarium, Fig. 1.9: Physics Forums, Fig. 1.10: NASA/JPL-Caltech (left), A. Sarangi, 2018, SSR, 214, 63 (right), Fig. 1.11: Wikimedia/ALMA (ESO/NAOJ/NRAO) (left), T. Müller (HdA/MPIA)/G. Perotti (The MINDS collaboration)/M. Benisty (right), Fig. 1.12: TASA Graphic Arts, Inc., Fig. 1.14: Jon Therkildsen, Fig. 1.15: www.neot-kedumim.org.il
2. God's Masterpiece, the Earth
Fig. 2.1: R. Narasimha, Fig. 2.3: NASA, Fig. 2.4: NASA/Goddard/Aaron Kaase, Fig. 2.6: Wikimedia, Fig. 2.7: Linda Martel, Fig. 2.8: Wikimedia, Fig. 2.9: NASA/ESA/H. Weaver & E. Smith (left), NASA/HST Comet Team (right), Fig. 2.10: Wikimedia/M. Bitton, Fig. 2.11: Wikimedia/John Garrett, Fig. 2.12: UK Foreign and Commonwealth Office, Fig. 2.13: Wikipedia, Fig. 2.16: Wikipedia/G. Taylor, Fig. 2.17: NASA/Caltech
3. Creation or Evolution?
Fig. 3.1: Wikipedia/Yassine Mrabet, Fig. 3.2: OpenEd/Christine Miller, Fig. 3.3: Wikipedia/LadyofHats, Fig. 3.4: Wikipedia/Messer Woland & Szczepan (left), Wikipedia/LadyofHats (right), Fig. 3.5: J.E. Duncan & S.B. Goldstein, Fig. 3.6: Wikipedia/Fiona 126, Fig. 3.7: NASA, Fig. 3.8: R. Cui, Fig. 3.9: Wikipedia/Ansgar Walk, Fig. 3.10: The Whisker Chronicles, Fig. 3.11: Encyclopaedia Britannica, Inc., Fig. 3.12: Wikipedia, Fig. 3.13: Wikipedia/LadyofHats, Fig. 3.14: Wikipedia/J.J. Corneveaux, Fig. 3.15: Smithsonian Institution, Fig. 3.17: NRAO/AUI/NSF (left), Wikipedia/Colby Gutierrez-Kraybill (right), Fig. 3.18: Wikipedia/MikeRun, Fig. 3.20 - Fig. 3.23: Shueisha, Inc./Obara Takuya, Fig. 3.24: Wikipedia/Pinakpani, Fig. 3.25: Wikipedia/Dicklyon, Fig. 3.26: Wikipedia/Stannered (1st img), Dicklyon (2nd img), Morn the Gom (3rd img), Eequor (4th img), Fig. 3.27: M. Kitazawa/J. Plant Res., Fig. 3.28: S.R. Rahaman, Fig. 3.30: Jill Britton (pineapple), Fig. 3.32: Wikipedia/Farry (left), Wikimedia/Ivar Leidus (right).
References
1. The Creation of the Universe
제자원 (2002), Oxford Bible Encyclopedia, Bible Textbook Co., Genesis Chap. 1-11.
Did another universe exist before the Big Bang? 우주먼지의 현자타임즈, 2/24/2024, https://www.youtube.com/watch?v=RckLkaVzFe0
A Big Ring on The Sky: AAS 243rd Press conference. Alexia M. Lopez, 1/11/2024, https://www.youtube.com/watch?v=fwRJGaIcX6A
Bogdan, A., et al. (2024), “Evidence for heavy-seed origin of early supermassive black holes from a z ≈ 10 X-ray quasar”, Nature Astronomy, 8, 126.
Bonanno, A., & Fröhlich, H.-E. (2015), “A Bayesian estimation of the helioseismic solar age”, Astronomy & Astrophysics, 580, A130.
Karim, M. T., & Mamajek, E. E. (2017), “Revised geometric estimates of the North Galactic Pole and the sun's height above the Galactic mid-plane”, MNRAS, 465, 472.
Lopez, A. M., et al. (2022), “Giant Arc on the sky”, MNRAS, 516, 1557.
Lopez, A. M., Clowes, R. G., & Williger, G. M. (2024), “A Big Ring on the Sky”, JCAP, 07, 55.
Lyra, W., et al. (2023), “An Analytical Theory for the Growth from Planetesimals to Planets by Polydisperse Pebble Accretion”, The Astrophysical Journal, 946, 60.
Penrose, R. (2016), The Emperor’s New Mind, Oxford University Press, Oxford, United Kingdom.
Perotti, G., et al. (2023), “Water in the terrestrial planet-forming zone of the PDS 70 disk”, Nature, 620, 516.
Sandor, Zs., et al. (2024), “Planetesimal and planet formation in transient dust traps”, Astronomy & Astrophysics, in press.
Schiller, M., et al. (2020), “Iron isotope evidence for very rapid accretion and differentiation of the proto-earth”, Science Advances, 6, 7.
Tonelli, G. (2019), Genesis: The Story of How Everything Began, Farrar, Straus and Giroux, New York, pp. 19-44.
Tryon, E. P. (1973), “Is the Universe a Vacuum Fluctuation?”, Nature, 246, 396.
Vorobyov, E. I., et al. (2024), “Dust growth and pebble formation in the initial stages of protoplanetary disk evolution”, Astronomy & Astrophysics, 683, A202.
Yi, S., et al. (2001), “Toward Better Age Estimates for Stellar Populations: The Y2 Isochrones for Solar Mixture”, The Astrophysical Journal Supplement Series, 136, 417.
2. God's Masterpiece, the Earth
Comins, N. F. (1993), What If the Moon Didn't Exist? HarperCollins Publishers Inc., New York, NY.
Gonzalez, G. & Richards, J. W. (2004), The Privileged Planet: How Our Place in the Cosmos Is Designed for Discovery, Regnery Publishing, Inc.
Lineweaver, C. H., et al. (2004), “The Galactic Habitable Zone and the Age Distribution of Complex Life in the Milky Way”, Science, 303 (5654), 59.
Lüthi, D. et al. (2008), “High-resolution carbon dioxide concentration record 650,000 - 800,000 years before present”, Nature, 453, 379.
Narasimha, R., et al. (2023), “Making Habitable Worlds: Planets Versus Megastructures”, arXiv:2309.06562.
OpenAI. (2024), ChatGPT (4o) [Large language model], https://chatgpt.com
Ward, Peter D. & Brownlee, Donald (2000), Rare Earth: Why Complex Life is Uncommon in the Universe, Copernicus Books (Springer Verlag).
3. Creation or Evolution?
Abelson, P. H. (1966), “Chemical Events on the Primitive Earth”, Proc Nat Acad Sci, 55, 1365.
Behe, M. J. (2006), Darwin's Black Box: The Biochemical Challenge to Evolution, Free Press.
Behe, M. J. (2020), Darwin Devolves: The New Science About DNA That Challenges Evolution, HarperOne.
Bernhardt, H. S. (2012), “The RNA world hypothesis: the worst theory of the early evolution of life (except for all the others)”, Biology Direct, 7, Article number: 23.
Chyba, C. F., & Sagan, C. (1992), “Endogenous production, exogenous delivery and impact-shock synthesis of organic molecules: An inventory for the origins of life”. Nature, 355, 125.
Cui, R., “The transcription network in skin tanning: from p53 to microphthalmia”, https://www.abcam.com/index.html?pageconfig=resource&rid=11180&pid=10026
Dembski, W. A., & Ewert, W. (2023), The Design Inference: Eliminating Chance through Small Probabilities, Discovery Institute.
Danielson, M. (2020), “Simultaneous Determination of L- and D-Amino Acids in Proteins”, Foods, 9 (3), 309.
Fabre, J.-H. (2015), The Mason-Bees (Perfect Library), CreateSpace Independent Publishing Platform.
Higgins, M. (2014), “Bear evolution 101”, The Whisker Chronicles, https://thewhiskerchronicles.com/2014/01/03/bear-evolution-101/
Kasting, J. F. (1993), “Earth's Early Atmosphere”, Science, 259 (5097), 920.
Maslin, M. (2016), “Forty years of linking orbits to ice ages”, Nature, 540 (7632), 208.
Miller, S. L. (1953), “A Production of Amino Acids under Possible Primitive Earth Conditions”, Science, 117, 528.
Mumma, M. M., et al. (1996), “Detection of Abundant Ethane and Methane, Along with Carbon Monoxide and Water, in Comet C/1996 B2 Hyakutake: Evidence for Interstellar Origin”, Science, 272 (5266), 1310.
OpenAI. (2024), ChatGPT (4o) [Large language model], https://chatgpt.com
Park, Chi Hoon (2024), “Stop codon points to GOD”, Proceedings of the 20th Anniversary KRAID Symposium
Pinto, J. P., Gladstone, G. R., & Yung, Y. L. (1980), “Photochemical Production of Formaldehyde in Earth’s Primitive Atmosphere”, Science, 210, 183.
Pinto, O. H., et al. (2022), “A Survey of CO, CO2, and H2O in Comets and Centaurs”, Planet. Sci. J., 3, 247.
Russo, D., et al. (2016), “Emerging trends and a comet taxonomy based on the volatile chemistry measured in thirty comets with high resolution infrared spectroscopy between 1997 and 2013”, Icarus, 278, 301.
Sanjuán, R., Moya, A., & Elena, S. F. (2004), “The distribution of fitness effects caused by single-nucleotide substitutions in an RNA virus”, Proc Natl Acad Sci, 101(22), 8396.
Trail, D., et al. (2011), “The oxidation state of Hadean magmas and implications for early Earth’s atmosphere”, Nature, 480, 79.
Urey, H. C. (1952), “On the Early Chemical History of the Earth and the Origin of Life”, Proc Natl Acad Sci, 38(4), 351.
Wikipedia, Mutation (Distribution of fitness effects).
Wikipedia, Visual phototransduction.
Yang, P.-K. (2016), “How does Planck’s constant influence the macroscopic world?”, Eur. J. Phys., 37, 055406.
Zahnle, K. J. (1986), “Photochemistry of methane and the formation of hydrocyanic acid (HCN) in the Earth’s early atmosphere”, J. Geophys Res, 91, 2819.
About the Author
Dr. Dongchan Kim earned his B.S. in Astronomy from Yonsei University in Seoul, Korea, and his Ph.D. in Astronomy from the University of Hawaii. After completing his doctoral studies, he pursued astronomical research at several institutions, including NASA's Jet Propulsion Laboratory/Caltech, Seoul National University, and the University of Virginia.
Dr. Kim's research focuses on luminous infrared galaxies (LIRGs), ultraluminous infrared galaxies (ULIRGs), quasars, and recoiling supermassive black holes.
He is affiliated with the National Radio Astronomy Observatory in Charlottesville, Virginia, USA.
The Spanish version of this book was published under the title [GÉNESIS DIVINA: Explorando la Creación a través de la Astronomía y la Biología].